Dataset Viewer
Columns and types:
query_halid (string)
query (string)
query_year (string)
query_domain (list)
query_affiliations (list)
query_authorids (list)
pos_halid (string)
positive (string)
pos_year (string)
pos_domain (list)
pos_affiliations (list)
pos_authorids (list)
neg_halids (string)
negative (string)
neg_year (string)
neg_domain (list)
neg_affiliations (list)
neg_authorids (list)
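The flattened column list above describes one row schema: a query passage paired with one positive and one negative passage, each with parallel metadata. A minimal sketch of that schema as a Python record follows; the field names mirror the columns, but the example values are illustrative placeholders, not actual dataset rows.

```python
# Sketch of the triplet row schema listed above. Field names mirror
# the column list; the values below are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TripletRow:
    query_halid: str = ""
    query: str = ""
    query_year: str = ""
    query_domain: List[str] = field(default_factory=list)
    query_affiliations: List[str] = field(default_factory=list)
    query_authorids: List[str] = field(default_factory=list)
    pos_halid: str = ""
    positive: str = ""
    pos_year: str = ""
    pos_domain: List[str] = field(default_factory=list)
    pos_affiliations: List[str] = field(default_factory=list)
    pos_authorids: List[str] = field(default_factory=list)
    neg_halids: str = ""
    negative: str = ""
    neg_year: str = ""
    neg_domain: List[str] = field(default_factory=list)
    neg_affiliations: List[str] = field(default_factory=list)
    neg_authorids: List[str] = field(default_factory=list)

# Build one row with a few of the metadata fields filled in.
row = TripletRow(
    query_halid="00262933",
    query_year="2005",
    query_domain=["shs"],
    pos_halid="00262932",
    neg_halids="00291750",
)
print(row.query_halid, row.pos_halid, row.neg_halids)
```

This kind of record is what a Parquet export of the table would deserialize into, one object per row.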
00262933
But the statistical work was never viewed by the transporter as a possible systematic tool to promote. It was a punctual way to better fit the fraud management system to its environment. Control encounters. By observing control encounters I tried to draw some usual scenarios of control and reporting. They enabled me to propose a flexible model of reporting as a sequenced interaction and made clear its key features. Moreover, I underlined some dimensions of control and reporting, especially violence and conflict. Once more, it appeared that this kind of knowledge was of interest for the management, and not so much for controllers, who had a tacit understanding of what I explained. The use of this knowledge was actually very restricted, all the more so as the dimensions of conflict and violence were denied. Evaders' interviews. The interviews were very productive, because they brought much new information. They made connections with the point of view of evaders, which made it possible to address topics such as the motivations or the feelings (fear, anger, shame…) of evaders. These topics were not accessible through the controllers' accounts or the database entries.
2005
[ "shs" ]
[ "1172" ]
[ "3161" ]
00262932
20 We can note that Michael Lipsky's work on street-level bureaucracy precisely accounts for this kind of decision (Lipsky, 1983), which adapts bureaucratic rules. (that did not accept the payment rule) to accept the principle of a fine (whatever you call it: compensation or punishment…). However, controllers are supposed to report. And even if they can sometimes decide not to, they have to account for their work. And the common traditional indicator for control work is the number of fines. 21 This aspect of controllers' work is very important because it explains the instrumental dimension of the relationship established by the control encounter. It entails another look at the interaction (a strategic one) 22 and raises the question of the tools at the controllers' disposal to manage the relationship. The coercive means to manage the interaction. Control is often seen as a repressive occupation. Such a view is at least partly true. The mission of reporting is, in any case, rather repressive… The question then is: to achieve a repressive aim, do controllers use repressive tools? A first answer is yes.
2005
[ "shs" ]
[ "1172" ]
[ "3161" ]
00291750
They need a visa to leave and to enter Lebanon. Many do not have passports - they have only been issued a travel document by the Lebanese authorities - and as a result many other countries refuse to give them visas, or even to let them transit their territories. Abu Taraq, who migrated to Germany with his family in 1994, explains his motivations: In Lebanon, we do not have any rights, to work, to education, nor to health. What is the future for my children? The Oslo Agreements have forgotten us... At least in Europe they respect us as human beings; we have the same rights as everybody else. My children can get an education, they can work, build a future. What is striking about all of the above excerpts is that they are from interviews with men. Rather than reflecting any methodological bias, this is a reflection of the fact that migration tends to be dominated by young males. Indeed their migration has created a significant sex imbalance in the refugee camps. Young women are finding it harder and harder to marry, and this can make their status and security even more precarious.
2002
[ "shs" ]
[ "199917" ]
[ "2673" ]
04159574
How does one combine a high-end image with such concerns? How is it possible to fulfill all the wide-ranging commitments with respect to seasonality, awareness of food waste, and so on? Some chefs offer porcelain crockery, which "they can collect from the customer's home and return to the restaurant premises" (J. Guèze), while others are thinking about crockery deposit systems to avoid the use of plastic packaging and cardboard boxes. This thought process coincides with the high-end image that they want to convey, and using innovative packaging is a step in that direction. "We are using packaging with a strong CSR [corporate social responsibility] connotation-recyclable, and, above all, very design-centric, with an origami inspiration. It can be transported easily, and it's very clean. It is in line with the positioning and vision of the restaurant" (F. Gagnaire). In addition, with home delivery, the delivery person does not know the norms of haute cuisine, the chef's philosophy, or the restaurant's brand image in general, especially if the chef appoints an external service provider. The delivery causes chefs to lose control over how they want dishes to be presented, both verbally and nonverbally. These skills are instilled in restaurant frontline staff but are not passed on to the service providers, who now play a major role: "It's a bit as if a waiter brings you a gourmet plate in a t-shirt and jeans; it kinda kills the dream" (J. Guèze).
2022
[ "shs" ]
[ "1043181", "458547", "33804", "88676", "347722", "531465", "1043181", "1042703" ]
[ "1145713", "1231313" ]
03710025
As with Dahl et al. (1999), the data collection was carried out among engineering students. The questionnaire was administered through the Qualtrics platform - dedicated to the creation and distribution of online questionnaires - and completed at the end of the creativity process based on design thinking. According to [START_REF] Calder | Designing research for application[END_REF], a sample of students is appropriate for testing a theory, although it has limitations in terms of external validity and generalizing the results. In total, the sample comprised 177 respondents (44 teams), 79.1% male, with an average age of 20.3 (SD = 0.74). The respondents were taking a bachelor's degree, specializing in various fields including materials science, biotechnology, electronics, and signal processing. Descriptive statistics for the main sample are presented in Appendix 3. Field of study. The teams were asked to respond to the challenge of the company Kicklox, worded as follows: "How should engineers be used around the Kicklox platform, the Uber of engineering?" To facilitate the response to the challenge, several sub-objectives were linked to the problem (i.e. create a strong link with these engineers; ensure the engineer's full investment, particularly in the quality of the content; create a secure environment for the customer), together with constraints (i.e. being available 2 days per week to develop the solution; providing the lowest possible acquisition cost per user, or even offering a cash-redeemable solution; taking account of technical feasibility in particular; clearly defining the user/customer being addressed). One of Kicklox's cofounders gave a presentation of the company and the challenge.
2019
[ "shs" ]
[ "89889", "88676", "1041636", "458547", "1041636", "1042703", "1041636", "458547" ]
[ "1145713", "1231313" ]
01961139
The initial correspondence, reflecting Proto-Kiranti (PK) *r, poses the fewest problems, being well known since the work of Robert Shafer (1953: 148-149; see also [START_REF] Driem | The rise and fall of the phoneme /r/ in eastern Kiranti: sound change in Tibeto-Burman[END_REF]. As for the rime, velar finals are generally preserved in Limbu, Yamphu and Bantawa, but often lost elsewhere, with backing and rounding of the vowel (Winter 1987: 731). In Central and West Kiranti, the vocalic systems have often been multiplied under the influence of finals, with the creation of front rounded or centralized vowels in Bahing and in Khaling [START_REF] Michailovsky | Notes on the Kiranti Verb[END_REF]. Fronting is generally inhibited by velar finals, even where these are subsequently lost. Thus, in the 10-vowel system of Bahing, for example, the rime *ak generally has the reflex Ák, or, in contexts where the final is lost, ÁË. This is particularly clear in conjugated forms of the verb. Khaling aa and Kulung ÁË are also characteristic of syllables with final k. The examples below are intended to illustrate the correspondences of PK *ak. Although there is some uncertainty (between a/Á/o), *rak is quite well supported as the first syllable of our ethnonym. The initial of the second syllable shows a regular correspondence between South Kiranti d, Limbu th, and Yamphu zero; the pertinent feature in Limbu is the aspiration, since Limbu does not have an opposition of voicing. However, the expected correspondence in Central and West Kiranti is t, not d [START_REF] Michailovsky | Notes on the Kiranti Verb[END_REF]Winter 1987: 730).
2003
[ "shs" ]
[ "406905" ]
[ "957269" ]
03149077
The PCR was positive, and the CT scan demonstrated bilateral crazy paving. 48 hours after admission, he complained of chest pain. A 12-lead ECG demonstrated inverted T waves in the inferior leads. Hs-TnI was at 355 ng/L, then 570 ng/L three hours later (N < 17 ng/L). Transthoracic echocardiography showed a left ventricular ejection fraction of 60% without wall motion abnormalities, no diastolic dysfunction and a mild mitral regurgitation. A coronary angiogram performed via the radial approach demonstrated a chronic total occlusion of the posterior descending artery with epicardial collaterals from the left anterior descending artery (Rentrop 3, panel A). In the mid right coronary artery, a spontaneous dissecting coronary hematoma was observed with an intimal tear (panels B and C). Flow grade was Thrombolysis In Myocardial Infarction (TIMI) 3 in the posterolateral artery. Optical coherence tomography (OCT) was performed in the right coronary artery and confirmed the spontaneous dissecting coronary hematoma with an intimal rupture (panels D, E and F). Conservative management was decided.
2020
[ "sdv" ]
[ "194495", "139739" ]
[ "744032", "779118", "779119" ]
03149070
Two periods were defined: before the lockdown period (weeks 2-10) and during the lockdown period (weeks 11-14). Figures were plotted using GraphPad Prism 7.04 software. Results. The weekly numbers of myocardial infarctions were roughly comparable before the lockdown period in 2020 and in 2018-2019. After lockdown, they dropped to a much lower level in 2020 versus 2018-2019 (Fig. 1A). Table 1 indicates that the cumulative incidence of myocardial infarctions during weeks 2-10 in 2020 differed from that in 2018-2019 by less than 10%, but markedly decreased by 31.0% during lockdown. However, the numbers of births remained stable over the study periods, without a substantial difference between 2020 and 2018-2019 (Fig. 1B). Lockdown had almost no effect on the numbers of births (Table 1). Discussion. The upheaval induced by COVID-19 has many non-viral consequences, and our multicentre study is the first to tackle the issue of its effect on myocardial infarctions. The present study strongly suggests a decrease in the number of admissions for myocardial infarction during lockdown. Although we do not have a long follow-up to determine whether this trend will continue, this is an important warning for the medical community and authorities.
2020
[ "sdv" ]
[ "194495", "139739" ]
[ "779119", "744032", "779118" ]
04283890
Figure 1. Experimental tasks and model framework. (A) In the saccade trials, subjects executed a saccade to a 20° rightward target. In the pre-exposure phase, the target was extinguished at saccade onset. In the exposure and post-exposure phases, the target either stepped 6° inward (inward condition), 6° outward (outward condition) or stayed at its initial position (no step condition). In the pre-saccadic localization trials, subjects localized a 12 ms flash with a mouse cursor while holding gaze at the fixation point. In the post-saccadic localization trials, subjects performed a saccade to a 20° rightward target and then localized the 12 ms pre-saccadic flash with a mouse cursor. The mouse cursor was a blue line pointer. The yellow circle illustrates gaze location but was not present in the stimulus display. (B) The target with the physical distance P1 is represented at the location V1 on the visuospatial map. An inverse model maps V1 onto a motor command M.
2023
[ "sdv" ]
[ "354310", "354310" ]
[ "1284175", "810985", "735972", "1284180" ]
03278345
34 Yang and Deng reported a one-pot enantioselective sequence starting with an organocatalysed Michael addition of cyclobutanone 31 onto 2-nitrovinylindole 32 (Scheme 10). 35 This reaction led to intermediates 33 and epi-33 which, upon the action of the Lewis acid boron trifluoride etherate, underwent a cyclisation to the five-membered ring. The latter could spontaneously undergo a fragmentation and deliver the corresponding medium-size indolic systems in yields up to 84%, as a mixture of two diastereomers, cis- and trans-34 respectively, and with excellent enantioselectivities. The stereoselectivities have been rationalised: the diastereoselectivity could originate from the protonation step of the amide enolate and the enantioselectivity from the Michael addition step catalysed by a chiral bifunctional aminocatalyst. The authors demonstrated not only the broad substrate scope and the scalability of the process but also the high synthetic potential of the obtained embedded scaffolds. Another approach to cyclohepta[b]indoles and other useful seven- and even eight-membered carbocyclic platforms relies on an intramolecular alkyne-de Mayo reaction (Scheme 11). 36 A photochemically induced cascade process starting from 35 in a polar protic solvent provided the desired adduct 36 in excellent yields. The mechanism to the seven-membered ring presumably proceeds through the intermediate tricyclic cyclobutene cis-S0-37 via a retro-Mannich reaction. As already stated, the formation of challenging larger ring systems can benefit from the strain release of small-size cyclic compounds. In 2016, Li and co-workers showed that the
Scheme 10. Synthesis of medium-size indolic systems (2016).
2021
[ "chim" ]
[ "186403", "186403", "186403", "186403" ]
[ "1268105" ]
02395349
While metal-catalyzed hydroboration using other B-H sources (catechol boranes [START_REF] Brown | Modern Rhodium-Catalyzed Organic Reactions[END_REF] and tertiary amine borane complexes 16 ) has been mechanistically described by kinetic or DFT studies, the metal activation of the B-H bond of NHC-boranes has not been investigated yet. For a complete understanding of this activation process, complementary experiments 9 and a theoretical treatment are presented in this paper. We discuss a pertinent mechanism of NHC-borane intramolecular hydroboration promoted by a cationic rhodium species (Scheme 1c). The crucial role of the NHC-borane/alkene substrate as a bidentate ligand and the origin of enantioselectivity are highlighted. Computational details. This work focuses on the mechanism of intramolecular enantioselective hydroboration. In general, enantioselectivity can be due to (small) differences between diastereomeric transition states, notably when a specific mechanistic step (corresponding to a specific transition state) is clearly relevant for the kinetics of the reaction (Curtin-Hammett conditions). In other cases, especially with large catalytic systems and for complex pathways, several transition states may become kinetically relevant. 17 For both scenarios, accurate energy calculations are desirable. Furthermore, when the substrate is achiral and the reaction is enantioselective, the chirality transfer must be due to the interaction between the substrate and substituents of the bulky chiral ligand. Non-covalent interactions may become crucial for the energetics.
2019
[ "chim" ]
[ "186403", "186403", "186403" ]
[ "1268105" ]
04235958
1 Thus, it can be used to obtain structural information or to study reaction mechanisms. 2 In total synthesis, deuterium-labelled compounds have been used to modify reaction selectivity. [START_REF] Atzrodt | Deuterium- and Tritium-Labelled Compounds: Applications in the Life Sciences[END_REF] H/D substitution also plays a major role in the development of new drugs, serving, for instance, to increase their metabolic stability. 4 Interestingly, H/D substitution has also been reported to transform an achiral molecule into a chiral one. Chiral isotopologues have attracted the attention of organic chemists and spectroscopists because both their synthesis and the detection of their chiroptical properties are challenging. Several interesting examples have been reported in the literature (compounds A-F in SCHEME 1) and different chiroptical tools have been used to characterize them. 5,[START_REF] Miwa | Asymmetric Synthesis of Isotopic Atropisomers Based on Ortho-CH3/CD3 Discrimination and Their Structural Properties[END_REF] It should be noted that the vast majority of chiral isotopologues studied concern molecules with asymmetric carbons (compounds B, C, D and F). Conversely, chiral isotopologues with an inherently chiral structure are much less common and only a few examples have been reported so far (see for example compounds A and E). SCHEME 1 Examples of chiral isotopologues reported in the literature. As a new example, we report the synthesis of a deuterium-labelled syn-cryptophane-B (SCHEME 2). Unlike the overwhelming majority of cryptophanes that have been prepared, syn-cryptophane-B (syn-1) is achiral with C3h symmetry.
2023
[ "chim" ]
[ "663", "24493", "341936", "186403", "186403", "194495", "24493" ]
[ "756339" ]
03770903
A prototype lowpass filter is then obtained [START_REF] Parks | Digital Filter Design[END_REF]. As these approximations always satisfy the LC-ladder realisability conditions, a prototype LC-ladder filter is synthesised. Using element and frequency transformations, other standard filters may be obtained [START_REF] Baher | Synthesis of Electrical Networks[END_REF]. However, the resulting impedances of the transformed elements may not represent components of practical interest. In [START_REF] Rossignol | Filter Design under Magnitude Constraints is a Finite Dimensional Convex Optimization Problem[END_REF], it is shown how Problem 1 can be formulated as an LMI optimisation problem. By adding the realisability conditions, one can formulate the LC-ladder filter design problem as an LMI optimisation problem. In the next section, it is shown how this approach can be extended to ladder filters with other lossless-passive elements. To achieve this, a generalised variable T(s) is introduced. The resulting design problem remains an LMI optimisation problem. III.
2018
[ "spi" ]
[ "527407", "408749", "408749", "408749", "527407" ]
[ "171579", "1228", "1271", "173133" ]
01984417
However, those for LC-ladder filters are rather simple. It appears that sufficient conditions are that s21(s) is T-bounded-real, with T(s) = s, and is a stable all-pole function, i.e. s21(s) = 1/g(s) with g a Hurwitz polynomial [START_REF] Baher | Synthesis of Electrical Networks[END_REF]. In the traditional approach, solutions of Problem 1 are calculated using the Butterworth or the Chebyshev approximations. A prototype lowpass filter is then obtained [START_REF] Parks | Digital Filter Design[END_REF]. As these approximations always satisfy the LC-ladder realisability conditions, a prototype LC-ladder filter is synthesised. Using element and frequency transformations, other standard filters may be obtained [START_REF] Baher | Synthesis of Electrical Networks[END_REF]. However, the resulting impedances of the transformed elements may not represent components of practical interest. In [START_REF] Rossignol | Filter Design under Magnitude Constraints is a Finite Dimensional Convex Optimization Problem[END_REF], it is shown how Problem 1 can be formulated as an LMI optimisation problem. By adding the realisability conditions, one can formulate the LC-ladder filter design problem as an LMI optimisation problem. In the next section, it is shown how this approach can be extended to ladder filters with other lossless-passive elements.
2018
[ "spi" ]
[ "408749", "527407", "408749", "408749", "527407" ]
[ "171579", "1228", "1271", "173133" ]
03190972
As the vibration is equal to 6 times the working frequency of the device, the solution to remove it was to work at higher frequencies. This would consequently increase the electrical impedance of the coils. To keep reasonable operational voltages, the coil connection of each phase was modified from the series connection of the original design to a parallel connection. Experimentally, the working frequency was set to 660 Hz, which made the aluminum plates resonate at 3960 Hz and provided a more comfortable manipulation. For the measurements, a force sensor was mounted on one of the aluminum plates and a digital scope simultaneously recorded the closed-loop current of the electronic drivers and the force sensor signal. A thrust-intensity empirical relation is shown in Fig. 6. Each point corresponds to the mean of ten measurements, and their standard deviation is included in Fig. 6. Detailed measurement values with percent relative standard deviations can be found in Table I. Around 2.2 A the electronic driver starts to limit its output current. This can be due to the electronic driver solution adopted for this proof of concept. The time constant of the interface was also measured empirically in Fig. 7.
2017
[ "spi" ]
[ "96164", "96164", "96164", "96164", "96164" ]
[ "739419", "745212", "745154" ]
01560854
Data analysis. We employed partial least squares (PLS) as our analysis approach and utilized the tool SmartPLS [START_REF] Ringle | Smart PLS 2.0 M3[END_REF]. PLS is a second-generation regression method that combines confirmatory factor analysis with linear regression, which makes it possible to run the measurement and structural models simultaneously. Table 2 shows item-wise averages and loadings for each construct in the model. For each construct, the assessment of convergent validity or internal consistency is also included through the composite reliability coefficient [START_REF] Fornell | Evaluating structural equation models with unobservable variables and measurement error[END_REF]. Convergent validity indicates the extent to which the items of a scale that are theoretically related are also related in reality. As we can see from Table 2, all items have significant path loadings greater than the 0.7 threshold recommended by Fornell and Larcker [START_REF] Fornell | Evaluating structural equation models with unobservable variables and measurement error[END_REF]. All the constructs have composite reliability values that exceed the threshold recommended by Nunnally [START_REF] Nunnally | Psychometric theory. 2nd Edition[END_REF]. Testing for discriminant validity involves checking whether the items measure the construct in question or other (related) constructs. Discriminant validity was verified with correlation analysis as recommended by Gefen and Straub [START_REF] Gefen | A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example[END_REF].
2011
[ "info" ]
[ "93061" ]
[ "1012511" ]
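The composite reliability coefficient used to assess convergent validity in the passage above has a standard closed form in terms of standardized item loadings. A short illustrative sketch follows; the loadings below are made-up example values, not figures from the study.

```python
# Composite reliability from standardized item loadings:
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each item's error variance is 1 - loading^2 for standardized items.

def composite_reliability(loadings):
    """Fornell-Larcker-style composite reliability for one construct."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Hypothetical loadings for a four-item construct (illustrative only).
loadings = [0.82, 0.78, 0.85, 0.74]
cr = composite_reliability(loadings)
print(round(cr, 3))  # -> 0.875, above the commonly cited 0.7 cutoff
```

With every loading above 0.7, the construct clears the threshold discussed in the excerpt; the computation shows why high, homogeneous loadings push CR well past 0.7.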
01560844
Moodle is an open source course management system and has become very popular among educators for creating dynamic online course websites for students. Moodle can be used to conduct online courses or to augment face-to-face courses (hybrid courses). This study was conducted in an internationally acknowledged, multidisciplinary scientific university in Finland. The university has seven faculties. The university has been using Moodle since 2007 as its platform for creating course pages online. Data was collected via a web-based survey from the students of the university who use Moodle in hybrid courses. A list of students' email addresses was collected from the Moodle support team in the university. A total of 1100 email invitations were sent to randomly selected students of the university who had been registered in Moodle as student users. One reminder was sent to increase the response rate after a gap of one week. The survey ran for approximately two weeks.
2011
[ "info" ]
[ "93061" ]
[ "1012511" ]
01405060
Type of construction work. These steps are similar to the ones defined by Group 2, but the main theme of Group 3's design was simplicity for the citizen. For instance, they focused on letting office clerks decide whether or not a submitted application should be categorized as a "permit application" or a "notification", so that the citizen would not need to decide. This contrasts with the solutions suggested by Groups 1 and 2, which suggested that the user (citizen) should decide. Summary. From the overview of the design process, we see that the redesign strategies varied between top-down and bottom-up approaches and that the three groups addressed a different number of usability problems. The main themes of the redesign proposals also differed between the three groups, but all applied a wizard approach of three to four steps. The wizard steps of Groups 2 and 3 were similar, while Group 1 selected an alternative order and also designed an extra step. All steps in the redesign solutions deviated from the order in the original PDF form. Problem Understanding. This subsection describes our findings on the developers' perception of the problems of the system. In the following we describe the numbers and categories of identified strengths and weaknesses, and the collective list of these as prioritized by the five participants. Categories of Strengths and Weaknesses.
2014
[ "info" ]
[ "300821", "300821", "473975", "300821", "300821" ]
[ "994709", "994710", "994711", "994712" ]
01646718
We say that a type connective is positive if its right introduction rule is non-invertible, and negative otherwise: (→, ×, 1) are negative, and (+, 0) are positive. It is easy to decide equivalence of the simply-typed λ-calculus with only connectives of one polarity: we previously remarked that it is easy to define canonical forms in the negative fragment STLC(→, ×, 1), but it is equally easy in the positive fragment STLC(+, 0). It is only when both polarities are mixed that things become difficult. A key result of Zeilberger (2009, Separation Theorem, 4.3.14) is that a focusing-based presentation of the simply-typed λ-calculus is canonical in the effectful setting where we assume that function calls may perform side-effects - at least using the specific reduction strategy studied in CBPV (Levy 1999). Two syntactically distinct effectful focused program expressions are observationally distinct - the canonicity proof relies on two distinct error effects, to distinguish evaluation order, and an integer counter to detect repeated evaluation. The fact that any λ-term can be given a focused form comes from a completeness theorem, the analogue of the completeness of focusing as a subset of logical proofs. However, this syntax is not canonical anymore if we consider the stronger equivalences of pure functional programming, where duplicated or discarded computations cannot be observed. Let us write P, Q for positive types, types that start with a positive head connective, and N, M for negative types, that start with a negative head connective. In a λ-term in focused form with types of both polarities - see a complete description in Scherer (2016, Chapter 10) - a non-invertible phase can be of two forms, which we shall now define.
It can start with a positive neutral (p : P), which is a sequence of non-invertible constructors of the form σi for positive types; they commit to a sequence of risky choices to build the value to return.
2017
[ "info" ]
[ "29479" ]
[ "170100" ]
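The polarity assignment in the passage above — a connective is negative when its right introduction rule is invertible (→, ×, 1), positive otherwise (+, 0) — can be sketched as a small classifier over head connectives. The string encoding of connectives is an illustrative assumption, not the paper's syntax.

```python
# Polarity of a simple-type connective, following the convention above:
# (→, ×, 1) are negative (invertible right introduction rules),
# (+, 0) are positive (non-invertible right introduction rules).

NEGATIVE = {"arrow", "prod", "unit"}   # →, ×, 1
POSITIVE = {"sum", "zero"}             # +, 0

def polarity(head):
    """Return the polarity of a type from its head connective."""
    if head in NEGATIVE:
        return "negative"
    if head in POSITIVE:
        return "positive"
    raise ValueError(f"unknown connective: {head}")

print(polarity("arrow"))  # -> negative
print(polarity("sum"))    # -> positive
```

A fragment restricted to one of these two sets is where canonical forms are easy to define; mixing the sets is exactly the case the excerpt flags as difficult.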
01646064
In programming terms, the fact that the right implication rule is invertible corresponds to an inversion principle on values: without loss of generality, one can consider that any value of type A → B is of the form λx. t. Any value of type A1 × A2 is of the form (t1, t2). This is strictly true for closed values in the empty context, but it is true modulo equivalence even in non-empty contexts, as is witnessed by the η-expansion principles. If a value (t : A → B) is not a λ-abstraction, we can consider the equivalent term λx. t x. But it is not the case that any value of type A + B is of the form σi t, as our example X + Y ≃ Y + X demonstrated. Inspired by focusing, we look back at our grammar of βη-normal forms: it is not about constructors and destructors, it is about term-formers that correspond to invertible rules and those that do not. To gracefully insert sums into this picture, the non-invertible σi should go into the neutrals, and case-splitting should be a value. [START_REF] Scherer | Which simple types have a unique inhabitant? [END_REF] introduce focusing in more detail, and present a grammar of focused normal forms which, lightly rephrased, is as follows: values t ::= λx.
2017
[ "info" ]
[ "29479", "56052" ]
[ "170100" ]
01366688
One should also consider the differences in level of control between a learner using the FORGE tools and a direct testbed user for which a FIRE facility was envisioned. This is especially relevant when considering troubleshooting possible software and hardware failures that are often unavoidable when using state-of-the-art research equipment and immature technologies. When a learner only has access via a web interface, a series of watchdog programs and actions should be defined to recover the experiment state in case there is a deviation from the expected experimentation path. Even when giving learners direct access to the experimentation machines, at least a series of recovery scripts or instructions should be provided, since absolute knowledge of the underlying system and its quirks cannot be expected. Another aspect that we recommend (apart from what is stated above and below) is to focus on a multi-platform approach and easy integration into existing eLearning platforms. Educators desire good integration with the platforms they are already using. By supporting the creation of widgets that use their FIRE facility, one supports the inclusion of FIRE functionality via the widget in different Learning Management Systems or other digital media (such as eBooks). For example, this has been applied by the inclusion of FORGE widgets into the Moodle-based legacy eLearning LMS of Universidad do Brasilia and by the coupling of the Central Authentication Service (CAS) mechanism for student accounts of Ghent University. This allowed the learners seamless access to the lab, while also maintaining both user authorization and authentication. All interactions of the learner with the widgets and underlying FIRE facility should be collected using Learning Analytics, from the initial reservation of resources to the actual interactions during the lab.
All learners should be uniquely identified so the full learning path can be analysed and where possible, technically and legally, the learner should be coupled to his/her real-life identity and university account if applicable.
2016
[ "info" ]
[ "389034", "389034", "118333", "302729", "302729", "300765", "466439", "466439" ]
[ "9674", "976595", "858104" ]
01594084
It appears that it would be very difficult to reach this objective for the Caribbean country producers. By contrast, the transitional tariff-rate quota regime could benefit West African countries, where production costs are lower and where some multinationals (Dole and Del Monte) now run large plantations. West African countries have welcomed the new EU banana import regime. However, the quota C level could limit their future exports to the EU. Furthermore, as their historical import rights are smaller than expected exports, licenses would have to be purchased to export additional bananas. There is no certainty that the tariff-only regime will enter into force on 1 January 2006. The setting of the appropriate tariff is likely to be a point of considerable discussion until the deadline. The banana industry in ACP countries, notably in Caribbean states, is clearly at a competitive disadvantage with respect to LA suppliers. An EU policy that combines a simple tariff on dollar banana imports with direct aid to preferred suppliers presents several advantages relative to a multiple tariff-rate quota regime with cross-subsidization of non-preferred suppliers through the allocation of import licenses within the preferred suppliers' quota. It reduces distortionary impacts and eliminates the quota rent problem.
2003
[ "shs" ]
[ "52709", "39083", "52709", "39083" ]
[ "735969" ]
02850915
TC may have the effect of reducing the quantity utilised of some inputs. Equally, TC may increase utilisation of other inputs. From a policy perspective, it may be interesting to know whether TC has a tendency to reduce labour use in agriculture. This information may be helpful to plan intersectoral shifts in the labour force into other sectors and to develop retraining programmes to ensure that the labour force is gainfully employed in other occupations. Based on Hicks' original definition and assuming a two-input, one-output linearly homogeneous technology, technical change is said to be neutral if it leaves unchanged the rate of substitution between input pairs. However, as noted by [START_REF] Blackorby | Extended Hicks Neutral Technical Change[END_REF], "to compare situations before and after technical change, something must be held constant. Exactly what is to be held constant has been the subject of some debate and constitutes the crux of the issue at hand". If factor endowments are held constant, technical change is measured along a ray where factor proportions remain the same. For agricultural technologies, at both the firm and the farm levels, it seems more useful to define neutrality holding factor price ratios constant [START_REF] Binswanger | The Measurement of Technical Change Biases with Many Factors of Production[END_REF]. The dual measure of technical change biases he proposes is: B_it = ∂log S_i / ∂t, with B_it > 0 if TC is input i using.
1993
[ "shs" ]
[ "37696", "37696", "305143" ]
[ "735969" ]
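Binswanger's bias measure from the passage above can be illustrated with a minimal numeric sketch. The cost shares, the ten-year interval, and the input names below are all synthetic, chosen only to show the sign convention (B_i > 0 means input-i-using technical change):

```python
import numpy as np

# Synthetic cost shares for two inputs at two dates, holding factor
# price ratios constant as in Binswanger's dual measure (illustrative).
shares = {"labour": (0.60, 0.52), "capital": (0.40, 0.48)}
dt = 10.0  # years between the two observations

def bias(s0, s1, dt):
    """Discrete approximation of B_i = d log S_i / dt."""
    return (np.log(s1) - np.log(s0)) / dt

for name, (s0, s1) in shares.items():
    b = bias(s0, s1, dt)
    label = "using" if b > 0 else "saving"
    print(f"{name}: B = {b:+.4f} -> input-{label}")
```

With these made-up shares, the falling labour share classifies the technical change as labour-saving and the rising capital share as capital-using.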
00751180
Inferential processing may apply to the result of other types of processing. E.g., an agent can infer a proposition from what she 'sees', i.e. from the result of applying visual processing to visual cues. For quelque, what counts is the processing. More precisely, quelque requires that the corresponding existential proposition be reached through inferential processing. (11) C-inf: A form [quelque]_x [R] [S] is appropriate only under interpretations where the epistemic agent infers that ∃x [R] [S]. In most cases, inferential processing leads to conclusions that are weaker than those reached through perceptual processing, which accounts for the contrast in (12). (12) a. Yolande a dû_epist ouvrir la porte. En fait, je me souviens, je l'ai vue 'Yolanda must_epist have opened the door. Actually, I remember I saw her' b. ?
2007
[ "shs" ]
[ "51028", "1053" ]
[ "859670" ]
01215546
FIGURE 5 ABOUT HERE The labyrinth of the Chartres Cathedral (source: authors). This image-object was often present in Gothic cathedrals. In France, only two remain visible, at the Chartres and Amiens cathedrals. At Chartres, the pilgrim coming into this symbolic space is destined to live a three-step experience (over the 261.5 metres of the labyrinth!) which was a way to do the Jerusalem or Compostela pilgrimage [START_REF] Attali | Chemins de sagesse[END_REF]. Firstly, he or she will think that the way to the centre will be easy. Then, it may seem that the path moves him/her more and more towards and around the extremities of the labyrinth. A feeling of being lost sets in. Then, as the pilgrim becomes desperate, his/her path leads to the centre. One must imagine a pilgrim, tired and exhausted.
2015
[ "shs" ]
[ "389243", "1032", "185880" ]
[ "914586", "921507" ]
00671690
The History of the "putting out" systems [START_REF] Kieser | Why organization theory needs historical analyses -and how this should be performed[END_REF] could be compared to current outsourcing and issues of de/centralisation. Putting out was a complex network of contracts of manufacture, usually analysed through labour process analysis (workers' control of product and process, division of labour, factory systems, technical superiority, matching of technology with skills, family lives) during the industrial revolution in Western societies, especially the UK. Historical material shows that putting out was a consequence -rather than the cause -of a division of work that was already in existence across rural communities in the North West of England in the textile industry. The centralisation of production was triggered by the need to fill the capacity of large-scale machinery, but putting out systems were far more effective than the centralized factory. Factory owners were forced to compromise as they were unable to find a technology for decentralised production. One could see parallels with the contracting out of workers through increasingly mobile ICTs, which takes place within countries and globally across borders, as opposed to just within regions. It may be possible to contrast and compare across cases, to highlight features particular to each historical context in order to gain some unexpected insights into current practices. While we are not suggesting that History repeats itself, informed historical analyses could serve to reflect on current thinking and critique existing theories of IT-enabled work design, for instance the consequences of offshoring on communities both in Southern and Northern parts of the world (see Howcroft and Richardson, 2010). The historiography of influential ideas and thinkers on action research and change management could bring insights into the topic of participatory design and empowerment through ICTs.
[START_REF] Cooke | Writing the left out of management theory: The historiography of the management of change[END_REF] looked at the work of Kurt [START_REF] Lewin | Action research and minority problems[END_REF], who is noted for the development of action research in organizational studies.
2012
[ "shs" ]
[ "185880", "389243" ]
[ "921507", "914586" ]
04027124
For this mathematical task in the chemistry course, the praxeology is not the one that would be expected in the mathematics course. TC confirmed in the interview that the students have not been able to solve this exercise. We observe in this example several issues associated with the didactical praxeology for teaching T_nsv (which belongs to the praxeology at the discipline level T_bmt). Firstly, the "reminder" can in fact correspond to new knowledge. Here the property can be proven with secondary school knowledge, but it requires a complicated proof. Moreover, students are not familiar with vector projections at secondary school. Secondly, for what TC identifies as a mathematical exercise, the kind of justification expected is very different from what would be expected in a mathematics course. DISCUSSION Are the didactical types of task (Table 2) and the associated praxeologies specific to the target public of non-specialist students? In this section we discuss our results in order to answer this question. Our aim was not to compare the three teachers; nevertheless, we also present some hypotheses about the differences between the didactical praxeologies they developed for the same types of tasks.
2022
[ "shs" ]
[ "1041771", "199013", "238177" ]
[ "2438", "1329187", "8750" ]
03655658
La forma ensayo[END_REF][START_REF] Alter | Chris Marker[END_REF][START_REF] Alter | The Essay Film after Fact and Fiction[END_REF]. This article aims to carry out an unprecedented in-depth analysis on Sans soleil's "thinking in act" (Moure 2004, 37), considering the essay film as a filmic form that, through the subjectivity of the filmmaker, generates a properly audiovisual thinking process, which arises from the relationships established among the elements of the sound image and the visual image. Continuing the essay film theoretical developments of Josep Maria Català-about "parataxic thinking" (2014, 209), focused on the juxtaposition of different elements-and Laura Rascaroli-about "interstitial thinking" (2017, 190) centred on the interstice that arises from that juxtaposition-, I will analyse Marker's essay film and its cinematic thinking process as a materialisation of Gilles Deleuze's time-image and crystal-image (1989). To do so, I will use the concept of sentence-image defined by Jacques Rancière as the materialisation of the essay film's thinking in act: "The sentence is not the sayable and the image is not the visible. By sentence-image, I intend the combination of two functions that are to be defined aesthetically-that is, by the way in which they undo the representative relationship between text and image" (2009,46). Thus, the sentence-image, which generates cinematic thinking, oscillates "between two poles, dialectical and symbolic [...] between the image that separates and the sentence which strives for continuous phrasing" (58). By creating different sentence-images, Marker develops a thinking process that forces the spectator to constantly transform the actual image/virtual image relationship of the film, concepts that Deleuze takes from Bergson to apply to the analysis of the time-image and the crystal-image. The first offers a direct image of time: "It is no longer time that depends on movement; it is aberrant movement that depends on time. 
The relation, sensory-motor situation → indirect image of time, is replaced by a non-localizable relation, pure optical and sound situation → direct time-image" (1989, 41). The second achieves the indiscernibility between actual image and virtual image: "the coalescence of an actual image and its virtual image, the indiscernibility of two distinct images" (127).
2022
[ "shs" ]
[ "1063811" ]
[ "748938" ]
04162605
Jiménez's reflection on the nature of filmic material now extends to the position of the spectator: "To make a film is to mask; hide a part of oneself, so that it emerges for others, on those who see, listen to." The autobiographical account begins with the first childhood memories in the Andes, where Jiménez lived until she was six years old. The early childhood memory of Lima is then associated with the memory of the mother. This is how the central device of the film begins: the revisiting of the physical spaces of her memories and, in some cases, the recreation of the experiences lived in them. Thus, the earliest memory of the mother, the taxi ride to the ballet, is narrated from the present physical position in this space. The daughter hands her mother her school reports: "I know that if I am number 1 instead of 37, my mother will love me again. I'm going to try. But I don't know why this idea hurts me." The film turns the autobiographical memory into a filmic revisitation and also into a kind of psychoanalytic regression in which the child character takes the floor. Jiménez confirms the mutation of the memory, the transformative capacity of these recreations already enunciated regarding the Andes: "From now on, when I think of my pain in the absence of my mother's love, it will be the images of this film that will come to mind."
2023
[ "shs" ]
[ "1063811" ]
[ "748938" ]
01954577
Log-linear regressions were performed using Stata 9 [START_REF] Statacorp | Stata Statistical Software: Release 9[END_REF], by regressing the quantity of land entered into HLS (qland) over the payment rate per contract (avepr) and the average distance to the three main cities (avedist). The HLS data sample is truncated as HLS successful entrants are mostly selected from a population of farmers enrolling into the (O)ELS part of the Environmental Stewardship Scheme, and only operating HLS contract data were available. Both truncated and OLS regressions on the log-transformed variables for the given sample led to similar results, so the OLS results (log-linear model) only are reported in Table 2. Under a given budget constraint and controlling for the weighted environmental benefit per hectare (distance to cities), the quantity of land entered is hypothesised to decrease for higher average payment rates (hypothesis 1). A negative coefficient for the average payment rate per contract is consequently expected in the regression analysis. With land closer to cities having a higher environmental value per hectare, for constant payment rates, the quantity of land entered is hypothesised to decrease as the distance from the main cities increases. A negative coefficient for the average distance to main cities is thus expected in the regression analysis (hypothesis 2). The adjusted R 2 value is relatively high (49%) for cross-sectional data, possibly reflecting the fact that the sample is drawn from the same area with similar characteristics. All coefficients display the expected negative signs, and both the coefficients for average payment rates and for the average distance to main cities were found significant at 1%. No heteroskedasticity was detected (Breusch-Pagan / Cook-Weisberg test: Chi2 test statistic (1) = 0.21; p-value of 0.65).
2010
[ "shs" ]
[ "300739", "300739", "4177" ]
[ "170787" ]
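The log-linear regression described in the passage above can be sketched as follows. This is not the authors' Stata run: the data below are synthetic, generated so that both elasticities are negative as hypotheses 1 and 2 predict, and the fit is a plain ordinary-least-squares solve on the log-transformed variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120

# Synthetic HLS-style data (illustrative only)
avepr = rng.uniform(50, 300, n)    # payment rate per contract
avedist = rng.uniform(5, 60, n)    # distance to the three main cities

# True model: negative coefficients on both regressors (hypotheses 1 and 2)
log_qland = 8.0 - 0.6 * np.log(avepr) - 0.4 * np.log(avedist) \
            + rng.normal(0, 0.3, n)

# OLS on the log-linear model
X = np.column_stack([np.ones(n), np.log(avepr), np.log(avedist)])
beta, *_ = np.linalg.lstsq(X, log_qland, rcond=None)
print(f"const={beta[0]:.2f}  b_payment={beta[1]:.2f}  b_distance={beta[2]:.2f}")
```

Both fitted coefficients come out negative, matching the expected signs reported in the passage (a real analysis would add the truncation correction and the Breusch-Pagan heteroskedasticity test).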
01391484
The obtained formula is valid for any right-hand side in equation (16a). For instance, additional forcing such as an Ekman stress could be taken into account. In equation (17), the solenoidal component of the velocity, ∇⊥ψ, corresponds to the usual geostrophic velocity multiplied by a low-pass filter (17b). The irrotational (ageostrophic) component of the velocity, ∇ψ, dilates the anticyclones (maximum of pressure and negative vorticity) and shrinks the cyclones (minimum of pressure and positive vorticity) at small scales. Indeed, according to equation (17c), the divergence of the velocity corresponds to the vorticity Laplacian divided by k_c². Naturally, this structure is reminiscent of the Ekman model where divergence and vorticity would be related by a double vertical derivative: δ = (E_k²/2) ∂²_z ζ, where δ = ∇·u, ζ = ∇⊥·u, (18) and E_k is the thickness of the Ekman layer. The turbulent diffusion involved in equation (17c) is rather horizontal due to the strong stratification assumption (see (10)). In the proposed stochastic model, the divergent component and the low-pass filter of the system (17) are parameterized by the spatial cutoff frequency k_c, which moves toward larger scales when the diffusion coefficient a_H increases. If both the vorticity and the divergence can be measured at large scales, the previous relation should enable one to estimate the cutoff frequency k_c by fitting terms of equation (17c). Then, the horizontal diffusion coefficient, a_H, or the variance of the horizontal small-scale velocity (at the time scale Δt), a_H/Δt, can be deduced.
2017
[ "spi", "phys" ]
[ "486012", "300022" ]
[ "952791", "853729" ]
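The suggested fit of the cutoff frequency k_c from vorticity and divergence fields can be sketched on synthetic data. The spectral synthesis, grid size, and "true" k_c below are illustrative assumptions, not values from the paper; the fit simply exploits the relation δ = ∇²ζ / k_c² from equation (17c):

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, kc_true = 64, 2 * np.pi, 5.0

# Random smooth vorticity field on a periodic grid (spectral synthesis)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
zeta_hat = rng.normal(size=(N, N)) * np.exp(-k2 / 8.0)
zeta = np.fft.ifft2(zeta_hat).real

# Divergence generated exactly from the relation: delta = lap(zeta) / kc^2
lap_zeta = np.fft.ifft2(-k2 * np.fft.fft2(zeta)).real
delta = lap_zeta / kc_true**2

# Least-squares fit of 1/kc^2, then recover kc
coef = np.vdot(lap_zeta, delta) / np.vdot(lap_zeta, lap_zeta)
kc_est = 1.0 / np.sqrt(coef)
print(f"kc_true={kc_true}, kc_est={kc_est:.3f}")
```

On noiseless synthetic fields the fit recovers k_c exactly; with measured fields the same least-squares ratio would give an estimate of the cutoff, from which a_H follows as described above.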
01391420
With such a velocity, the expression of the material derivative is changed. To make this change explicit, we introduce the stochastic transport operator, D_t. The material derivative generally coincides with this operator, especially for tracer transports. Otherwise, the difference between these operators has a simple analytic expression. The stochastic transport operator involves an anisotropic and inhomogeneous diffusion, a drift correction and a multiplicative noise. These terms are specified by the statistics of the sub-grid velocity. The diffusion term generalizes the Boussinesq assumption. Moreover, the link between the three previous terms ensures many desired properties for tracers, such as energy conservation and a continuously increasing variance. For a passive tracer, the PDEs of the mean and variance fields are derived. The unresolved velocity transfers energy from the small-scale mean field to the variance.
2017
[ "spi", "phys" ]
[ "486012", "300022" ]
[ "952791", "853729" ]
02008002
At such RBER values, one could expect that the reduction would become very high or reach infinity. This is not the case, as retention errors may still occur and trigger refresh operations even with the proposed method. The reductions could be improved by increasing the value of α_DAMP in (4) at the cost of a smaller tolerated retention RBER. The number of refresh-triggered erase operations and, implicitly, the time required for the execution of such operations is reduced to a larger extent than the time spent for refresh-triggered read and write operations. This means that the figures reported for the reduction of the read and write operations can be used as a lower bound for the reduction of the time spent for all three types of refresh-triggered operations.
Fig. 9: Reduction of the number of refresh-triggered erase operations compared to a systematic scheme with fixed refresh frequency. Each curve stops at the maximum tolerated RBER. The considered parameters are the same as those used in Fig. 5.
VI. CONCLUSIONS
An approach was proposed to improve the tolerated raw bit error rate (RBER) in NAND flash-based SSDs via an estimation of the remaining retention time. This estimation can be performed each time a flash memory page is read and relies on the number of detected retention errors and the calculated retention age, i.e., the elapsed time since data was programmed.
2019
[ "spi", "phys" ]
[ "577943", "577943", "577943", "487992" ]
[ "1121850", "172470" ]
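The read-time estimation described above can be caricatured in a few lines. The linear RBER-versus-retention-age model and all parameter values below are assumptions made for illustration; the paper's actual estimator may well differ:

```python
def remaining_retention_time(errors_detected, page_bits, retention_age_h, rber_max):
    """Estimate the remaining retention time (hours), assuming the retention
    RBER grows roughly linearly with retention age (illustrative model,
    not the paper's exact estimator)."""
    rber_now = errors_detected / page_bits
    if rber_now == 0:
        return float("inf")
    rate = rber_now / retention_age_h            # RBER growth per hour
    return max(0.0, (rber_max - rber_now) / rate)

def needs_refresh(errors_detected, page_bits, retention_age_h, rber_max, margin_h):
    """Trigger a refresh only when the estimated remaining time is short."""
    return remaining_retention_time(errors_detected, page_bits,
                                    retention_age_h, rber_max) <= margin_h

# A page read 100 h after programming with few retention errors: no refresh yet
print(needs_refresh(errors_detected=4, page_bits=16384 * 8,
                    retention_age_h=100.0, rber_max=1e-3, margin_h=24.0))
```

The point of the sketch is the policy shape: pages whose estimated remaining retention time is comfortably large skip the refresh, which is where the reduction in refresh-triggered operations comes from.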
01332895
The paper shows that EDF has a zero competitive factor but is nevertheless optimal for online non-idling settings.
INTRODUCTION
Energy harvesting is a technology that allows capturing otherwise unused ambient energy and converting it into electrical energy that can be used immediately or later thanks to a storage unit [START_REF] Priya | Energy Harvesting Technologies[END_REF]. This approach extends the life of batteries (or eliminates them entirely) and decreases maintenance. A variety of techniques are available for energy harvesting, including solar and wind power, ocean waves, piezoelectricity, thermoelectricity, and physical motion. Energy harvesting is a perfect match for wireless devices and wireless sensor networks that otherwise rely on battery power. Some of the main applications include operating as a power source for human-wearable electronics, supplementing battery storage devices, etc. Another key application being investigated in great detail is miniature self-powered sensors, in medical implants for health monitoring and as embedded sensors in structures such as bridges and buildings for remote condition monitoring. Levels of harvested energy may vary significantly from application to application. Therefore, sparing usage of available energy is of utmost importance. The system we target consists of three components (see Figure 1): a single processing unit with unique voltage and frequency, an energy harvester and a rechargeable energy storage. We address the scheduling problem that arises in an energy harvesting system with real-time constraints where tasks have to meet deadlines.
2014
[ "info" ]
[ "21439", "21439" ]
[ "883818", "17150" ]
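A minimal sketch of one energy-aware EDF dispatching step in such a system is given below. The job representation and the one-slot energy check are illustrative assumptions, not the scheduling algorithm analysed in the paper:

```python
def edf_pick(ready_jobs, energy, cost_per_slot):
    """Pick the ready job with the earliest deadline, provided the storage
    holds enough energy for one execution slot (illustrative sketch, not
    the paper's exact algorithm)."""
    if not ready_jobs or energy < cost_per_slot:
        return None  # idle: wait for the harvester to recharge the storage
    return min(ready_jobs, key=lambda job: job["deadline"])

jobs = [{"id": "t1", "deadline": 12}, {"id": "t2", "deadline": 7}]
print(edf_pick(jobs, energy=5.0, cost_per_slot=1.0))  # earliest deadline wins
print(edf_pick(jobs, energy=0.5, cost_per_slot=1.0))  # not enough energy
```

The sketch highlights the coupling the paper studies: even under plain EDF priority, the harvested-energy budget, not just the deadlines, decides whether the processor may run at all.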
00822557
EDF is consequently the algorithm of choice under normal functioning, since any feasible task set is guaranteed to be successfully scheduled by EDF. However, the feasibility analysis problem turns out to be less straightforward because any computing system may be subject to unpredictable situations that can stop the scheduler from guaranteeing all the deadlines. In order to make this scheduling algorithm resilient to exceptions, mainly failures and overload, the algorithm must be combined with specific techniques, first to recover from failures and second to cope with transient overload.
C. Overload Management
Several approaches have been proposed to address deadline misses in firm real-time systems. In the (m,k)-firm model, at least m jobs out of any k consecutive jobs from the same task must meet their deadlines for correct functioning [START_REF] Hamdaoui | A dynamic priority assignement technique for streams with (m, k)-firm deadlines[END_REF]. The elastic task model is an attractive model for adapting real-time systems in the presence of overload [START_REF] Buttazzo | Elastic Task Model for Adaptive Rate Control[END_REF]. The method is to reduce the load by enlarging activation periods. Tasks' periods are considered as springs and can change to adapt the QoS so as to keep the system underloaded. The Skip-Over model can also be used to handle overload conditions [START_REF] Koren | Skip-over algorithms and complexity for overloaded systems that allow skips[END_REF]. Koren and Shasha look at the problem of uniprocessor overload by authorizing occasional deadline violations in a controlled way. A periodic task τ_i is characterized, besides its basic parameters, by a skip parameter s_i.
2013
[ "info" ]
[ "21439", "21439", "21439" ]
[ "883818", "17150" ]
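The (m,k)-firm condition mentioned above is easy to state as a predicate over a task's deadline-hit history. This is a direct check of the condition, not Hamdaoui and Ramanathan's priority-assignment technique:

```python
def satisfies_mk_firm(met, m, k):
    """(m,k)-firm check: at least m of any k consecutive jobs of a task
    must meet their deadlines (met[i] is truthy if job i met its deadline)."""
    windows = [met[i:i + k] for i in range(len(met) - k + 1)]
    return all(sum(w) >= m for w in windows)

# (2,3)-firm: every 3 consecutive jobs must include at least 2 deadline hits
print(satisfies_mk_firm([1, 1, 0, 1, 1, 0, 1], m=2, k=3))  # True
print(satisfies_mk_firm([1, 0, 0, 1, 1, 1, 1], m=2, k=3))  # False
```

The second history violates the constraint because its first window contains two misses in a row.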
01676179
Following discussions, one of the sketches was selected as the base sketch to work from (DP 2), and some of the features on the list DA 4 were removed as unnecessary (DP 3). In the final meeting, the aim was to come up with a final design. The desired elements were reviewed and then a layout was created (DA 6). It was observed that one of the students was designated for drawing the design and the others gave suggestions and comments. Initially the participants wanted all the desired elements to show up on the homepage, but this would have resulted in a cluttered look. At this point, they went back to reviewing existing related apps and websites and, based on existing different designs, managed to create their final design sketch (DP 4, DA 7). As was seen in study 1, constraints existed which had an effect on the decision-making process. The most evident constraint was that of time (C 1): towards the end of each hourly meeting there was an obvious pressure to achieve something, which led to ideas being accepted or discarded hurriedly in order to reach a resolution. This particularly affected DP 3 and was directly responsible for DP 4, which led to a final result based more on the review of existing solutions than on all of the previous work undertaken. The second constraint was the skill-level of the participants (C 2), which meant that they looked at superficial aspects of the design only (no discussions of technical aspects) and, having created the personas and scenarios (DA 3), they used these only in the creation of the feature list (DA 4), but otherwise never made use of them again.
2017
[ "info" ]
[ "243421", "82150" ]
[ "1025906", "1017271" ]
04267389
Despite a more manageable training process compared to GANs, diffusion models necessitate a multi-step sampling procedure during inference, extending the processing time. This becomes particularly problematic for real-time applications like network traffic generation, where the demand is for the rapid generation of tens of thousands of flows per second, especially in high-throughput settings. This situation underscores the need for optimization techniques that can expedite the inference process of diffusion models while preserving generative quality. Dimensionality of traffic. Generating network traffic data introduces unique challenges stemming from the intrinsic structure of the data. For instance, both input and output lengths can vary, requiring a model capable of handling an inconsistent number of packets in each flow. Additionally, the high dimensionality of each packet, particularly when payloads are included, can complicate the training process and necessitate significant computational resources. Finally, network traffic flows can encompass up to tens of thousands of packets, further escalating the task's complexity. Traditional machine learning models might struggle with this sheer scale of data, underscoring the need for tailored solutions for network traffic data synthesis. Generative foundation model beyond traffic generation.
2023
[ "info" ]
[ "129172", "129172", "301301", "529665", "1084804", "129172" ]
[ "740518" ]
03135284
Through an analysis of its edge network, Facebook shows that less-developed regions exhibited larger performance degradations [START_REF] Böttger | How the internet reacted to covid-19 -a perspective from facebook's edge network[END_REF]. Network latencies were approximately 30% higher during the lockdown in Italy [START_REF] Feldmann | The lockdown effect: Implications of the covid-19 pandemic on internet traffic[END_REF]. According to an NCTA report, networks in the United States saw less congestion [START_REF]NCTA: COVID-19: How Cable's Internet Networks Are Performing: METRICS, TRENDS & OBSERVATIONS[END_REF]. Due to decreased user mobility, cellular network patterns have shifted [START_REF] Lutu | A characterization of the covid-19 pandemic impact on a mobile network operator traffic[END_REF]: the authors found a decrease in the average user throughput as well as fewer handoffs. Feldmann et al. [START_REF] Feldmann | The lockdown effect: Implications of the covid-19 pandemic on internet traffic[END_REF] observed that the fixed-line Internet infrastructure was able to sustain the 15-20% increase in traffic that happened rapidly during a short window of one week. Our work differs from and builds on these previous studies in several ways: First, this study extends over a longer time frame, and it also uses longitudinal data to compare traffic patterns during the past six months to traffic patterns in previous years. Due to the nascent and evolving nature of COVID-19 and corresponding ISP responses, previous studies have been limited to relatively short time frames, and have mainly focused on Europe. Second, this work explores the ISP response to the shifting demands and traffic patterns; to our knowledge, this work is the first to begin to explore ISP and service provider responses. Application Measurements during COVID-19.
Previous work has also studied application usage and performance, such as increases in web conferencing traffic, VPN, gaming, and messaging [START_REF] Feldmann | The lockdown effect: Implications of the covid-19 pandemic on internet traffic[END_REF].
2021
[ "info" ]
[ "129172", "46584", "7118", "432896", "129172" ]
[ "740518" ]
00582826
An enumeration is now possible, as well as an identification/segmentation of these trees according to objective criteria such as canopy size, average color, and the local density of the coconut tree fields. Finally, a ground-truth validation is performed in order to estimate the detection rate and error in each coconut tree class type, leading to a precise extrapolation of the global number of trees.
DATA
IKONOS optical data is widely available through the whole Tuamotu archipelago and its high spatial resolution (about one meter at ground level) is sufficient to carry out our objective. The study focuses on the atoll of Tikehau, which is well known to specialists and easily accessible from Tahiti, as a validation study area before extending the method to the rest of the Tuamotu atolls. The Tikehau data set was acquired by IKONOS-2 in July and August 2003 and is already ortho-rectified and registered in the WGS84 projection. As the complete mosaic of the atoll of Tikehau has a resolution of 22032 by 15614 pixels, the original image is cut into sub-images, each one locating a motu (a small island constituting an atoll). A motu is then selected in order to validate the proposed method.
TREE FIELDS CLASSIFICATION
The coconut tree crown segmentation process must be applied in coconut field areas to avoid false alarms. In the images, several structures can be distinguished, such as the sea, the sand, the coral and some dwellings, as well as the vegetation (coconut trees and other atoll vegetation types). First, it is necessary to generate high-vegetation masks before applying the segmentation process. Due to the lack of a near-infrared band (not available in our database), it is not possible to compute the well-known NDVI vegetation index.
2008
[ "info" ]
[ "389984", "2411", "389984", "254642", "254642" ]
[ "879106", "1325537", "1164575", "743580" ]
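Since the NDVI cannot be computed without a near-infrared band, a rough RGB-only vegetation mask can be sketched with the excess-green index (ExG = 2g − r − b on chromaticity-normalised channels). This index and the threshold below are assumptions for illustration, not the masking method of the paper:

```python
import numpy as np

def vegetation_mask(rgb, threshold=0.1):
    """Rough high-vegetation mask from an RGB image using the excess-green
    index, an assumed RGB-only stand-in for NDVI when no NIR band exists."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                       # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)     # chromaticity coordinates
    return (2 * g - r - b) > threshold

# Tiny 1x2 image: one green (vegetation) pixel, one sandy pixel
img = np.array([[[40, 120, 30], [200, 180, 140]]], dtype=np.uint8)
print(vegetation_mask(img))
```

The green pixel clears the threshold while the sandy one does not, which is the kind of coarse separation a pre-segmentation mask needs before the crown segmentation step.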
02879271
We therefore consider this to be the optimal case. However, (s_1 s_2)^(1/2) = 4 dB can also be achieved by very different choices of parameters, e.g., s_1 = 7 dB and s_2 = 1 dB. In this case we explicitly show that no Wigner negativity can be generated remotely with only photon subtraction, i.e., condition (6) is not satisfied. However, by implementing a local Gaussian transformation R = S^-1 in mode g, we can fulfil (9) and reach a significant amount of Wigner negativity equal to the optimal case W_min ≈ -0.135/2π. This example shows that the main role of the local Gaussian transformation R is to balance the noise in modes f and g. This explains why the symmetric setup with s_1 = s_2 = 4 dB is the optimal case. Impure two-mode states do not only arise due to losses; they could also originate from entanglement to additional modes. To explicitly explore this case, we now subtract a photon from a mode in a larger multimode state. In particular, we consider CV graph states [5][6][7][8][9], which form the backbone of measurement-based quantum computing in CV [44], and have tractable entanglement properties. Recently, EPR steering was experimentally observed in such a system [33]. These states are Gaussian, with a covariance matrix that is built in accordance with a graph G as a blueprint.
2020
[ "phys" ]
[ "541692", "541692" ]
[ "749023" ]
03037730
To get the isomorphism [START_REF] Crespi | Suppression law of quantum states in a 3D photonic fast Fourier transform chip[END_REF], all that remains to be done is to define the action of U on the vacuum: U |0⟩_G⊕K = |0⟩_G ⊗ |0⟩_K. (43) With these definitions, and with [START_REF] Dittel | Totally destructive interference for permutation-symmetric many-particle states[END_REF][START_REF] Ou Z Y, Rhee | Photon bunching and multiphoton interference in parametric down-conversion[END_REF] we can now understand the isomorphism [START_REF] Crespi | Suppression law of quantum states in a 3D photonic fast Fourier transform chip[END_REF] in a much more elegant way. It is also insightful to revisit the single-mode spaces F_B(C) and F_F(C) in the light of second quantisation. First of all, it should be emphasised that the single-mode space only has a single creation (and annihilation) operator a†. As we stressed before, the mathematical framework is essentially defined by the calculus of creation and annihilation operators. For the bosonic single-mode Fock space, we find that [START_REF] Crespi | Integrated multimode interferometers with arbitrary designs for photonic boson sampling[END_REF] reduces to [a, a†] = 1, which is exactly the commutation relation that describes the ladder operators of a harmonic oscillator. For the fermionic case, we find that (39) describes an operator with properties {a, a†} = 1 and (a†)² = 0. This is exactly the recipe for the Pauli operator σ_+, given by the matrix σ_+ = [[0, 1], [0, 0]], (44) which solidifies the connection between fermionic systems and spin chains. This concludes our description of how second quantisation is used to describe states. However, the full potential of the formalism stems from its possibility to also describe observables, as we will see in the next section.
2020
[ "phys" ]
[ "541692" ]
[ "749023" ]
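The fermionic relations {a, a†} = 1 and (a†)² = 0 for the σ_+ matrix quoted above can be verified numerically in a few lines:

```python
import numpy as np

# Pauli raising operator sigma_+, identified with the fermionic creation
# operator a^dagger in the passage above
sigma_p = np.array([[0, 1],
                    [0, 0]], dtype=complex)
a = sigma_p.conj().T  # the corresponding annihilation operator

# Fermionic algebra: {a, a^dagger} = identity, (a^dagger)^2 = 0
anticomm = a @ sigma_p + sigma_p @ a
print(np.allclose(anticomm, np.eye(2)))   # True
print(np.allclose(sigma_p @ sigma_p, 0))  # True
```

Both checks pass, confirming that the 2x2 matrix realises the single-mode fermionic algebra of the spin-chain picture.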
00656774
The calculated configuration of these states is as much as 80-90% pure πd_5/2 ⊗ νd_3/2. Noteworthy is the fact that both the USD [START_REF] Brown | [END_REF] and USDA/USDB [16] interactions predict the J = 4 state to be a β-decaying isomeric state, partly connected to the ground state by a delayed M3 transition. In all interactions, the J = 3 state is found to be unbound, at an excitation energy of about 1.7 MeV. Taking the value of 0.80(12) MeV for the neutron emission threshold, the J = 3 state is predicted to be unbound by about 0.9 MeV. Consequently, it is reasonable to discard the two possibilities of J = 4 and J = 3 for the observed excited state at 657(7) keV, the former being possibly a long-lived isomer, the latter being likely unbound. We therefore ascribe the observed peak at 657(7) keV to arise from the decay of the J = 2 excited state to the J = 1 ground state. Besides the 'normal' positive-parity states, low-lying negative-parity states could be present in 26F (Z = 9, N = 17). Indeed a 3/2⁻ intruder state has been discovered at 765 keV above the 3/2⁺ ground state in the isotone 27Ne (Z = 10, N = 17) [5][6][7]. From the recent work of Ref. [8], it is confirmed that the 3/2⁻ state has a large p_3/2 component.
2012
[ "phys" ]
[ "119", "506920", "119", "119", "119", "119", "506920", "388739", "506920", "506920", "506920" ]
[ "755248", "755406", "755220", "903877", "741727" ]
03339021
So, take courage and prepare yourselves for battle in the morning. Do not fear their numbers, for God has the power to deliver the many into the hands of the few. No strength of soldiers can equal a just cause. Remember that you fight not because you want to but because it is necessary. You go to battle not for glory nor dominion but for survival and your lives. Only the sword can open our road to life: we must either win or die. But is it not more glorious to die in battle by the sword of a soldier than in one's home by the dagger of a spy? I would rather die in battle than in a city or in a prison. Nobody shall kill me with impunity. Here, here we must fall, where our sword can avenge us.
2021
[ "shs" ]
[ "59299" ]
[ "15311" ]
03337393
Also, in complex sentences, the original subject or object is often given instead of a pronoun. Names of persons have been retained in the original language (with certain exceptions, see the Index of Persons). It may seem strange to English-speaking readers to read "Emperor Friedrich" instead of "Emperor Frederick", but most will now accept "King Louis" instead of "King Lewis", which was used formerly. The practice of using the original form of names has been followed in a spirit of cultural internationalism, for which the editor requests the reader's tolerance. The same courage, however, was not shown concerning names of places: well-known places like Rome have been given in English. This leads to somewhat inconsistent forms like "Duke Philippe of Burgundy". In this area, complete consistency appears to be really difficult. Texts from the Bible are quoted from the Douai-Reims edition, sometimes in a form slightly modified to fit Piccolomini's text. Texts from classical authors are quoted from the Loeb edition, also sometimes modified. Rare is the translation in which no unrecognised errors have survived, and this applies, of course, to the present translation, too: the reader's generous benevolence is solicited!
2021
[ "shs" ]
[ "59299" ]
[ "15311" ]
01857756
It is also important to note that compliance is not governed by a belief in the legitimacy or validity of the norms embodied by protocols, but is driven instead by social desirability - i.e. when an individual produces the expected behavior, it is because s/he is adjusting to a social norm (in this instance, the expectations of superiors). It is important therefore to consider the reasons that may account for noncompliance by examining naΓ―ve knowledge, a potential obstacle hindering the implementation of hygiene protocols. These issues pertain more specifically to the articulation of scientific knowledge and naΓ―ve knowledge and the dynamics subtending the relation between these two kinds of knowledge [START_REF] Moscovici | Social representations and social explanations: from the Β« naive Β» to Β« amateur[END_REF]. Two specific groups of healthcare staff were targeted in this research: nurses and healthcare assistants. Nurses and healthcare assistants are the two groups most frequently in contact with patients and may therefore be said to play a determining role in the provision of healthcare and the transmission of hospital-acquired infections. However, it is important to note that the work performed by nurses and healthcare assistants (i.e. staff practices) and their level of training (staff knowledge) are not comparable. It is hypothesized that social representations of hygiene are likely to differ between the two groups. Study 1: Questionnaires assessing Representations of Hygiene among Nurses and Healthcare Assistants. Method and Design: The study was based on a verbal association task included in questionnaires. The aim was to highlight the key concepts structuring representations of hygiene. Population: 114 nurses and 35 healthcare assistants were interviewed as part of this study.
2018
[ "shs" ]
[ "477907", "477907", "103953", "57629" ]
[ "12693", "901761", "740522" ]
01904917
Observing the flow of information through the circulation of messages involves looking at the modes of stakeholder participation. They are materialized in information-communication practices. To answer this question, the analysis is based on the classification of the accounts. In order to characterize Twitter accounts we used some of the attributes proposed by [START_REF] Juanals | Analysing cultural events on twitter[END_REF]: "relayed", "relaying", "mentioned" and "passing". As pointed out by [START_REF] Juanals | Analysing cultural events on twitter[END_REF], the value of this index is not significant in itself; it simply provides a means of comparing accounts. We identified six passing accounts that had a significant score [START_REF] Juanals | Categorizing air quality information flow on twitter using deep learning tools[END_REF]. The analysis of these passing accounts makes it possible to identify some of their characteristics. These are all accounts of organizations, with the exception of one influencer. It is remarkable that these key influential accounts do not share their communities of accounts. From the whole corpus, the data were partitioned into restricted subcorpora built according to the criterion of the type of stakeholder (organizational or individual).
2018
[ "shs", "info" ]
[ "145342", "408942", "1057" ]
[ "654", "461" ]
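The account roles described above can be approximated from a retweet edge list. The sketch below is only illustrative: the edge list is invented and the "passing" rule (an account that both relays others and is itself relayed) is an assumption, not the exact index of Juanals et al.:

```python
from collections import Counter

# Toy retweet edge list: (relaying_account, relayed_account).
# Hypothetical account names; not data from the study.
edges = [("alice", "org_air"), ("bob", "org_air"), ("org_air", "influencer"),
         ("carol", "influencer"), ("influencer", "org_air")]

relaying = Counter(src for src, _ in edges)  # how often an account relays
relayed = Counter(dst for _, dst in edges)   # how often it is relayed

# Assumed "passing" rule: the account appears on both sides of the flow.
passing = sorted(set(relaying) & set(relayed))
print(passing)  # -> ['influencer', 'org_air']
```

As in the passage, the raw counts only matter relative to one another; they serve to rank and compare accounts, not as absolute measures.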
00610032
Two main ideas underlie our approach: one is to provide cartographic representations of standards, the other to assist the navigation of an end-user through the corpus of standards. Considering that there is no universal representation independent of the goals and the organizational context, we aim to provide both graphic and textual representations, and several tools enabling comparison between several standards. It must be emphasized that all the representations are interconnected and that the platform provides specific interfaces allowing the end-user to navigate between them. Furthermore, this navigation will be assisted by applying specific knowledge based on the NaviText model [START_REF] Couto | NaviTexte, a Text Navigation Tool[END_REF]. Textual and Graphic Representation The glossary of the standardized domain is the main textual tool. For each term, semantic and usage variations in the selected corpus are provided and enriched links (see section 5.2 below) can be followed; at any moment, the textual contexts of the same term can be compared by accessing them in one or several standards. This very simple tool is extremely useful to preserve conceptual coherence during the writing process of a new standard by using the same word to refer to an identical concept or, on the contrary, by choosing a new word to highlight the creation of a new concept. Graphic representations complete the glossary. As explained in section 3.5, we consider that conceptual maps (or local ontologies) provide a useful level of abstraction, while at the same time keeping and foregrounding the relations between concepts and qualifying their semantics. For example, relations could be linked to the different phases of the PDCA cycle which governs all the standards.
2011
[ "shs" ]
[ "1057", "1057" ]
[ "654", "461" ]
03282789
In Switzerland, data as infrastructure and data governance play a key role in national strategic papers such as the "Digital Switzerland Strategy" (cf. [START_REF] Klievink | Digital Strategies in Action -a Comparative Analysis of National Data Infrastructure Development[END_REF], [START_REF]Schweizerische Eidgenossenschaft: Strategie "Digitale Schweiz[END_REF]) and the renewed "E-Government Strategy 2020-2023". Even though Switzerland is not an early adopter regarding digital transformation of the public sector [START_REF] Neuroni | E-Government und smarter Staat: Die Schweiz auf halbem Weg[END_REF], data-centric public sector transformation is gaining political awareness and importance. To realize the SDG or the Swiss E-Government Strategy, a successful implementation of the OOP requires transfer and re-use of sensitive or personal data between government agencies across borders, involving actors on different levels of a political system. Significant efforts towards a technical infrastructure as well as organizational frameworks are currently under way in several research projects. In an analysis of drivers and barriers for OOP implementation in the SCOOP4C project, trust is stressed as an underlying condition [START_REF] Roustaei | Gap analysis report of challenges, needs and benefits of the OOP4C analysis[END_REF], [START_REF] Wimmer | Roadmap for future areas of actions, and policy recommendations[END_REF], [START_REF] Wimmer | Vision of the once-only principle for citizens, including key enablers and major barriers[END_REF]. In the TOOP project, trust is addressed with a technical approach by defining a trust architecture as part of the system design [START_REF] Pavleska | Cybersecurity Evaluation of Enterprise Architectures: The e-SENS case[END_REF], [START_REF] Grandy | Generic Federated OOP Architecture[END_REF]. In a broader perspective, TOOP also links the question of trust to the organizational culture of government agencies.
However, a systematic and scientifically grounded rigorous analysis of the organizational challenges is currently lacking. To investigate and elicit the requirements of interoperable data and information sharing along a structured approach, the European Interoperability Framework (EIF) provides a conceptual model for public services and considers data-related services as a basic component for service provision [START_REF]European Commission: New European Interoperability Framework -Promoting seamless services and data flows for European public administrations[END_REF]. The SCOOP4C and TOOP projects rely on this EIF to structure their investigations of barriers, enablers and architecture for a comprehensive OOP implementation.
2020
[ "shs", "info" ]
[ "147310", "487376", "487376" ]
[ "993332", "1016625", "1104839" ]
02107699
It assumes that a given crop only yields one product. It assumes that water serves only one use: the evapotranspiration of the crop that will be sold. Finally, it assumes that only climatic and agronomic variables determine the quantity of water that is necessary to produce a crop. Such hypotheses rarely resist scrutiny. Agribusinesses with reliable and sizeable infrastructure may benefit from a supply of water on request when operating in very favorable conditions. Palestinian smallholders rely instead on shared springs or farmer managed shallow wells. In the Mediterranean area, spring flow varies widely through the year. This constrains the amount of water a farmer can access, as does the social organization allowing the farmer to access this spring. All Palestinian springs used in irrigation are shared according to "water turns". These are measured in terms of time periods during which the full flow of the spring is usually channeled towards a farmer's plot. Similarly, farmers relying on wells need to share with their neighbors, which constrains their access.
2017
[ "shs", "sde" ]
[ "107303", "107303" ]
[ "174372" ]
02103773
Section 2 first sketches an overview of the various literatures that can contribute to the theoretical framework necessary to study the transformation of irrigation in a multi-scalar manner. Section 3 then examines the nature of pioneer fronts and how they need to be conceptualized. The critical point for this article is that new land control creates new frontiers. A pioneer front is usually defined as a space where agriculture is being extended over previously uncultivated land. We re-examine this definition using cases around the world, especially Africa, and argue that a more precise definition of pioneer fronts should refer to the transformation of our interactions with the environment that are linked to the transformation of power relations within society. A pioneer front involves an in-depth reconfiguration of farmers' interaction with land and water that goes beyond turning to high yield varieties or an increased use of fertilizers. Within a pioneer front, land tenure and water tenure are deeply modified. The modalities of access to both land and water are transformed. Appropriation modalities are transformed. Section 4 then explores case studies of groundwater pioneer fronts and wastewater pioneer fronts presently occurring in the West Bank.
2018
[ "shs" ]
[ "107303", "107303" ]
[ "174372" ]
04008698
Though NWP was devised by the PML-N government under the Prime Ministership of Shahid Khaqan Abbasi, it was Prime Minister Imran Khan of the PTI who first held the meeting of the Council in October 2018 (National Water Council, 2018). Since then, not a single meeting of the Council has been held, which confirms that the issue has been de-prioritised and de-securitised. FINDINGS AND CONCLUSION Water scarcity was successfully securitised by two successive governments around the 2018 general elections period. Actors involved in the securitising process remained at both societal and state levels. They created an urgency about the issue, thus ensuring that the provinces which usually held reservations about the construction of new dams on the Indus River and its tributaries consented to the new megaprojects. Construction of the Diamer Basha dam was specifically securitised by the efforts of the CJP Saqib Nisar, who launched a crowd-funding campaign. Though the campaign was destined to gather funds for the construction of the dam, it did more than that: it raised awareness among the general population about the urgency of the issue. The situation was such that even the poor sections of society contributed to the dam fund by sending merely PKR 10 from their mobile devices via SMS; cellular companies collected the donations through this simple method and submitted them to the Dam Fund. Different institutions of the state contributed to the fund by offering part of their salaries. Everyone in the electronic and social media was talking about the water scarcity issues and the construction of the Diamer Basha dam.
2023
[ "shs", "sde" ]
[ "1052438", "472159", "1052438", "472159" ]
[ "1232248" ]
01322603
The case study of the Austrian nationwide public access defibrillation (ANPAD) programme presented in this paper offers an exemplar of the process of co-creation. Here the Austrian Red Cross (ARC) took the lead role in organizing a co-creation network, acting on behalf of citizens and organizing an innovation network capable of creating both the demand and the supply side of a sustainable market for the production and safe application of portable automated external defibrillators (AEDs) by laypeople. This process involved, first, a raising of awareness regarding the need for portable defibrillators, amongst the general public and also politicians. The ARC acted as a representative of users in its dealings with medical professionals, politicians, and private sector businesses. It organized AED training in every first aid training in Austria, worked with research hospitals engaged in establishing an evidence base, worked with firms located in Austria to produce AED devices, and with large businesses to have portable AEDs installed on their premises. The remainder of this paper is organized as follows. Section 2 identifies overlapping areas within the existing social innovation and service innovation literatures. It identifies a common domain of interest, and how these fields of research can usefully be integrated. Section 3 examines the concept of co-creation, which usefully brings together the different strands of literature discussed in section 2. Building on this, section 4 details the theoretical multi-agent co-creation framework that will be used to analyse the dynamics of co-creation in social innovations.
2016
[ "shs" ]
[ "407023", "196396", "300498", "1188", "457940" ]
[ "2460" ]
01203646
In the reputational world, the brand benefit gained by the developer of the innovation is an immediate output. Indeed, the developer is not only seen as an innovator, but also as a professional sensitive to ecological problems, equity and fairness. However, even in the short term the environmental data platform has much broader effects, too, in terms of reputation. It increases the attractiveness of the platform and its developers and thus improves possibilities to 'market' concrete activities in the area of sustainability. In the longer term, other actors in addition to the original developer - public bodies and private companies participating in the application and further development of the platform - gain visibility for their sustainability efforts. Simultaneously, environmental sustainability as an important value becomes more visible and determinant in the society at large. All in all, the reputational 'world' is however more tightly linked to specific actors than the other 'worlds'. To summarize, our case illustrates that the relational, responsibility and reputational 'worlds' are equally important as the technological and market views for the understanding of complex system innovations that include service aspects. In addition, our case indicates that the impacts generated in the different 'worlds' are often interdependent and complementary to each other. For instance, some changes in the relational and responsibility 'worlds' are prerequisites to effects generated in the technical and financial spheres.
2015
[ "shs" ]
[ "33065", "33065", "33065", "1188", "33065" ]
[ "2460" ]
03352020
Results The descriptive statistics are estimated on a daily basis, and reported in Table 1. It is noted that the Bitcoin spreads are positively skewed with fat-tailed distributions. This implies that the spread measures have right-skewed distributions with most values to the right of their mean. Conversely, the Bitcoin return is negatively skewed with a fat-tailed distribution. The negative skewness for the return indicates a left-skewed distribution with most values to the left of the mean value. The fat-tailed distributions, or higher kurtosis values, for the spread proxies and return indicate extreme values in the corresponding dataset. On a monthly basis, the fluctuations in the Bitcoin spreads and its yields are graphed in Figure 1 and Figure 2, respectively. It is vividly noted that the liquidity cost and returns are time-varying in the Bitcoin market. It matters to unveil whether the liquidity cost is an appropriate measure to estimate yields. For the pre-pandemic crisis period, the regression relationship is quantified in Table 2. On the same trading day, the Bitcoin returns are positively and significantly associated with its liquidity cost, estimated by the ES and CBML measures.
2021
[ "shs" ]
[ "463709" ]
[ "1091662" ]
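The skewness and kurtosis patterns described above (right-skewed spreads, left-skewed returns, fat tails) can be illustrated on synthetic data. The series below are stand-ins generated for demonstration, not the actual Bitcoin spread or return data of the study:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
# Synthetic stand-ins: a lognormal "spread" series is right-skewed,
# and its negation (shifted) mimics a left-skewed "return" series.
spread = rng.lognormal(mean=-4.0, sigma=0.5, size=2000)
returns = 0.018 - rng.lognormal(mean=-4.0, sigma=0.5, size=2000)

print(skew(spread) > 0)      # positively skewed spreads
print(skew(returns) < 0)     # negatively skewed returns
print(kurtosis(spread) > 0)  # positive excess kurtosis -> fat tails
```

Note that `scipy.stats.kurtosis` reports excess kurtosis (Fisher's definition), so "fat-tailed" corresponds to values above zero rather than above three.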
01182801
Bandit approaches perform similarly to the greedy deterministic method. As the number of active atoms increases, the bandit approaches succeed better in recovering the extreme component of the gradient, while the deterministic approach is slightly less accurate. Note that for any value of k, the randomized strategies suffer more than the other strategies in recovering the support of the true vector w. From a running time point of view, again, we note that the deterministic and non-iid successive halving bandit approaches seem to be the most efficient methods. The gain in running time compared to the exact gradient OMP is slight but significant, while it is larger when comparing with the successive reject algorithm. D. Sparse Approximation with CoSaMP To the best of our knowledge, there are very few greedy algorithms that are able to leverage stochastic gradients. One of these algorithms has been introduced in [START_REF] Nguyen | Linear convergence of stochastic iterative greedy algorithms with sparse constraints[END_REF]. In this experiment, we want to evaluate the efficiency gain achieved by our inexact gradient approach compared to this stochastic greedy algorithm. Our objective is to show that the approach we propose is empirically significantly faster than a pure stochastic gradient approach. For the different versions of the CoSaMP algorithm, we have set the stopping criterion as follows.
2016
[ "scco", "info" ]
[ "23832", "388932", "388932" ]
[ "174806", "5004" ]
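The idea of using a successive halving bandit to find the extreme gradient component can be sketched as follows. This is a schematic illustration under invented data, not the paper's exact algorithm: each round estimates the surviving gradient coordinates on a fresh mini-batch, keeps the better half, and doubles the batch size:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4000, 32
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[7] = 5.0  # one dominant atom in the synthetic model
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Successive halving over atoms: |X^T y| / batch estimates the magnitude of
# each gradient coordinate of 0.5*||y - Xw||^2 at w = 0 on a mini-batch.
alive = np.arange(d)
batch = 50
while len(alive) > 1:
    idx = rng.choice(n, size=batch, replace=False)
    g = X[idx][:, alive].T @ y[idx] / batch      # noisy partial gradient
    order = np.argsort(-np.abs(g))
    alive = alive[order[: max(1, len(alive) // 2)]]
    batch *= 2                                    # refine as candidates halve
print(alive[0])  # almost surely selects the dominant atom, index 7
```

The appeal, as in the passage, is the budget profile: cheap noisy estimates eliminate most atoms early, and the expensive accurate estimates are spent only on the few surviving candidates.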
03277680
While their approach is competitive in terms of quality of generated data, it is hardly tractable for large-scale datasets, due to the multiple (up to 1000 in their experiments) discriminator trainings. Instead of considering adversarial training, some DP generative model works have investigated the use of distances on distributions. [START_REF] Harder | Differentially private mean embeddings with random features (dp-merf) for simple & practical synthetic data generation[END_REF] proposed a random feature based maximum-mean embedding distance for computing the distance between empirical distributions. Cao et al. (2021) considered the Sinkhorn divergence for computing the distance between true and generated data and used gradient clipping and noise addition for privacy preservation. Their approach is thus very similar to DP-SGD in its privacy mechanism. Instead, we perturb the Sliced Wasserstein distance by smoothing the distributions to compare. This yields a privacy mechanism that benefits from subsampling amplification, as its sensitivity does not depend on the number of samples, and that preserves its utility, as the smoothed Sliced Wasserstein distance is still a distance. Differential Privacy with Random Projections The Sliced Wasserstein Distance leverages the Radon transform for mapping high-dimensional distributions into 1D distributions. This is related to projection on random directions, and the sensitivity analysis of those projections onto unit-norm random vectors is key. The first use of random projection for differential privacy has been introduced by [START_REF] Kenthapadi | Privacy via the johnson-lindenstrauss transform[END_REF].
2021
[ "scco", "info" ]
[ "458139", "389520", "458139" ]
[ "174806", "5004" ]
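The projection mechanism underlying the Sliced Wasserstein distance can be sketched with a plain (non-private) Monte Carlo estimator; the paper's smoothed, differentially private variant additionally smooths the compared distributions, which is not reproduced here. The parameter choices below are illustrative assumptions:

```python
import numpy as np

def sliced_w1(x, y, n_proj=200, rng=None):
    """Monte Carlo sliced 1-Wasserstein distance between equal-size point clouds."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)        # unit-norm random direction
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean(np.abs(px - py))     # 1D W1 via sorted-sample matching
    return total / n_proj

rng = np.random.default_rng(0)
a = rng.standard_normal((500, 5))
b = rng.standard_normal((500, 5)) + 1.0       # shifted cloud
print(sliced_w1(a, a))                        # 0.0: zero on identical clouds
print(sliced_w1(a, b) > sliced_w1(a, a))      # True: detects the shift
```

Each projection reduces the d-dimensional comparison to a one-dimensional optimal transport problem, which is solved exactly by sorting; this is why the sensitivity analysis of projections onto unit-norm vectors is the key ingredient for privatising the distance.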
03724129
The acyclicity and single-rootedness come at the cost of using inverse relations. Any role, core or non-core, can be reversed by adding -of to its name and changing the direction of the relation. Apart from avoiding cycles, inverse roles also serve to highlight the focus of a sentence by making sure that the central concept is the root of the AMR graph. The AMR Bank is a manually produced corpus of AMR annotations in English. Only a portion of it (namely the Little Prince corpus and the BioAMR corpus) is freely available. The rest of the AMR Bank can be obtained by a (paid) license from the Linguistic Data Consortium. AMR was designed with English in mind and does not aim to be a universal semantic representation framework. That being said, there have been attempts to use the framework for other languages, notably Chinese, in the Chinese AMR (CAMR) Bank. While powerful in its ability to abstract from surface representation, there are a number of phenomena that the framework does not cover: tense, plurality, definiteness and scope, to name some of the more prominent ones. Some of these issues have been addressed: [START_REF] Bos | Separating argument structure from logical structure in AMR[END_REF] proposes an extension to deal with scope in AMR, while [START_REF] Donatelli | Annotation of tense and aspect semantics for sentential AMR[END_REF] proposes to augment AMR with tense and aspect.
2022
[ "scco", "info" ]
[ "150772", "150772", "150772" ]
[ "740210", "747", "2082" ]
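The role of inverse relations described above can be illustrated with two small annotations in PENMAN notation (the sentences are invented examples, not taken from the AMR Bank):

```
# "The boy wants to go."  -- re-entrancy: variable b fills two roles
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))

# "the festival attended by the singer" -- :ARG1-of keeps the focused
# concept (festival) at the root instead of rooting the graph at attend-01
(f / festival
   :ARG1-of (a / attend-01
               :ARG0 (p / person
                        :ARG0-of (s / sing-01))))
```

In the second graph, both `:ARG1-of` and `:ARG0-of` are ordinary roles read in reverse: the festival is the ARG1 of attending, and the person is the ARG0 of singing, which is how AMR renders "singer" without a dedicated noun concept.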
02090938
This result seems intuitive. If only species 1 (or 2) disappears, there remain 2(E + J) attributes. But if only species 3 disappears, the number of remaining attributes decreases to a lower 2E + J. In Appendix D.1, however, we show that the property emphasized in Proposition 2 is fragile. More precisely, it holds only when ecological interactions are not too strong (even if ecological interactions are not a source of heterogeneity). The Influence of Ecological Interactions Incorporating this dimension in the model is an attempt to account for the complexities of the web of life. For instance, the interactions between two species can be considered unilateral, e.g. species 1 impacts species 2 but not vice versa, or bilateral, e.g. species 1 impacts species 2 and species 2 impacts species 1. In a two-species system, there are 2Β² = 4 interaction possibilities to consider. As soon as one contemplates a three-species ecosystem, however, there are 3Β³ = 27 potential pairwise interactions between species (not even taking into account the added complexity that could be introduced by varying the intensity of each of these ecological interactions). It is evident that the number of interaction possibilities quickly explodes with the number of species in the system.
2019
[ "shs", "sdv" ]
[ "532853", "526949", "422966" ]
[ "22112" ]
01591987
To model species interactions, we follow [START_REF] Courtois | Conservation priorities when species interact: the noah's ark metaphor revisited[END_REF]. We model each species i as having an autonomous survival probability q_i, which is the survival probability of species i in an ecosystem free of species interactions and without any management activity. The autonomous survival probability is a measure of the robustness of a species. A low survival probability characterizes species on the brink of extinction, while a high survival probability characterizes healthy species such as spreading ones. As a result of the interactions that occur between species, the survival probability of each species i also depends on the survival probabilities of all other species through interaction parameters r_{i,j}, j β‰  i, with r_{i,j} ∈ R. Finally, the decision-maker can choose to target the survival probabilities of the invasive species present in the ecosystem. The amount of effort she invests in controlling invasive species k is denoted x_k, and we denote by xΜ„_k the maximum control effort, constrained so that P_i ∈ [0, 1], βˆ€i.16 The resulting survival probabilities in our stylized two-native two-invasive species ecosystem read as: P_k = q_k - x_k + Ξ£_{jβ‰ k} r_{kj} P_j, P_l = q_l + Ξ£_{jβ‰ l} r_{lj} P_j, (4) with the additional constraint: x_k ∈ [0, xΜ„_k] βˆ€k. (5) The system of equations (4) describes the stationary law of evolution of the survival probabilities of the native and invasive species composing the ecosystem.17 [16: An algorithm that computes xΜ„_k is available upon request. 17: "Stationary" here refers to the fact that it can be interpreted as the steady state of an explicit dynamic system.]
2017
[ "shs" ]
[ "133405", "2583", "199934", "422966", "320813" ]
[ "22112" ]
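The stationary system in equations (4) is linear in the survival probabilities, P = q - x + R P, so it can be solved directly as (I - R) P = q - x. A minimal numerical sketch with illustrative parameters (not calibrated to the paper; indices 0-1 are natives, 2-3 invasives):

```python
import numpy as np

q = np.array([0.60, 0.70, 0.90, 0.85])  # autonomous survival probabilities
x = np.array([0.00, 0.00, 0.30, 0.20])  # control effort on invasives only
R = np.array([[0.00, 0.05, -0.20, -0.10],  # R[i, j]: effect of j on i
              [0.10, 0.00, -0.15, -0.25],  # natives harmed by invasives
              [0.00, 0.00,  0.00,  0.05],
              [0.00, 0.00,  0.05,  0.00]])

# Stationary system P = q - x + R @ P  <=>  (I - R) @ P = q - x
P = np.linalg.solve(np.eye(4) - R, q - x)
assert np.all((0 <= P) & (P <= 1))  # the chosen effort keeps P in [0, 1]
print(np.round(P, 3))
```

The constraint in equation (5) appears here only implicitly: the effort vector x was chosen so that the solved probabilities stay in [0, 1], which is exactly the role the paper assigns to the bound x̄_k.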
04299879
Controlling the influence of life expectancy (Table 4, model 4) for departments of residence causes a change in the sign of the life-expectancy coefficient, which remains significant at the 10 % level (Table 5, model 2): when life expectancy increases, height decreases. This phenomenon could be explained by the fact that non-fatal illnesses may be more important than fatal illnesses when it comes to determining height discrepancies between departments and districts. The regressions in Tables 4 and 5, calculated as they are on the basis of observations at the individual level, serve as a fine filter for our analysis of the influence of one or another explanatory factor. However, when one takes into account the vast quantity of individual observations, the total variance in height is more a matter of genetics than of socio-economic factors. The adjusted RΒ²s are therefore very weak, even if the coefficients of each variable considered individually are of great interest. [38: This figure seems quite large; Baten and Komlos (no doubt on the basis of other, unspecified hypotheses) claim that the life-expectancy increase correlated with a 1-cm height increase is only 1.2 years in length (Baten and Komlos 1998). In contrast, extrapolating from our results, one would conclude that over the course of the Industrial Revolution, life expectancy increased by more than one hundred years! Nevertheless, diachronic data lend plausibility to our calculation, at least in the case of nineteenth-century France: an increase in conscripts' mean height of 0.75 cm between the 1790-1799 and the 1820-1829 birth cohorts is paralleled by a 7.2-year increase in female life expectancy.] Literacy is not just a proxy for per capita income, although, for one thing, it certainly helps to reduce the number of unwanted pregnancies and thus improves children's gross nutrition (Weir 1993, 1997). 
For another, in certain cases, the positive influence of instruction is observed even after one has controlled for income (Meyer and Selmer 1999; Steckel 1998): the positive influence of educational level would thus reflect not only the children's but also their parents' improved grasp of nutrition issues, enabling them to fight more successfully against diseases by means of health practices and paramedical care that were better adapted than were those of the illiterate conscripts.
2013
[ "shs", "sdv" ]
[ "866" ]
[ "874783" ]
03491595
They do not frequently offer written information (1.9%) but consult a fertility specialist (68.9%) or refer patients directly (78.2%). SRM were asked about fertility management after conservative treatment of EC/AH, but 23 to 30% of them did not answer the questions (Supplementary Table 2). Eighteen (34%) of them were at least involved in young patients with EC/AH after conservative treatment. If the patient's tubes and the spermogram were normal, 30 physicians (56.6%) considered that the patient could wait for spontaneous pregnancy to occur after conservative treatment for EC/AH. If fertility treatment was planned directly or after waiting this amount of time for a spontaneous pregnancy, most of the SRM (56.6 %; 30 physicians), chose IVF. For IVF, the main stimulation protocols chosen were an antagonist (28%) or an agonist (22%). For 54.7% of SRM, several pregnancies can be allowed as long as the disease does not relapse, but for 18.9 % of SRM a systematic hysterectomy must be performed after the first delivery. During fertility treatment, monitoring recurrence of EC/AH was preferentially performed by hysteroscopy and endometrial biopsy every 3 months (22.6%), or based on ultrasound evaluation of the endometrium (39.6%). Discussion The present study reports the findings of a survey of French gynecologists and their knowledge of, and attitudes toward, FP in EC/AH. Despite average knowledge and attitude scores, most of GS considered and gave advice to patients about FP before EC/AH treatment.
2020
[ "sdv" ]
[ "300156", "300156", "557826" ]
[ "1113617" ]
03489307
In a survey of more than 600 young women with early-stage breast cancer, 29% reported that concern about infertility influenced their treatment decisions [START_REF] Partridge | Webbased survey of fertility issues in young women with breast cancer[END_REF]. Hence, the American Society of Reproductive Medicine (ASRM) and the American Society of Clinical Oncology (ASCO) have put forth official guidelines recommending that patients be educated about the effect of cancer treatment on fertility and fertility preservation options [START_REF]Ethics Committee of the American Society for Reproductive Medicine. Fertility preservation and reproduction in cancer patients[END_REF][START_REF] Loren | Fertility preservation for patients with cancer: American Society of Clinical Oncology clinical practice guideline update[END_REF]. The European Society of Gynecological Oncology (ESGO) decided in 2007 to launch the Task Force for Fertility Preservation in Gynecologic Cancer. This task force was developed to promote knowledge of infertility induced by treatment of gynecologic cancers among healthcare workers and the public through national and international collaboration among oncologists, reproductive specialists [START_REF] Denschlag | Fertility-sparing approaches in gynecologic cancers: a review of ESGO task force activities[END_REF]. Strategies for fertility preservation prior to chemotherapy depend on the time required, the woman's age, its risks and efficacy, and the individual preference of the patient [START_REF] Von Wolff | Practical recommendations for fertility preservation in women by the FertiPROTEKT network. Part II: fertility preservation techniques[END_REF]. In the present study, the analysis of the preservation of fertility was based on the cryopreservation of embryos and oocytes, which are the two established methods of fertility preservation. 
In October 2012, ASRM published an official guideline stating that mature oocyte cryopreservation should no longer be considered experimental and can be recommended with appropriate counseling to patients receiving gonadotoxic therapies for cancer [START_REF]Mature oocyte cryopreservation: a guideline[END_REF]. In our database, no oocyte cryopreservation was found for fertility preservation before 2011.
2019
[ "sdv" ]
[ "221529" ]
[ "1113617" ]
02627272
Solutions for volumetric titrations were bought from Fluka (0.10 M NaOH and 0.10 M HNO3) and Roth (0.010 M NaOH) and used directly in the potentiometric titrations. Solutions prepared from nitric acid (Merck Suprapur) and sodium hydroxide (0.1 M standard, Merck) were used to adjust the pH when necessary. Potassium thiocyanate, hydrochloric acid, potassium chloride and ammonium acetate were all from Merck. Extraction and purification of HA sample Peat samples were collected in the Mogi River region of RibeirΓ£o Preto, SΓ£o Paulo State, Brazil. The humic substances were extracted following the IHSS procedure for soil organic matter [START_REF] Botero | Peat humic substances enriched with nutrients for agricultural applications: competition between 10/11[END_REF]. The alkaline-extracted (AE) soil HA was taken before the HCl/HF treatment, whereas the fully purified (FP) HA underwent the full purification procedure. In brief, the humic matter purification procedure for soils consists in separating the insoluble humin from the soluble humic and fulvic acids using a 1/10 mass ratio of 0.1 M NaOH for 4 h under an inert atmosphere. This step is followed by acidification to pH 1 using 1.0 M HCl to separate the soluble FA from the acid-insoluble HA. The HA is resuspended in 0.1 M NaOH and precipitated in 0.1 M HCl/0.3 M HF to destroy the remaining mineral phase. Then, the solid is dispersed in water to form a slurry, and transferred to a Visking dialysis tube where it is subsequently dialysed against distilled water.
2017
[ "sdv" ]
[ "466264", "237201", "1005035", "1005035", "466264", "237201", "496852", "1005035", "237201" ]
[ "17385", "1027290", "181021", "17384" ]
00539228
Although use of perception systems should enhance driver awareness, as represented by the triangular shape in the figure, occlusion will mask several potential risks, e.g. the distracting vehicle (DV), the power two wheeled vehicle (PTW), etc. Further, if an Intrusion Vehicle (IV) arrives at a prohibited speed, it will be difficult for the driver of the SV to know that the IV is travelling too fast to brake at the stop line in time. When the IV enters the SV sensor field of view, it will likely already be too close. However, if each vehicle transmits its position, speed and other data, then by associating this information with a digital map representing the road geometry and other contextual information, an extended digital representation of the vehicle's immediate environment can be built, as shown in Figure 4. Thus an application running in the SV can analyse and identify the possible risks, informing the driver beyond what current sensors could provide. The figure shows the risk vehicles in red. By knowing the speed at which they are evolving, their distance to the intersection at the time of the query, plus the state of the SV, it is possible to warn or even act in the SV. This is the Safety Margin concept deployed in the SAFESPOT project. Thus, by sharing vehicle state information and projecting it onto the road geometry, it is possible to extend the driver's situational awareness. The fundamental functions for a V2V safety system would consist of a wireless communications dynamic network, a digital map and a localisation system.
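The stop-line reasoning above can be sketched with simple constant-deceleration kinematics. This is an illustrative sketch, not SAFESPOT code; the braking limit `a_max` and the function names are assumptions.

```python
# Hypothetical sketch: deciding whether an approaching vehicle can stop at
# the stop line, given V2V-shared state and map data. A vehicle broadcasting
# speed v (m/s) at distance d (m) from the stop line needs a deceleration of
# v^2 / (2*d); if that exceeds its assumed braking capability a_max, the
# subject vehicle's application raises a warning.

def required_deceleration(speed_mps: float, distance_m: float) -> float:
    """Constant-deceleration stop: v^2 = 2*a*d  =>  a = v^2 / (2*d)."""
    if distance_m <= 0:
        return float("inf")  # already at or past the stop line
    return speed_mps ** 2 / (2 * distance_m)

def is_intrusion_risk(speed_mps: float, distance_m: float,
                      a_max: float = 6.0) -> bool:
    """Flag the vehicle as a risk if it cannot brake in time (a_max assumed)."""
    return required_deceleration(speed_mps, distance_m) > a_max

# 20 m/s (72 km/h) at 25 m needs 8 m/s^2 > 6 m/s^2 -> warn the driver.
print(is_intrusion_risk(20.0, 25.0))   # True
print(is_intrusion_risk(20.0, 50.0))   # only 4 m/s^2 needed: False
```

Such a check would run against every broadcasting vehicle whose map-matched trajectory crosses the SV's path.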
2010
[ "info" ]
[ "133641", "133641", "44462", "133641", "133642" ]
[ "883149" ]
00539237
Figure 1 shows the statistics associated with accidents in the Europe of 27. In the Europe of 27 (2004), 43% of injury-related accidents occurred at intersections. Out of the overall number of fatalities, 21% occurred at intersections, with 34% of the seriously injured [START_REF]CARE, IRF, IRTAD, TRACE, and National Statistics Databanks[END_REF]. The design of a system that is to address intersection safety needs to identify the context in which these accidents occur. This should give indicators on the type of intersections where accidents occur, the type of vehicles involved, the time of the day, the age distribution of the drivers involved, weather conditions, etc., information that is used as input to the V2V system that is to be designed. As the ESV application is to be extended to intersection safety involving all types of vehicles, the application considers the statistical results as part of the design. For example, 80% of intersection accidents occur in rural areas, representing a low percentage of fatalities; by contrast, fatalities inside urban areas represent 42%. The road structure and geometry are another source of information. Table 1 summarises the context within which accidents occur at road intersections.
2010
[ "info" ]
[ "133641", "133641", "44462", "133642" ]
[ "883149" ]
00790107
It casts camera optimization problems mostly conducted in 6D into searches inside a 2D space on a manifold surface. Interestingly, our model can be easily extended to integrate most of the classical visual properties employed in the literature [START_REF] Ranon R | Accurately Measuring the Satisfaction of Visual Properties in Virtual Camera Control[END_REF]. For example, the size of key subjects (or distance to camera) can be expressed as the set of viewpoints on the manifold which are at a given distance from the camera (which resolves as 0, 1 or 2 lines on the manifold surface). In a similar way, vantage angle properties (e.g. see the front of a subject) represent sub-regions of the manifold. By reducing the search space to a manifold where the on-screen locations of subjects are exact, we obviously restrict the generality of the technique. However, the benefits in terms of computational cost greatly favor our approach. Though the solution for two subjects appears easy to formulate with vector algebra, it has not been reported before, and the model serves as an expressive basis on which to build more evolved techniques. The techniques presented in the paper have the potential to replace most of the previous formulations related to camera control with a simpler and more efficient approach, and open great possibilities to include more evolved on-screen composition techniques in a large range of applications in computer graphics. Figure 1: Heatmap representing the quality of on-screen composition for two subjects (white points) for a region of camera configurations (top view of the 3D scene).
2012
[ "info" ]
[ "155296", "155296" ]
[ "3671", "853625" ]
01457232
The conclusions of the study are robust with respect to the weak uncertainties on the characterisations. CONCLUSION This study allows us to establish an environmental hierarchy between recycling solutions for aluminium cables. Whatever the electricity mix used by the recycling plant, the MTB mechanical recycling process is the most environmentally friendly. Additionally, the LCA was conducted in order to help the company highlight environmental hotspots of the system and design new solutions to decrease the environmental impact of the aluminium produced [START_REF] Grimaud | Reducing Environmental Impacts of Aluminium Recycling Process Using Life Cycle Assessment[END_REF]. On the one hand, the study demonstrates huge environmental benefits of recycled aluminium in comparison with primary aluminium. On the other hand, the results show the harmful environmental influence of melting refining in comparison with the mechanical recycling process. The LCA revealed that the closed product loop option (considering aluminium cables) has a lower environmental impact than the other recycling scenario using mixed aluminium scraps. This performance has already been demonstrated for aluminium cans [START_REF] Niero | Circular economy : to be or not to be in a closed product loop ? A Life Cycle Assessment of aluminium cans with inclusion of alloying elements[END_REF]. To conclude, recycling, when driven without loss of quality, is a relevant alternative to mining.
2016
[ "spi", "sde" ]
[ "164351", "483036", "483040", "483036", "164351", "164351" ]
[ "1723", "177001" ]
01461568
Fig. 3 presents aluminium recycling as modelled in the Ecoinvent dataset. The modelling is divided into 5 steps: 4 mechanical separation steps (in red on the figure) and 1 thermal step (in blue in Fig. 3). Scenario 3: MTB Cables Recycling Fig. 4 shows all the steps taken into account in the modelling of scenario 3. For this scenario, the transport distance taken into account is 540 km for old scraps and 510 km for new scraps from various cable manufacturers. The intrinsic aluminium quality reaches at least 99.6% of aluminium purity (average quality checks during the period 2012-2014). An intensive inventory analysis was developed during an internal survey conducted in collaboration with the EVEA consulting firm at the MTB Recycling plant during autumn 2014. Foreground data are based on measurements and on stakeholder interviews. Background data come from Ecoinvent 3.1 or relevant literature.
2016
[ "spi", "sde" ]
[ "164351", "164351", "164351" ]
[ "177001", "1723" ]
04184768
This wetland system is designed to reduce the forecasted water shortage in Melbourne by providing appropriate quality water for substitution of potable water for non-drinking purposes, such as toilet flushing, laundering, and gardening. RTM&C System The fundamental architecture of the RTM&C system consists of sensors, actuators, communication devices, and a web server. The water level and water quality sensors (RTM on-site hardware) are already installed at key points of the wetland to provide important information on the system's health and capacity in real time. These key points include the sedimentation pond, the inlet and outlet of the main wetland, and the stormwater harvesting pond. The actuators (RTC on-site hardware) have been installed at the control points to adjust the valve position, and hence to flexibly hold and release water into and out of the wetland. The cabinet setup with multiple layers of power protection is shown in Figure 1(b). The control points include the inflow and outflow pipes of the main wetland, as well as the underground baseflow bypass between the harvesting pond and the downstream creek. The communication devices are essential to connect the on-site hardware to the centralised web server. The web server is the "brain" of the entire system; it consists of a centralised database to collect and store real-time data from the field and weather forecasts from the weather agency (i.e., BOM in Australia). The collected data is used as input to the decision-making process of the RTC strategies, resulting in a valve position target, which is then sent to the actuators for execution.
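The decision step of such a loop (sensor readings plus a rain forecast mapped to a valve position target) could be sketched as below. This is a hypothetical illustration; the thresholds, return values, and function name are assumptions, not the actual web-server logic.

```python
# Hypothetical sketch of an RTC decision step: current water level and the
# forecast rainfall are mapped to a valve opening target in [0, 1], which
# would then be sent to the actuators for execution.

def decide_valve_position(level_m: float, forecast_rain_mm: float,
                          max_level_m: float = 1.5) -> float:
    """Pre-release before heavy forecast rain, release fully near capacity,
    otherwise hold water for harvesting and treatment (thresholds assumed)."""
    if level_m >= max_level_m:
        return 1.0       # near capacity: open fully
    if forecast_rain_mm > 20.0:
        return 0.5       # heavy rain forecast: pre-release to create storage
    return 0.0           # hold water

print(decide_valve_position(1.6, 0.0))    # 1.0
print(decide_valve_position(1.0, 30.0))   # 0.5
print(decide_valve_position(1.0, 2.0))    # 0.0
```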
2023
[ "spi", "sde" ]
[ "500277", "1081708", "306322" ]
[ "981707", "1125292" ]
02570676
In addition, as described in [START_REF]Privacy and Security Risk Evaluation of Digital Proximity Tracing Systems -The DP-3T Project -21[END_REF]: "In decentralized systems in which infected people share their identifier, there is an easier way for an attacker to learn, when she was in close proximity to an infected person, without creating multiple accounts. The attacker can simply match the set of infected identifiers against each of her recorded Bluetooth identifiers to determine when she was in contact with an infected person and use this information to reveal the identity of the infected." Therefore, an adversary is able to identify all diagnosed users he has been close to during a time window corresponding to a period of contagiousness. The sharing or publication of this information can lead to the stigmatization and harassment of all diagnosed users. β€’ In the centralized approach, in contrast, when the user is notified that she was in close proximity to an infected person, this user only knows that at least one encountered person has been diagnosed. Although a user is able to re-identify the infected individual if she has met only one person, this re-identification task is much harder otherwise. For example, one way to carry out this attack would be to create an instance of the application (registered on the server) for each encountered person, which is much more costly to deploy. Therefore, risk IR 1 "Identify infected individuals" has a very large scalability in the decentralized approach. To make a clear distinction between the scalability of this attack in the two approaches, we revisit the definition of this risk in the proposed taxonomy by using the following definition: β€’ IR 1-1: Identify all infected individuals among encounters, when the adversary is able to find diagnosed users among all persons he has encountered during a period corresponding to a contagious period. 
The attacker proceeds by collecting pseudonyms of each person encountered, and then correlating this list of pseudonyms with the list of infected users' pseudonyms published by the authority to determine when she was in contact with an infected person and use this information to reveal the identity of the infected.
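The linkage attack just described is a simple set intersection over the attacker's Bluetooth log. The following minimal sketch (illustrative only, not any deployed app's code; pseudonyms and timestamps are made up) shows why it scales so easily in the decentralized design.

```python
# Sketch of the attack: the attacker keeps a log of (pseudonym, time) pairs
# heard over Bluetooth, then intersects it with the published list of
# infected users' pseudonyms to learn *when* each contact with an infected
# person happened, which may suffice to re-identify that person.

def link_infected(recorded_log, published_infected):
    """recorded_log: list of (pseudonym, timestamp) pairs heard by the
    attacker; published_infected: pseudonyms published by the authority."""
    infected = set(published_infected)
    return [(pid, t) for (pid, t) in recorded_log if pid in infected]

log = [("a1f3", "09:02"), ("77c0", "09:10"), ("a1f3", "18:45")]
hits = link_infected(log, {"a1f3"})
print(hits)  # [('a1f3', '09:02'), ('a1f3', '18:45')] -> times of exposure
```

A single passive device suffices; no extra accounts are needed, in contrast with the centralized case described above.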
2020
[ "info" ]
[ "206120", "206120", "206120", "206120", "206120", "206120", "206120" ]
[ "6722", "908", "868662", "5208", "833548", "170349", "552" ]
02611265
The protocol works as follows: β€’ When Bernard goes to Germany, his App broadcasts, at each epoch j, HELLO_FR,j messages as defined in Section 4. β€’ When Bernard meets a German user, let's say Uta, at epoch i: -Uta stores the (HELLO_FR,i, time) pair in her LocalProximityList. -Bernard stores the (HELLO_DE,i, time') pair in his LocalProximityList (where HELLO_DE,i is the HELLO message broadcast by Uta at epoch i). β€’ If Uta is later tested and diagnosed COVID-positive: -Uta uploads her LocalProximityList to the German back-end server. -The German back-end server obtains the (HELLO_FR,i, time) pair and processes it as follows: * It parses HELLO_FR,i to retrieve ecc_FR (8 bits), ebid_X (64 bits), time_X (16 bits) and mac_X (40 bits). * It decrypts ecc_FR, using K_G, to recover the message country code, CC_FR. Since CC_FR is the country code for France, the (HELLO_FR,i, time) pair is forwarded to the French back-end server. * The French server processes it as described in Section 6. β€’ Similarly, if Bernard is later tested and diagnosed COVID-positive in France: -Bernard uploads his LocalProximityList to the French back-end server. -The French back-end server obtains the (HELLO_DE,i, time') pair and processes it as follows: * It parses HELLO_DE,i to retrieve ecc_DE (8 bits), ebid_X (64 bits), time_X (16 bits) and mac_X (40 bits).
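The field sizes given above (8 + 64 + 16 + 40 bits) make the HELLO message exactly 16 bytes, so parsing it is a fixed-offset slice. A minimal sketch, assuming a big-endian byte layout in field order (the layout is an assumption for illustration; field names follow the text):

```python
# Sketch of parsing the 128-bit HELLO message: ecc (8 bits), ebid (64 bits),
# time (16 bits), mac (40 bits). The concatenation order and endianness are
# assumptions, not the protocol's normative encoding.

def parse_hello(msg: bytes):
    assert len(msg) == 16, "8 + 64 + 16 + 40 bits = 16 bytes"
    ecc  = msg[0]                              # encrypted country code
    ebid = msg[1:9]                            # ephemeral Bluetooth identifier
    t    = int.from_bytes(msg[9:11], "big")    # 16-bit truncated timestamp
    mac  = msg[11:16]                          # 40-bit truncated MAC
    return ecc, ebid, t, mac

msg = bytes([0x33]) + b"\x01" * 8 + (0x1234).to_bytes(2, "big") + b"\xaa" * 5
ecc, ebid, t, mac = parse_hello(msg)
print(ecc, t, len(ebid), len(mac))  # 51 4660 8 5
```

The back-end server would then decrypt `ecc` with its shared key to decide which national server should receive the pair.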
2020
[ "info" ]
[ "206120", "206120", "206120", "206120", "206120", "206120", "206120" ]
[ "868662", "908", "6722", "5208", "833548", "170349", "552" ]
01599665
To carry out a real conversation the process cannot take too long. Therefore "How fast can one type with the eyes?" is as interesting and important a research question as "How fast can one type with the typewriter?" was for decades. Previous work Results from past experiments with text entry by gaze have been collected in Tables 1 (experiments with a soft keyboard) and 2 (experiments with other techniques). We have included only longitudinal experiments where the participants came back to the lab on several days and thus had a chance to improve their performance through experience. The data in the tables is teased out from the publications. For [START_REF] Wobbrock | Longitudinal Evaluation of Discrete Consecutive Gaze Gestures for Text Entry[END_REF] the exact numbers were not reported and are therefore estimated from the graphs in the paper. The same holds for the MSD rate in [START_REF] Pedrosa | Filteryedping: Design Challenges and User Performance of Dwell-Free Eye Typing[END_REF]. Several papers reported on more than one study. For [START_REF] RΓ€ihΓ€ | An Exploratory Study of Eye Typing Fundamentals: Dwell Time, Text Entry Rate, Errors, and Workload[END_REF], there were results for the learning phase (denoted by lp in Table 1) and the advanced phase (denoted by ap).
2015
[ "info" ]
[ "301029" ]
[ "1017253" ]
02348785
This means that the coverage zone of BPSK 1/2 is much smaller compared to the other modulations. In this way, the ratio of neighbors communicating through low-throughput links decreases significantly. In the pedestrian case (Figure 9), the contact duration distribution shows a more spread-out behavior than Luxembourg's. This is due to the shorter relative speeds between human beings compared with the vehicular case, which leads to longer contact durations. As such, this yields a more spread-out distribution for the free-space and two-ray models and a completely different shape for the log-distance model. In fact, for the log-distance model, the highest contact duration probability is found between 25 and 30 seconds. The distribution of contact capacities in the Stockholm scenario, depicted in Figure 9b, has several similarities with the plots of the Luxembourg case -the fixed-rate plots follow the same shape as the distribution of contact duration, while the adaptive plot looks like that of a decreasing exponential, except for the log-distance model. The unusual shape of the contact capacity distribution for the log-distance model with a step-wise linear adjusted modulation scheme is explained by carefully observing the behavior of the nodes, which act as pedestrians. Indeed, due to the much shorter communication range in the case of log-distance, there are only two sorts of contacts. Either the contact is very short, leading to poor capacity, or the contact is long with little distance between the two nodes.
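The relation between contact duration, distance and step-wise rate adaptation can be sketched as below. The distance thresholds and rates are assumptions chosen for illustration, not the paper's calibrated values.

```python
# Illustrative sketch: the capacity of a contact is the sum of per-slot
# throughput, where a step-wise rate adaptation picks the modulation rate
# from the current inter-node distance (thresholds and rates assumed).

RATE_STEPS = [(50, 27.0), (100, 12.0), (150, 6.0)]  # (max distance m, Mbit/s)

def adaptive_rate(distance_m: float) -> float:
    for max_d, rate in RATE_STEPS:
        if distance_m <= max_d:
            return rate
    return 0.0  # out of communication range

def contact_capacity(distances, slot_s: float = 1.0) -> float:
    """Mbit transferred over one contact, sampled once per slot."""
    return sum(adaptive_rate(d) * slot_s for d in distances)

# A pedestrian contact: nodes approach, pass close by, then move away.
print(contact_capacity([120, 80, 40, 40, 80, 120]))  # 6+12+27+27+12+6 = 90.0
```

A fixed-rate scheme would instead multiply one rate by the whole contact duration, which is why its capacity distribution mirrors the duration distribution.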
2019
[ "info" ]
[ "251992", "541705", "251992", "54302", "541705" ]
[ "955086", "8297" ]
01321387
The key point for an operator is to design a global strategy to select which nodes act as seeders and which ones as leechers, in order to reduce the total dissemination cost. We formulate this question as a stochastic control problem that we solve using an application of Pontryagin's Maximum Principle. We provide a mathematical framework to devise the optimal strategy for opportunistic offloading under a generic cost model. First, we show that an optimal solution exists; then, from this policy, we extract some insights to develop heuristics. Finally, we discuss the advantages of the proposed model compared to the classic seeder-only model. We demonstrate that separating seeders/leechers leads to better incentive strategies in the most demanding cases of content with a large span of delivery delays. I. INTRODUCTION Device-to-device (D2D) communications are a well-timed strategy for operators to face the ever-increasing mobile data demand by offloading part of the traffic from their cellular infrastructure. Motivated by the delay-tolerance and redundancy of some types of content, operators may send data only to a subset of requesting users (seeders), which act as opportunistic forwarders to help propagate content using D2D communications. The combination of two complementary channels (cellular and D2D) provides extra capacity, helping reduce the impact of redundant traffic.
2016
[ "info" ]
[ "251992", "389034", "54302" ]
[ "8297", "955086" ]
01273153
DVFS can make a significant difference in both performance and energy consumption. Although not available on the manycore processors we evaluated, it is available for the Xeon E5 and GPU platforms. Therefore, for these platforms we always show two measurements. The first ones, Xeon E5 (2.4 GHz) and Tesla K20 (758 MHz), represent the experimental results when their frequencies are optimized for performance, i.e., using their maximum working frequencies. The second ones, Xeon E5 (1.6 GHz) and Tesla K20 (705 MHz), are relative to the optimal energy consumption setting, which for this kernel was 1.6 GHz and 705 MHz on Xeon E5 and Tesla K20, respectively. Figure 5 compares the time-to-solution and energy-to-solution across the processors using a problem size of 2 GB (180^3 grid points) and 500 time steps. For these experiments we used the optimal number of threads on each platform. With the exception of Xeon Phi (on which the best results were obtained with 224 threads), the thread count was equal to the number of physical cores of each processor. As shown in Figure 4, our solution for Xeon Phi keeps scaling considerably well past the 57 physical cores. To the best of our knowledge, GPUs are among the most energy efficient platforms currently in use for seismic wave propagation simulation. Yet, our proposed solution on MPPA-256 achieves the best energy-to-solution among the analyzed processors, consuming 78%, 77%, 88%, 87% and 86% less energy than Tesla K20 (758 MHz), Tesla K20 (705 MHz), Xeon Phi, Xeon E5 (2.4 GHz), and Xeon E5 (1.6 GHz), respectively.
2016
[ "info" ]
[ "119004", "118356", "18404", "18404", "43688", "1042443" ]
[ "914915", "6046" ]
02378951
The function of each node and the overall relationship of this thesis are: β€’ VALIDATE: executes the validation process on the platform to check if the input settings are correct. Note that before starting a DSE process, some parameters need to be set up by the designer (these parameters will be mentioned in the next Chapter). This button evaluates whether these input parameters are valid or not. If there is no error, the color of this button turns purple with a "VALIDATED" status as in Figure 3.11. If not, a notification box describes the problem/cause of the error, and the button color is red with an "INVALIDATED" status. β€’ GENERATE: executes the generation process. Based on the platform model, this process translates the model into a programming language and includes it in a DSE process. The programming language we use for the core of the DSE process is Python. This process ends with a notification box and this button turns purple with a "GENERATED" status (Figure 3.11).
2019
[ "spi" ]
[ "185974", "255534" ]
[ "13594" ]
01533664
Construction of a continuous viscosity solution to a truncated equation. Let $m \geq 1$; we first truncate the initial data as we did for $f_m$ in the proof of Theorem 2.1 by considering $$u_{0m}(x) = \min\{u_0(x) + \tfrac{1}{m}\varphi(x),\, m\}. \quad (40)$$ Since $u_0 \in E_\mu(\mathbb{R}^N)$, we get $$|u_{0m}(x)| \leq C_m, \quad (41)$$ $$|u_{0m}(x) - u_{0m}(y)| \leq L_m |x - y|. \quad (42)$$ Moreover, $u_{0m}$ still satisfies [START_REF] Ciomaga | On the strong maximum principle for second-order nonlinear parabolic integro-differential equations[END_REF] with the constant $C_0 + \mu$ and $u_{0m} \to u_0$ locally uniformly in $\mathbb{R}^N$. We then introduce the truncated evolution problem (2) with $H_{mn}$ (respectively $f_m$) defined by (33) (respectively (31)) for $m, n \geq 1$ and with the initial data defined by (40). The classical comparison principle (see Theorem 4.2) holds for bounded discontinuous viscosity sub- and supersolutions of $$u_t - F(x, [u]) + \langle b(x), Du \rangle + H_{mn}(x, Du) = f_m(x) \quad \text{in } Q_T, \quad (43)$$ with the initial data $u_{mn}(x, 0) = u_{0m}(x)$. Notice that $u^{\pm}_{mn}(x, t) = \pm(C_m + (C_m + C_H)t)$ are respectively a super- and a subsolution of (43) satisfying the initial conditions $u^{-}_{mn}(x, 0) = -C_m \leq u_{0m}(x) \leq C_m = u^{+}_{mn}(x, 0)$. Then by means of Perron's method, we obtain the existence and uniqueness of a bounded continuous viscosity solution $u_{mn}$ of (43) such that $|u_{mn}| \leq C_{m,T}$ independent of $n$. We refer to classical references [START_REF] Crandall | User's guide to viscosity solutions of second order partial differential equations[END_REF] for the details. 2. Convergence of the solution of the truncated equation to a continuous solution of (2).
2019
[ "math" ]
[ "75" ]
[ "13594" ]
00374703
Before the implementation of the control law (25) on the electropneumatic system, co-simulation was used. This technique consists in jointly using the software developed by the researchers for modeling and the software dedicated to system control. Thus, the physical model of the electropneumatic system (1) was treated by AMESim, and the control law (25) was developed in Simulink. Satisfactory simulation results were obtained. Then, the control law was implemented using a dSPACE 1104 controller board with its dedicated digital signal processor. The measured signals, all analog, were run through signal conditioning before being read by the 16-bit analog/digital converter. Two pressure sensors are used; their precision is equal to 700 Pa (0.1% of the measurement range) and their combined non-linearity and hysteresis is equal to Β±0.1% of the measurement range. The cylinder velocity is determined by analog differentiation and low-pass filtering of the position signal given by an analog potentiometer (its precision and repeatability are equal to 10 Β΅m and its linearity is 0.05% of the measurement range). The acceleration information is obtained by differentiating the velocity numerically. In order to ensure the system convergence, the gains must be positive.
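The differentiation-plus-filtering signal chain described above can be sketched as follows. The sample rate, filter time constant, and function names are assumptions for illustration, not the authors' implementation (where the first differentiation and filter are analog).

```python
# Sketch of the signal chain: velocity from the potentiometer position via
# differentiation plus first-order low-pass filtering, then acceleration by
# numerical differentiation of the filtered velocity.

def lowpass(x, dt, tau):
    """First-order low-pass: y[k] = y[k-1] + dt/(tau+dt) * (x[k] - y[k-1])."""
    y, a = [x[0]], dt / (tau + dt)
    for xi in x[1:]:
        y.append(y[-1] + a * (xi - y[-1]))
    return y

def differentiate(x, dt):
    """Backward difference; first sample duplicated to keep the length."""
    d = [(b - a) / dt for a, b in zip(x, x[1:])]
    return [d[0]] + d

dt = 0.001                                    # 1 kHz sampling (assumed)
pos = [0.001 * k for k in range(5)]           # position ramp at 1 m/s
vel = lowpass(differentiate(pos, dt), dt, tau=0.01)
acc = differentiate(vel, dt)                  # zero for a constant-speed ramp
print(round(vel[-1], 3))                      # 1.0
```

In practice the filter trades noise amplification from the differentiation against phase lag, which is why differentiating twice from position to acceleration is avoided.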
2008
[ "spi" ]
[ "31070", "31070", "31070", "31070" ]
[ "859512", "171914", "8048", "839367" ]
01470317
We also compared the financial policies: investment policy, capital structure and payout policies. Table 4 presents the results of these tests. Table 3 shows that the financial performance of firms where the government is a shareholder is not significantly different from that of private firms. The public firms' return on equity, return on sales and stock price performance are smaller, but not significantly so. This similar performance is partly explained by the fact that private and public financial policies are quite close (Table 4). Indeed, payout policies, debt structure and the ratio of investment expenditures to assets are comparable. We only observe significant differences in return on assets (ROA), Tobin's Q and productivity of employees. We also notice greater investment expenditures relative to sales in public firms, and a different level of debt. However, we think the differences in debt and Tobin's Q are not significant: -The difference in the level of debt is only significant when comparing with the second sample, and it concerns short-term debt, as the long-term debt ratio is similar. So we assume that the capital structure does not differ significantly between public and private firms. -Moreover, differences in Tobin's Q only appear when comparing public firms to the first sample.
2009
[ "shs", "qfin" ]
[ "108098" ]
[ "977041" ]
02057673
The puzzle has become more complex with the growing importance of share repurchases. This paper surveys research on payout policies, focusing on the firm's point of view. It sets up some answers to why firms pay out. We will not focus on explanations of share price reactions or investment strategies linked to payout policies. We will only focus on what happens inside the firm. To follow this path, we have chosen to classify concepts depending on who takes the decision in the firm: shareholders or managers. If there is no agency conflict, decisions will be in accordance with shareholders' wishes. Otherwise, managers will influence payout policies. Of course, shareholders' wishes are not homogeneous and some shareholders may be more influential than others. In French firms, the major agency conflict is not between managers and shareholders, but between controlling shareholders and minority shareholders.
2006
[ "shs" ]
[ "108098", "172354" ]
[ "977041" ]
03468357
For convenience, we group these control variables into two. The first group covers domestic real economic factors such as growth, income level, economic structure, and investment or saving rate. The second group includes monetary variables, financial structure and global factors. These control variables are first individually considered and then combined in our estimation. The choice of these two groups of economic factors is mainly motivated by the conventional literature, and the main purpose is for them to serve as control variables for the robustness testing of our core findings. Some of the coefficients of these control variables may offer more intuitively expected signs, while others can be ambiguous in theory. For instance, a higher income level, as measured by per capita PPP, should lift corporate leverage, as it is typically associated with a deeper financial system and greater repayment and servicing capacities. Also, higher economic growth should directly help ease the corporate debt/GDP ratio, other things being equal. Its coefficient therefore should be expected to be negative. In addition, a higher investment rate may suggest a greater need for external financing, hence we may expect a positive coefficient.
2019
[ "shs", "qfin" ]
[ "1082618", "1063734" ]
[ "1119489", "749495" ]
03066218
As a consequence, its width can be changed at will, thus speeding up or slowing down the erase process. The triangular barrier also allows for much lower electric fields during the erase process, thus increasing the durability of the device. Finally, QDs in III-V semiconductors can be manufactured to store holes instead of electrons by employing the type II band alignment. Since holes have a larger effective mass than electrons, their storage time is much longer for the same localization energy. III. QD-FLASH CELLs A QD flash cell is a modulation-doped field effect transistor (MODFET) in which a layer of QDs has been embedded between the 2-dimensional hole gas (2DHG) and the gate. Fully functioning prototypes were manufactured already 8 years ago using InAs QDs embedded in GaAs or Al0.9Ga0.1As [6], thus demonstrating the feasibility of the QD-Flash concept. The charge state of the QDs is controlled by the gate voltage and readout is done using the 2DHG. The structure of the QD-Flash is sketched in Fig. 1. Fig. 1: Sketch of a QD-Flash. The logic state "0" is realized in a QD-Flash when the QDs are not occupied by holes. Conversely, the logic state "1" is realized when holes are localized in the QDs.
2019
[ "phys" ]
[ "86624", "559375", "1296", "1067467" ]
[ "741251" ]
04187851
The detection angle corresponds to the angle between the surface and the direction of the photoelectrons and was varied between 25Β° (close to grazing incidence, surface sensitive) and 75Β° (nearly normal incidence, bulk sensitive). All the spectra presented here were obtained for a 45Β° angle. The surface morphology was analyzed with a Zeiss Supra 55 scanning electron microscope (SEM) using an InLens detector, i.e. by measuring backscattered electrons along the beam direction. The incident electron beam was set to 1 keV for better surface sensitivity. In the case of 2D-BN, the InLens intensity increases with the 2D-BN thickness [START_REF] Sutter | Thickness determination of few-layer hexagonal boron nitride films by scanning electron microscopy and Auger electron spectroscopy[END_REF]. Raman spectra were measured using a Horiba Scientific LabRAM HR confocal spectrometer with a 473 nm laser spot (power ~10 mW). The beam was focused to a size smaller than 1 Β΅m by a 100x objective. Results and discussion When growing 2D-BN by PAMBE, the flux of injected species reaching the sample surface plays a key role in the film stoichiometry, thickness and morphology. The impact of the atomic boron flux was first analyzed by varying the B-cell temperature T_B from 1700 to 1850Β°C, resulting in an estimated B flux ratio of 11 [START_REF] Paule | A Langmuir determination of the sublimation pressure of boron[END_REF] between the highest and lowest flux (see Table 2). The N2 plasma cell between the metallic coupled 2D-BN layer at the interface and the uncoupled components for films thicker than one monolayer, consistent with the already reported values [START_REF] Preobrajenski | Ni 3d-BN Ο€ hybridization at the h-BN / Ni(111) interface observed with core-level spectroscopies[END_REF].
2023
[ "spi", "phys" ]
[ "1066983", "1067464", "1066983" ]
[ "741251" ]
03727283
If we assume that the gas transfer occurs only through the thin films (border-blocking assumption), we can thus rewrite equation (6.6) as follows: $\frac{dA}{dt} = \beta \frac{h}{d} = \beta \left(1 - \frac{2 r_{PB}}{d}\right) = \beta \left(1 - 2\sqrt{\frac{\varepsilon}{\alpha d}}\, A^{1/4}\right)$ (6.9), where we corrected the ideal area growth rate Ξ² by the actual portion of film available for gas transfer. We remark however that equation (6.9) considers only the film reduction due to the surface Plateau borders, while it neglects the further area reduction due to the presence of the vertical ones at each bubble vertex. We can now think of introducing a critical bubble area $A_c$ at which the two surface Plateau borders merge and thus the thin vertical films vanish. This happens when $r_{PB} = d/2$, thus from equation (6.7) we obtain: $A_c = \frac{\alpha^2 d^2}{16 \varepsilon^2}$ (6.10), and introducing this relation in equation (6.9) we can write: $\frac{dA}{dt} = \beta \left[1 - \left(\frac{A}{A_c}\right)^{1/4}\right]$ (6.11). As we can see from this relation, once the foam reaches the critical area $A_c$ the coarsening rate goes to zero, thus leading to an unphysical arrest of coarsening due to the film disappearance. In the proximity of $A_c$ one should thus plug the gas transfer through the Plateau borders back in, in order to describe the coarsening rate correctly. In the samples that we are considering in this section, the liquid fraction is Ξ΅ = 10% and the gap is d = 2 mm; from (6.10) we thus obtain a critical area of roughly 58 mmΒ², as the geometrical prefactor Ξ± can be calculated to be approximately 1.52 for a rather dry foam [START_REF] Gay | Rapid Plateau border size variations expected in three simple experiments on 2D liquid foams[END_REF]. From the mean area evolution shown in figure 6.14 (a) we see that the foam samples do not reach this critical area; we are thus far away from the film vanishing.
To simplify the notation, we can make equation (6.11) dimensionless by introducing a dimensionless time $\bar{t} = \beta t / A_c$ and a dimensionless area $\bar{A} = A / A_c$, so that it becomes: $\frac{d\bar{A}}{d\bar{t}} = 1 - \bar{A}^{1/4}$ (6.12). We can solve this differential equation under the initial condition $\bar{A}(\bar{t}=0) = \bar{A}_0$, obtaining the following solution: $\bar{t} = \int_{\bar{A}_0}^{\bar{A}} \frac{d\bar{A}'}{1 - \bar{A}'^{1/4}} = 4\left(\bar{A}_0^{1/4} - \bar{A}^{1/4}\right) + 2\left(\bar{A}_0^{1/2} - \bar{A}^{1/2}\right) + \frac{4}{3}\left(\bar{A}_0^{3/4} - \bar{A}^{3/4}\right) + 4 \ln \frac{1 - \bar{A}_0^{1/4}}{1 - \bar{A}^{1/4}}$ (6.13). This gives an implicit relation $\bar{t}(\bar{A})$ that we can compare with our experimental results. From our experiments we estimate the value $A_0$ by fitting A(t) with a power law function, and we consider Ξ² to be roughly 3.6Γ—10⁻⁴ mmΒ²/s, as the effective diffusion coefficient estimated in quasi-2D drained foams made of the same Fairy solution [START_REF] Guidolin | Controlling foam ageing in viscoelastic media[END_REF]. In figure 6.15 we compare the experimental curve $\bar{A}(\bar{t})$ with the one predicted by equation (6.13).
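As a sanity check on this derivation, the implicit solution (6.13) can be compared against a direct numerical integration of (6.12). The sketch below (illustrative, not the authors' analysis code) does this with a plain forward-Euler step; the initial and final areas are arbitrary test values.

```python
# Verify the implicit solution (6.13) of the dimensionless coarsening law
# (6.12), dAbar/dtbar = 1 - Abar**(1/4), against forward-Euler integration.
import math

def tbar_of_abar(abar, abar0):
    """Implicit solution (6.13): dimensionless time to coarsen from abar0 to abar."""
    return (4 * (abar0**0.25 - abar**0.25)
            + 2 * (abar0**0.5 - abar**0.5)
            + (4 / 3) * (abar0**0.75 - abar**0.75)
            + 4 * math.log((1 - abar0**0.25) / (1 - abar**0.25)))

def euler_abar(abar0, tbar_end, steps=200_000):
    """Integrate (6.12) forward in dimensionless time from abar0 over tbar_end."""
    a, dt = abar0, tbar_end / steps
    for _ in range(steps):
        a += dt * (1 - a**0.25)
    return a

t = tbar_of_abar(0.2, 0.01)        # time to grow from 1% to 20% of A_c
print(t, euler_abar(0.01, t))      # second value should be close to 0.2
```

The same routine also shows the predicted slow-down: as the area approaches $A_c$ (Δ€ β†’ 1), the right-hand side of (6.12) vanishes and the logarithm in (6.13) diverges.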
2022
[ "phys" ]
[ "1051087" ]
[ "1150810" ]
00664528
AdaBoost considers each curve as a weak classifier and iteratively selects relevant curves to increase the authentication accuracy. We demonstrate these ideas on a subset taken from the FRGC v2 (Face Recognition Grand Challenge) database. The proposed approach increases authentication performance relative to a simple fusion of scores from all curves. Introduction In order to meet the needs of security, a growing international concern, biometrics is presented as a potentially powerful solution. Biometrics aims to use behavioral and/or physiological characteristics of people to recognize them or to verify their identities. In particular, fingerprint- and iris-based systems have shown good performance. However, they require the cooperation of users, who may find them intrusive. Since face recognition is contactless and less restrictive, it emerges as a more attractive and natural biometric for security applications. In the last few years, face recognition using the 3D shape of the face has emerged as a major research trend due to its theoretical robustness to lighting conditions and pose variations. However, the problem remains open on the issue of robustness of these approaches to facial expressions [START_REF] Amor | New experiments on icp-based 3D face recognition and authentication[END_REF].
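A minimal, self-contained sketch of the boosting idea described above (not the authors' implementation): each weak classifier is a threshold on one curve's match score, so the curves AdaBoost keeps are the most discriminative ones. The data model below, a few informative "curves" among noisy ones for genuine/impostor pairs, is entirely hypothetical.

```python
# Discrete AdaBoost over one-feature threshold stumps; the synthetic
# genuine(1)/impostor(0) data and curve count are illustrative assumptions.
import math, random

def adaboost(X, y, rounds=10):
    """Return a list of stumps (feature, threshold, sign, alpha):
    a stump predicts class 1 when sign * (x[feature] - threshold) > 0."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for j in range(d):                       # one stump per candidate curve
            col = [row[j] for row in X]
            for thr in sorted(set(col)):
                for sign in (1, -1):
                    err = sum(wi for xi, yi, wi in zip(col, y, w)
                              if (1 if sign * (xi - thr) > 0 else 0) != yi)
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        model.append((j, thr, sign, alpha))
        for i, row in enumerate(X):              # boost misclassified pairs
            pred = 1 if sign * (row[j] - thr) > 0 else 0
            w[i] *= math.exp(alpha if pred != y[i] else -alpha)
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, row):
    score = sum(alpha * (1 if sign * (row[j] - thr) > 0 else -1)
                for j, thr, sign, alpha in model)
    return 1 if score > 0 else 0

# Hypothetical pair scores: only curves 0-2 carry signal, the rest are noise.
random.seed(0)
y = [random.randint(0, 1) for _ in range(80)]
X = [[random.gauss(1.5 * yi if j < 3 else 0.0, 1.0) for j in range(6)] for yi in y]
model = adaboost(X, y)
acc = sum(predict(model, row) == yi for row, yi in zip(X, y)) / len(y)
print(f"training accuracy: {acc:.2f}, curves used: {sorted({m[0] for m in model})}")
```

Inspecting which feature indices appear in the model is exactly the "curve selection" effect: stumps on the informative curves get large weights, while noise curves are rarely chosen.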
2011
[ "info" ]
[ "111636", "110943", "111636", "144103", "111636", "144103", "81932", "110943" ]
[ "919425", "18887", "170389", "906382" ]
00726088
I. INTRODUCTION Since facial biometrics is natural, contact-free, non-intrusive, and psychologically supported, it has emerged as a popular modality in the biometrics community. Unfortunately, the technology for 2D image-based face recognition still faces difficult challenges, such as pose variations, changes in lighting conditions, occlusions, and facial expressions. Due to the robustness of 3D observations to lighting conditions and pose variations, face recognition using shapes of facial surfaces has become a major research area in the last few years. Many of the state-of-the-art methods have focused on the variability caused by facial deformations, e.g. those due to facial expressions, and have proposed methods that are robust to such shape variations. At the same time, gender classification is emerging as an interesting problem that can be a useful preprocessing step for face recognition. Gender is similar to other soft biometric traits, such as skin color, age, eye color, and so on, used by humans to distinguish their peers. Most existing work on gender classification uses 2D images to extract distinctive facial features like hair density and the inner morphology of the face, but 3D shape has not yet been used extensively for gender classification. Several works in psychology have shown that gender has close relationships with both 2D information and 3D shape [START_REF] Bruce | Sex discrimination: how do we tell the difference between male and female faces? [END_REF] [START_REF] O'toole | Sex classification is better with three-dimensional head structure than with image intensity information[END_REF], which motivates the use of 3D shapes for gender classification.
2012
[ "info" ]
[ "111636", "110943", "111636", "144103", "111636", "144103", "81932", "110943" ]
[ "919425", "18887", "170389", "906382" ]
00166017
In this article, we propose a new protocol to achieve delay guarantees in wireless multihop networks. With this study, we show that it is possible to design an efficient measurement-based admission control protocol for the delay parameter. The proposed protocol, called DEAN (Delay Estimation in Ad Hoc Networks), is based on an a priori estimation of the average end-to-end delay. This estimation is derived from a simple model of IEEE 802.11 nodes and from an accurate evaluation of each link's collision probability. By combining this estimation with accurate admission controls, the estimated delay is guaranteed after a new flow starts. Such guarantees depend mainly on a strong correlation between the estimated delay and the available bandwidth, as well as on an efficient estimation of the available bandwidth. The latter is estimated with the protocol ABE (Available Bandwidth Estimation), which provides an accurate evaluation [START_REF] Sarr | Improving Accuracy in Available Bandwidth Estimation for 802.11-based Ad Hoc Networks[END_REF]. Moreover, our protocol DEAN is not costly in terms of overhead, since it reuses the control packets required by ABE for the estimation of the available bandwidth and thus does not add any overhead. Finally, extensive simulations show that our protocol DEAN is very efficient at providing delay guarantees. The remainder of this paper is organised as follows: Section 2 presents related work.
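The general flavor of such a scheme can be sketched as follows (a hedged illustration, not the actual DEAN algorithm): per-link delay is estimated from the measured collision probability with a simple truncated-geometric retry model, the end-to-end estimate is the sum over the route's links, and a new flow is admitted only when that estimate fits its delay budget. All numeric parameters below are illustrative assumptions.

```python
# Measurement-based admission control sketch in the spirit of the text above.
# ATTEMPT_TIME_S and MAX_RETRIES are illustrative assumptions.
ATTEMPT_TIME_S = 0.002   # assumed mean time per 802.11 transmission attempt (s)
MAX_RETRIES = 7          # 802.11-style retry limit

def link_delay(p, attempt_time=ATTEMPT_TIME_S, retries=MAX_RETRIES):
    """Expected per-link delay: mean attempt count of a geometric law with
    collision probability p, truncated at the retry limit."""
    expected_attempts = (1 - p**retries) / (1 - p)
    return attempt_time * expected_attempts

def admit(route_collision_probs, delay_budget_s):
    """Accept the flow iff the a priori end-to-end delay estimate
    (sum over the route's links) fits the delay budget."""
    estimate = sum(link_delay(p) for p in route_collision_probs)
    return estimate <= delay_budget_s, estimate

ok, est = admit([0.1, 0.2, 0.3], delay_budget_s=0.05)
print(ok, round(est, 4))
```

The key property mirrored here is that admission is decided before the flow starts, from quantities (collision probabilities) that nodes can measure passively.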
2007
[ "info" ]
[ "2372", "35418" ]
[ "833995", "6601" ]
00870689
BCA corrects for the output loss in CGE models but less so in sectoral models. The explanation seems to be that in PE models, a higher output loss is due to a drop in demand for CO2-intensive materials, a loss which is mitigated by BCA. The features of BCA (coverage, level of adjustment, etc.) are of the highest importance for WTO consistency, feasibility, and political acceptability. The purpose of the meta-regression was also to assess their impact on competitiveness and leakage. In the meta-regression, the inclusion of all sectors in the scheme appears to be the most efficient feature to reduce the leakage ratio, followed by the inclusion of export rebates and an adjustment level based on foreign carbon content. Yet one can guess, in the case of a hypothetical BCA implementation, that political and juridical aspects will be more determinant and that only a "light" version (adjustment based on best available technologies, probably without the inclusion of indirect emissions) is likely to see the light of day. Besides, the importance of the coalition size and the abatement target are statistically confirmed and quantified: the smaller the abating coalition and the more stringent the cap, the bigger the leakage ratio. Policy features providing where- and what-flexibility (the possibility of offsets and extension to all greenhouse gases) reduce the leakage ratio. Finally, this meta-analysis confirms the importance of Armington elasticities in the leakage ratio estimation, a result crucial in terms of uncertainty analysis.
2013
[ "shs" ]
[ "135977", "135977" ]
[ "3394", "1130" ]
01137932
For others, a-b means that the parameter takes more than two different values within an article, and that there are b values taken by the parameter in total. Theoretically, the bigger the abatement, the higher the leakage in absolute terms (tons of carbon emissions). As the leakage ratio is the leakage in absolute terms divided by the abatement, and the latter increases as well, there is an indeterminacy about the relationship between the abatement and the leakage ratio. In the meta-regression model, the correlation is positive, but the statistical significance is weak (a p-value below 0.1 is reached only for the no-BCAs sample), which may be attributable to the small variability of this parameter. In Alexeeva-Talebi et al. (2012b) (which was not included in our study because there were no BCAs), the correlation is negative (leakage of 32%, 29% and 27% for Europe abating respectively 10%, 20% and 30% of its emissions). In [START_REF] BΓΆhringer | Alternative designs for tariffs on embodied carbon: A global cost-effectiveness analysis[END_REF] however, the relationship is positive (leakage of 15.3%, 17.9% and 21% for Europe abating respectively 10%, 20% and 30% of its emissions). Concerning the policy parameters, authorizing permit trading (linking) within the coalition is not statistically significant. In the two studies that explicitly change this parameter in the different scenarios [START_REF] Lanzi | Alternative approaches for levelling carbon prices in a world with fragmented carbon markets[END_REF][START_REF] Springmann | A look inwards: carbon tariffs versus internal improvements in emissions-trading systems[END_REF], permit trading diminishes leakage to a small extent. It is therefore the lack of variability between studies that may explain this non-significance (about half of the articles have permit trading in all their scenarios and the other half in none of their scenarios).
Conversely, extending carbon pricing to all GHG sources is statistically significant, especially when BCAs are implemented (decreasing the leakage ratio by 6 percentage points).
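The estimation logic behind such a meta-regression can be sketched in a few lines: regress the leakage ratio on scenario-feature dummies and the abatement level by OLS. The sketch below is purely illustrative: the data are synthetic, the feature names only mirror the discussion above, and the coefficients of the assumed data-generating process are made up.

```python
# Illustrative OLS meta-regression on synthetic scenario data (not the
# paper's dataset); all coefficients in the data-generating process are
# assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
n = 120
bca = rng.integers(0, 2, n)          # BCA implemented in the scenario?
all_ghg = rng.integers(0, 2, n)      # carbon pricing extended to all GHGs?
trading = rng.integers(0, 2, n)      # permit trading within the coalition?
abatement = rng.uniform(0.1, 0.3, n) # abatement target (share of emissions)
# Assumed data-generating process for the synthetic leakage ratio (in %):
leakage = (20 - 8 * bca - 6 * all_ghg - 1 * trading + 30 * abatement
           + rng.normal(0, 1.0, n))

X = np.column_stack([np.ones(n), bca, all_ghg, trading, abatement])
coef, *_ = np.linalg.lstsq(X, leakage, rcond=None)
for name, b in zip(["const", "BCA", "all-GHG", "trading", "abatement"], coef):
    print(f"{name:10s} {b:+.2f}")
```

With enough scenarios, OLS recovers the signs discussed in the text: BCA and all-GHG coverage lower the leakage ratio, while a more stringent abatement target raises it.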
2014
[ "shs" ]
[ "135977", "148117", "441569", "135977" ]
[ "3394", "1130" ]
01778383
Collective experience of forwards gives a clear advantage during phases of collective combat. The art of working together, sharing the action either on offence or defence, is the essence of rugby. The collective investment and shared effort in all forwards' actions is crucial, whether in rucks to keep the ball, synchronisation during line-outs, mauls for placement, or collective push and orientation in scrums. Containing and guiding teammates during scrums starts with a collective link, placement and work throughout the push. This element of the game combines physical skill and a strong complicity, acquired over the years. Collectively adapting to adverse scrums, providing a common effort, direct scrum pressure, and meeting together in a difficult situation require shared knowledge and combined action. This action knowledge is central to forwards' play and is apparently acquired more slowly. This may be why teams winning the World Cup have forwards with a collective experience significantly higher than those which do not win. We show that some factors, like size and experience, might be predictors of success. However, it is probable that there are other factors that explain why only four countries have ever won the Rugby World Cup. Indeed, winning teams in a Rugby World Cup may also owe their victory to their nation's economic, historical, political and technological investment in this sport. [START_REF] Guillaume | Success in developing regions: world records evolution through a geopolitical prism[END_REF] CONCLUSION We show that forwards and backs are becoming heavier from one World Cup to the next.
2012
[ "shs" ]
[ "441096", "415984", "301664", "441096", "415984", "441096", "415984", "303623", "439907", "441096", "415984", "301664" ]
[ "1027328", "19038", "1031020", "1041396" ]
00607757
In these conditions, character displacement is analogous to evolutionary branching, without the need for positive assortative mating to evolve. We determine how other ecological and environmental conditions influence the probability of stable coexistence of the two populations. We finally discuss how the suitable conditions we found for character displacement are likely to be met in natural populations, and in particular in the GalΓ‘pagos finches populations. Models Secondary contact scenario We consider an initial resident (ancestral) population of N 0 individuals, monomorphic with ecological trait u 0 . Due to a simple "quantitative genetics" rule for trait inheritance (see Section 2.2), the population is no longer monomorphic after a few generations. We let the resident population reach its ecological equilibrium, determined by the interaction with its dynamic food resources. We choose an ecological model such that the trait of the population converges under directional selection to a singular point u * [START_REF] Geritz | Evolutionarily singular strategies and the adaptive growth and branching of the evolutionary tree[END_REF] where the mutant invasion gradient vanishes (assuming 0 < u * < 1). Depending on our choice of parameter values, selection becomes either stabilizing or disruptive at this point. In the first case, the singular point is a fitness maximum called a "continuously stable strategy" (CSS): all mutants in a resident population at u * have a negative fitness, so that they cannot invade the resident population. Selection thus keeps the population at u * .
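The stabilizing/disruptive dichotomy at a singular point can be illustrated with the textbook Gaussian competition model of adaptive dynamics (an illustrative stand-in, not this paper's ecological model): the sign of the second derivative of the invasion fitness at the singular point decides between a CSS and evolutionary branching. The kernel widths sigma_a and sigma_k below are assumed parameters.

```python
# Adaptive-dynamics sketch: classify the singular point u* = 0 of the
# classic Lotka-Volterra model with Gaussian competition (width sigma_a)
# and Gaussian carrying capacity (width sigma_k).
import math

def invasion_fitness(v, u, sigma_a, sigma_k):
    """s(v, u) = 1 - alpha(v - u) * K(u) / K(v), Gaussian kernels."""
    competition = math.exp(-((v - u) ** 2) / (2 * sigma_a**2))
    k_ratio = math.exp((v**2 - u**2) / (2 * sigma_k**2))   # K(u)/K(v)
    return 1.0 - competition * k_ratio

def classify_singular_point(sigma_a, sigma_k, h=1e-4):
    """Sign of d^2 s / dv^2 at v = u = 0: positive means disruptive
    selection (branching), negative means a fitness maximum (CSS)."""
    f = lambda v: invasion_fitness(v, 0.0, sigma_a, sigma_k)
    s2 = (f(h) - 2.0 * f(0.0) + f(-h)) / h**2
    return "branching" if s2 > 0 else "CSS"

print(classify_singular_point(1.0, 2.0))   # competition narrower than resources
print(classify_singular_point(2.0, 1.0))   # competition broader than resources
```

In this toy model the analytic condition is simply sigma_a < sigma_k for branching, which matches the numerical classification, and mirrors the paper's statement that parameter values decide whether selection at u* is stabilizing (CSS) or disruptive.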
2011
[ "math", "sdv" ]
[ "31591", "31591", "102", "31591" ]
[ "183783" ]
00447327
Third, the connectivity of subpopulations via migration is assumed constant over time, except in [START_REF] Whitlock | The effective size of a subdivided population[END_REF] and [START_REF] Whitlock | Fixation probability and time in subdivided populations[END_REF]. However, all components of the landscape are dynamic simultaneously in natural populations. For example, external factors can cause variations of connections between demes, to the point where connectivity either falls to its minimum (unconnected demes, e.g. vicariance) or rises to its maximum (fusion of demes, e.g. postglacial secondary contacts) [START_REF] Young | Morphological and genetic evidence for vicariance and refugium in Atlantic and Gulf of Mexico populations of the hermit crab Pagurus longicarpus[END_REF]. Climatic variations as well as volcanic events can cause sea level changes resulting in separations and fusions of islands [START_REF] Cook | Species richness in Madeiran land snails, and its causes[END_REF]. Repeated changes of the water level causing fragmentations and fusions of lakes are known in the Great African Lakes [START_REF] Owen | Major low levels of Lake Malawi and their implication for speciation rates in cichlid fishes[END_REF][START_REF] Delvaux | Age of Lake Malawi (Nyasa) and water level fluctuations. Tech. rep[END_REF][START_REF] Galis | Why are so many cichlid species[END_REF][START_REF] Stiassny | Cichlids of the rift lakes[END_REF]. At a different spatiotemporal scale, the number and size of populations can vary because of dispersal and recolonization events (establishment of new colonies and their later fusion) [START_REF] Deheer | Colony genetic organization, fusion and inbreeding in Reticulitermes flavipes from the midwestern US[END_REF][START_REF] Vasquez | Intraspecific aggression and colony fusion in the Argentine ant[END_REF]. All aspects of the spatial structure of a population can change because of new ecological interactions, e.g. 
the emergence or extinction of a predator or parasite [START_REF] Batzli | Dynamics of small mammal populations: a review[END_REF]. Contemporary frag-mentation of habitat due to human action is also always changing the landscape [START_REF] Davies | Human impacts and the global distribution of extinction risk[END_REF].
2009
[ "math", "sdv" ]
[ "31591", "31591", "102" ]
[ "183783" ]
03121911
We have hence characterised a new reason for the failure of spatial spread of suppression drives, in the form of opposing demographic advection. This phenomenon was expected given previous work on spatial dynamics of alleles (as reviewed in [START_REF] Dhole | Gene drive dynamics in natural populations: The importance of density dependence, space and sex[END_REF]), but we clarify the conditions under which it occurs. Other models of spatial spread, and in particular individual-based models, had already identified some reasons why the spatial spread of a suppression drive may fail. If the drive suppresses the local population too much and if the density of the target population is spatially heterogeneous, the drive may go extinct locally with the eradication of a local subpopulation before it can spread to other locations [START_REF] North | Modelling the spatial spread of a homing endonuclease gene in a mosquito population[END_REF]. Strategies relying on the eradication of the target population are also limited by the potential recolonization of emptied locations by wild-type individuals [START_REF] Champer | Suppression gene drive in continuous space can result in unstable persistence of both drive and wild-type alleles[END_REF][START_REF] North | Modelling the spatial spread of a homing endonuclease gene in a mosquito population[END_REF][START_REF] North | Modelling the potential of genetic control of malaria mosquitoes at national scale[END_REF] (such recolonizations can also be observed in our stochastic simulations). Finally, the evolution of resistance to the drive itself, which already hinders the success of gene drives in well-mixed populations [START_REF] Unckless | Evolution of resistance against crispr/cas9 gene drive[END_REF], also affects their spatial spread [START_REF] Beaghton | Requirements for Driving Antipathogen Effector Genes into Populations of Disease Vectors by Homing[END_REF]. 
Our model was derived under limiting assumptions, including a 100% homing rate, and either homing taking place very early in development or the drive being dominant. Gene drives currently being designed in laboratories do not exactly match these assumptions. While we are pessimistic that analytical results can be obtained when these assumptions are relaxed, future numerical or computational (individual-based) studies will be useful to assess the generality of our findings. The results of individual-based simulations of the spatial spread of underdominance gene drive systems [START_REF] Champer | Population Dynamics of Underdominance Gene Drive Systems in Continuous Space[END_REF] are encouraging.
2021
[ "math", "sdv" ]
[ "441569", "193738", "521754", "542077" ]
[ "20905", "12882" ]
02120491
For example, relatives could make an investment in a P2P-FIT-RET project at the birth of a child (in the child's name). This investment would come to maturity and be paid back with interest on the child's 20th birthday, similar to a Registered Educational Savings Plan (RESP). A similar investment could be made with the intention of using the repayment as a means of supplementing retirement income. To ensure that investors are repaid, a waterfall payment scheme could be combined with an escrow account model. As the solar PV panels generate income, the amount due to the lenders would be funneled into a holdings account (to earn interest) as the primary flow (Figure 4). Only when the required monthly (or yearly) payments are made into this fund would the person with the solar PV system on their roof receive payment from the panels (the overflow or secondary flow) that month (or year). All of the models introduced in Sec. 3.1 could be modified to include investment for FIT-RETs around the world. Since all of the P2P portals have a web interface, opening access to members globally should be possible. Section 5 will discuss how the FIT-RET can be modeled as an investment and a micro-entrepreneurial activity. P2P Framework Requirements for Success Modifications of loan conditions are needed to take full advantage of the earning potential, and will require long-term investment on the part of the investors.
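The waterfall/escrow mechanics described above can be sketched in a few lines (all figures are illustrative, not from the paper): each period's FIT revenue first fills the lenders' escrow up to the amount due, and only the overflow is paid out to the system owner. Arrears carry-over and interest on the escrow balance are omitted for brevity.

```python
# Waterfall payment sketch: escrow (primary flow) is filled first,
# overflow (secondary flow) goes to the owner. All numbers are hypothetical.
def waterfall(revenue, amount_due):
    """Split one period's FIT revenue between escrow and owner."""
    to_escrow = min(revenue, amount_due)
    return to_escrow, revenue - to_escrow

escrow_balance = 0.0
owner_income = 0.0
monthly_due = 100.0                          # hypothetical repayment due
for revenue in [120.0, 90.0, 130.0, 110.0]:  # hypothetical monthly FIT revenue
    e, o = waterfall(revenue, monthly_due)
    escrow_balance += e
    owner_income += o

print(escrow_balance, owner_income)  # -> 390.0 60.0
```

Note how the short month (90 < 100) pays the owner nothing: the owner's income is strictly subordinate to the lenders' claim, which is the point of the waterfall structure.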
2011
[ "shs" ]
[ "3557", "3557", "480742" ]
[ "922049" ]
02119708
ABS and PLA are both thermoplastics that can be injection-molded, each with their own benefits: ABS is rigid and durable, while PLA is plant-based and can be recycled and composted. The melting temperatures of PLA and ABS allow for safe extrusion, while being high enough to ensure shape retention. Distributed recycling is also being developed to recycle post-consumer products into filament for a 3-D printer, which could further reduce the cost and resources required for distributed manufacturing [12]. The use of 3-D printers allows for shapes previously impossible under conventional manufacturing methods (e.g. injection molding), along with the ability to manipulate the inside of an object in multiple ways, such as fill composition or adding internal parts. The ability to manipulate shapes internally during production has the potential to reduce additional machining during processing. Holes, voids, and other features within an object that were impossible using methods similar to injection molding previously had to be made with tools such as drill presses. These steps can now be created during the design step and automatically produced using the RepRap. The ability to change fill composition allows more complicated shapes to be produced with structural integrity while using less material. This property, combined with the reduction in the embodied energy of transportation made available by distributed manufacturing, allows for the possibility that it could be less energy- and emission-intensive than conventional manufacturing. However, questions remain about the environmental benefits of distributed manufacturing due to the potential for increases in the overall embodied energy of manufacturing caused by the reduction in scale. This preliminary study explores these questions by probing the technical potential of using a distributed network of RepRaps to produce goods.
2013
[ "chim", "sde" ]
[ "186714" ]
[ "922049" ]
01854291
Since large firms' worker weights are large, the assertion of Kalantzis et al. (2012) is consistent with our finding here. The behavior and distribution of the individual dispatched worker ratio are shown in Figure 4. The average individual dispatched worker ratio is slightly higher than the aggregate ratio. The average ratio ranges roughly between six and ten percent. The median values are lower than the averages, with a spread between three and six percent, which is almost the same as that of the aggregate ratios. The range between the 25th and 75th percentiles is around or less than ten percentage points and is much smaller than that for part-time workers. Additionally, the range shrinks slightly after 2007. As a result, the variation pattern of the average individual ratio is quite similar to that of the aggregate ratio. Dispatched worker acceptance is not as heterogeneous as part-time employment is. However, note that firms that do not use dispatched workers account for a large fraction of the total, specifically, about half of the total.
2018
[ "shs" ]
[ "24516", "478540" ]
[ "1035335", "18477" ]
00357773
The stronger the SOI, the smaller is l_so. At B = 0, the interference of time-reversed paths leads to a reduction of the backscattering probability below its classical value [START_REF] Bergmann | [END_REF], an effect called weak anti-localization if l_so β‰ͺ l_Ο• (strong SOI). It manifests itself as a positive (rather than a negative) magnetoresistance at small fields around B = 0 [7]. Weak anti-localization was experimentally observed by Bergmann in thin metallic films [8]. As the strength of SOI is increased, a transition from weak localization to weak anti-localization is observed. Weak anti-localization was subsequently observed also in semiconductor heterostructures [9,10]. A smaller zero-field anti-localization resistance minimum superimposed on a larger weak localization peak was seen in the magnetoresistance of an inversion layer of InP [9], and an n-type GaAs/AlGaAs heterostructure [10]. A fully developed anti-localization minimum was observed by Chen et al. in the magnetoresistance of an InAs quantum well [11]. Koga et al. demonstrated the transition from a zero-field weak localization maximum to a weak anti-localization minimum by tuning the symmetry of an InGaAs quantum well (QW) with a metallic top-gate [12]. Weak anti-localization is expected to be particularly pronounced in p-type GaAs heterostructures due to the strong SOI in these systems. Experimental studies of weak anti-localization in Be-doped (100) p-type GaAs heterostructures are reported in Refs.
2008
[ "phys" ]
[ "548219", "1296", "548219", "548219", "548219", "1150306", "1150306" ]
[ "760824" ]
00357350
We would like to note that the temperature dependence of the conductance is pronounced for all magnetic fields investigated. The curves in Fig. 2 (a) are not vertically offset. Rather, the background conductance changes from about 1/(70 kΞ©) to 1/(37 kΞ©) when the temperature is increased from 65 mK to 340 mK. In the same temperature range the resistance around B = 0 changes from 90 kΞ© to 35 kΞ©, which is consistent with the data of Fig. 3 (b). We conclude that there are two different contributions to the temperature dependence of the resistance: one which is present over the entire magnetic field regime investigated, and another which is particularly pronounced around B = 0. These experimental features are linked to the presence of In in the contact material. Motivated by the temperature and magnetic field dependence of the observed effects, we discuss in the following possible relations to type II superconductivity in the In/Zn/Au contact pads. Proximity effects extending between semiconductor contacts [12] have been investigated in InAs-Nb systems where great care was taken to optimize the interface between the superconductor and the semiconductor. Indium ohmic contacts were deposited on an n-type AlGaAs heterostructure at a distance of 1 Β΅m and the flow of a supercurrent was demonstrated [13] and explained in the framework of phase-coherent Andreev reflections [14]. Once the mobility of the electron gas was reduced by electron-beam irradiation, a zero-bias dip in the differential conductance was observed, which was strongly reduced as the magnetic field was increased above 40 mT.
2008
[ "phys" ]
[ "1296" ]
[ "760824" ]
01922760
PANEL CONSTRUCTION This section briefly describes the general method used to construct all the panels used in this paper. Figure 1 schematically describes the main elements and accessories needed. The choices for the panel setup and its constitutive components were guided by simplicity, with only eight main components (parts A to H) and simple building steps (see Figure 2). Precise descriptions and additional technical details for all needed parts (A to L) are provided in [START_REF] Robin | A plane and thin panel with representative simply supported boundary conditions for laboratory vibroacoustic test[END_REF]. The frame is made of parts A, B, E and F, all made of steel. The panel (part H) and supporting blades (parts C and D) are made of aluminium. We note that steel is chosen for the frame partially for reasons of cost, but mainly to ensure a high 'panel-and-edges to frame' mass ratio, so that the frame nearly behaves as a rigid and massive foundation from the panel's point of view. A mass ratio of approximately 0.09 is finally obtained for panel A, described in section 5.1 (with a weight of 20.7 kg for the frame, and 1.8 kg for the panel and edges). Figure 2 then gives visual instructions for assembly, to be followed from left to right and top to bottom. We note that a main advantage of this setup is that the 'panel-and-blades' part can be easily disassembled from and reassembled onto the frame.
2018
[ "phys" ]
[ "110548", "110548", "110548", "12568", "12568", "12568", "12568", "31116", "31116" ]
[ "174243", "173419", "16476", "735233", "736260", "19932" ]
01873991
Seoi-nage assured this dominant position for a long time [START_REF] Sterkowicz | Differences in the specific movement activity of men and women practicing judo (Based on the analysis of the judo bouts during the 1996 Olympic games)[END_REF]. Two significant events characterized this category: the first was the return in force of Ashi-waza, and the second was the disappearance of Koshi-waza, which posed the problem of its effectiveness in competitions. The introduction of new refereeing rules [START_REF]Refereeing new rules[END_REF] resulted in a decline in the activity of most technical groups in Nage-waza, except for Ashi-waza. The activity of medalists in Ne-waza improved, as evidenced by the increasing frequency of Osae-komi-waza and Kansetsu-waza. This increase in no way altered its share, which remained small compared to that of Nage-waza [START_REF] Sterkowicz | Techniques used by judoists during the world and Olympic tournaments 1995-1999[END_REF]. The global technical repertoire, which showed the number of techniques mastered in Nage-waza by the medalists, was large [START_REF] Boguszewski | Technical fitness training of judokas-finalists of top world tournaments in the years 2005-2008[END_REF]. The present analysis confirmed the increasing use of Kokusai-shiai-waza (innovative techniques). Finding alternative solutions to defensive systems seemed to be a concern within this weight category [START_REF] Inman | Classification of innovative international competition techniques[END_REF]. This creativity concerned Te-waza, Sutemi-waza, and to a lesser degree Ashi-waza. Te-waza techniques offered many opportunities for creativity thanks to the multiple hand placements, which explained the considerable number of variations attempted in competition.
2014
[ "shs" ]
[ "569170" ]
[ "176253" ]
03190816
Various studies have shown this dominant tendency of Ashi-waza [START_REF] Miller | Throwing Technique and Efficiency in the 2013 British Judo Championships[END_REF][START_REF] Sacripanti | The increasing importance of Ashi Waza, in high level competition. (Their Biomechanics, and small changes in the form)[END_REF][START_REF] Pereira Martins | Techniques utilised at 2017 Judo World Championship and their classification: comparisons between sexes, weight categories, winners and non-winners. Ido Movement for Culture[END_REF]. From the strategic standpoint, the use of Ashi-waza allows the judoka not to get too close to the opponent, keeping a specific safety distance. Counterattacks are difficult because of the ban on hand grips below the belt. Compared to the others, it is also the least risky group with which to attack the opponent. [START_REF] Sacripanti | The increasing importance of Ashi Waza, in high level competition. (Their Biomechanics, and small changes in the form)[END_REF] corroborated the difficulty opponents have in defending against Ashi-waza techniques. These arguments justify the medalists' choice of this group. The suggested diagnosis has shown the value of the technical-tactical indices of medalists. Coaches could use them in preparing their judokas for upcoming competitions (Adam et al., 2013).
2021
[ "shs" ]
[ "569170" ]
[ "176253" ]
01133392
From our initial sample of 1,194 developers, we identify those who belong to the same development teams (i.e. those who hold commit rights on the same projects and have contributed at least one commit to those projects). We are able to identify 270 such developers, working together on 131 distinct projects. Out of the 131 teams that we identify in our sample, 93 have 2 developers, 23 have 3 developers, 12 have 4 developers and one team has 5, 6 and 8 developers, respectively. 28Based upon our above classification of developers into four cooperative types, we start by describing how diverse those 131 development teams tend to be. We compute a Herfindahl index of concentration of types at the team level. We then take one minus this quantity in order to get an indicator that grows from zero to one as teams tend to be more diverse in terms of the cooperative types of their members: D = 1 - 4 βˆ‘ t=1 p 2 t (2) where p t represents the proportion of developers who are of cooperative type t in the development team considered. Figure 5 features the distribution of this indicator of diversity of cooperative types across all 131 development teams. We can see that the distribution features two modes: one at zero (i.e. perfect homophily at the team level), and the other at 0.5, so that a significant fraction of teams are actually comprised of developers with different types. In a second step, we test for homophily at the team level. For each developer i, we compute the proportion of the other members j of his team that are of his cooperative type. We then substract from this proportion the proportion of developers who are of that particular type in the whole underlying population of developers.
2014
[ "shs" ]
[ "93713" ]
[ "932306" ]
02568253
Germany is perhaps the most distinguished example of this energy policy trend. One day after the nuclear catastrophe in Fukushima in March 2011, the German government decided, with the support of quasi-totality of German population, to accelerate the phase-out of nuclear fleet by 2022 -a policy which had been discussed since the beginning of 2000. comprised about 45% of the total production in 2011 (Figure 1). The shutdown of eight nuclear plants with a combined capacity of about 8.4 GW has reduced the electricity production from this type of energy from around 140556 GWh (22.5%) in 2010 to 107971 GWh (18%) in 2011. This closure has also reduced the market share of the big four generators. Nonetheless, they still account for about 73% of generating capacity according to the Monitoring Report 2013, Developments of the Electricity and Gas Markets in Germany, Federal Network Agency andFederal Cartel Office, 2013 (FNA and[START_REF] Fna | Developments of the Electricity and Gas Markets in Germany[END_REF]). Given the large amount of available interconnection capacity between Austria and Germany, these two markets are considered to comprise one electricity market, diluting the market share of the big four by approximately 10%. As regards electricity wholesale prices, there was a significant increase in German spot market in 2011, compared with the previous years (2009 and 2010): from 37€/MWh in 2009 to 51€/MWh in 2011 (37%) on average before a slight decrease in 2012 (figure 2). Source: EPEX Spot It is difficult to conclude about the nature of the increase in spot prices during this period without a quantitative analysis. In fact, the Energiewende policy of replacing nuclear power with extra fossil fuel capacity and vastly expanding highly-subsidized renewables has two different impacts in wholesale power prices. On the one hand, the extra fossil fuels generation was supposed to increase the wholesale spot prices due to its expensive fuel costs.
2016
[ "shs" ]
[ "559342" ]
[ "184589" ]
02568268
Intermittent generators, however, would not benefit from these high prices since they occur when their output is low. In contrast, when high demand coincides with high renewable output (this is particularly true for solar), merit order effect will drive the prices downs during these periods, lowering marginal revenue for renewables (market value of renewables) 6 . In an electricity system where intermittent generation comprises a small share of total output, the high variability of renewable will have little impact on the average base prices and market value of renewables, the gap between them is low. However, if the share of intermittent generation is significant, this gap might be significant, as illustrated in Figure 2. Measuring merit order effect in this context is of high importance. In the next section, we attempt to evaluate the magnitude of this effect. LITERATURE REVIEW ON QUANTITATIVE ANALYSIS OF THE MERIT ORDER EFFECT The merit order effect has been recently discussed in a number of articles about renewable energy. Two broad methods to estimate the merit order effect of renewables have been used in literature: electricity market modelling and econometric analysis of historical time series data. Using electricity modelling requires precise calibration of costs and especially definition of reasonable scenarios. A lot of assumptions bound to the models can negate the certitude of conclusions.
2020
[ "shs" ]
[ "163511", "451480" ]
[ "184589" ]
00982736
1 This controversy between a presiding judge and a top Belgian civil servant, illustrates the tensions that crop up when, as a consequence of having adopted managerial logics and tools, 2 it would seem the exclusive nature of justice were being contested. Typical of management is the fact that organization is done with an eye to cost, efficiency and the quality of output. 3 Such values, unheard of in the world of justice, have taken on increasing importancethough not without stirring up resistancebringing about changes at three levels: organizational, professional and institutional. Through diverse strategies, a managerial type of reasoning has progressively found its way into the justice system, particularly by reinforcing accountability, developing forms of evaluation and controlling magistrates, setting up indicators of productivity and workloads, introducing limited mandates and compulsory mobility, and changing common expectations concerning judges both in matters of deadlines and in the way citizens are received. But assessing, measuring and comparing are precisely what trivializes the missions of Justice, casting doubt on the ways it operates. There is general agreement about the need to modernize and get the best out of the judicial system, so legal professionals and political actors tend to call attention to managerial logics, all the more as they are already being implemented in a good number of Western countries (Sibony, 2002; Fabri et al., 2005, Vigour, 2005; Cavrois et al., 2002; Breen, 2002). What typifies such logics is the widespread use of a vocabulary and procedures that until recently were quite alien to the judiciary: human resources, quality management, clients… Such notions and tactics are gradually being introduced, mixed in with the logics of action with which the legal professions are familiar. 
In order to show the concrete forms this ongoing process has takenand the opposition it has stirred upwe will be stressing the expectations concerning the legal professions in Belgium and how they have evolved, in particular with regard to the magistracy and chief magistrates. 4 We wish to point out that introducing managerial logics into the judiciary has transformed its classical rationality as well as the ethos of the legal professions. What M. Weber termed "ethos" in The Protestant Ethic and the Spirit of Capitalism corresponds to a mind-set that confers a specific orientation to action and shapes social and professional praxis through tangible ways of relating to the world and the particular conceptions of rationalization it institutes.
2009
[ "shs" ]
[ "28721" ]
[ "181149" ]
03138649
We adapted the Flush+Reload attack of Mastik toolkit [START_REF] Yarom | Mastik: A Micro-Architectural Side-Channel Toolkit[END_REF] from x86 Instruction Set Architecture (ISA) to RISC-V ISA. In particular, the rdtime instruction was used instead of the rdtscp instruction. The cache flush instruction is not officially defined in RISC-V, nevertheless, we found that in Orca, when opcode is set to MISC-MEM, along with funct3 set to REGION and funct7 set to CACHE-FLUSH, a cache region is flushed. This special flush instruction was used instead of clflush of x86 ISA. Our Detection Module focuses on instructions that access the timer Control and Status Register (CSR), including rdtime, and the cache flush instruction described below. By looking for timer/timer or timer/flush attack pattern, we successfully detected this Flush+Reload attack. The synthesis of our Detection Module shows a maximum frequency of 271 MHz. In the fully implemented design, it occupies 235 registers and 400 LUTs. Static synchronization logic occupies additional 793 registers and 256 LUTs. V. CONCLUSION AND FUTURE WORK In this paper, we discussed the feasibility of dynamic monitoring using reconfigurable hardware to detect cache timing attacks.
2020
[ "info" ]
[ "389097", "389097", "389097" ]
[ "737263", "175135", "9967" ]
02949624
For that purpose, FPGAs offer numerous logic, routing and memory resources to the user. Taking into account this high level of flexibility, FPGAs usually require large circuits and suffer from much lower frequency than hardwired implementation for the same logic [START_REF] Kuon | Measuring the Gap Between FPGAs and ASICs[END_REF]. Using reconfigurable hardware along with hardwired processors is not a new research topic [START_REF] Compton | Reconfigurable computing: a survey of systems and software[END_REF]. Reconfigurable hardware benefits from highly parallel execution capabilities to speed up the processor's calculations, and can be reconfigured to implement different algorithms. It has been successfully used in many fields such as image processing and communication. Regarding the security domain, reconfigurable hardware has been proposed for cryptography acceleration and secret protection, for power and communication monitoring against hardware attacks [START_REF] Gogniat | Reconfigurable Hardware for High-Security/ High-Performance Embedded Systems: The SAFES Perspective[END_REF]. However, to the best of our knowledge, no research work has proposed the use of reconfigurable hardware to monitor the running software on a processor for CSCA detection. REHAD Architecture Overall architecture The REHAD architecture is shown in Fig. 1. This architecture is composed of a main processor core, a detection module made up of reconfigurable hardware, interconnected by three communication channels made up of static hardware, and a trusted software kernel located in the processor. The detection module aims to analyze data provided by the processor core in real-time, and provides hardware relevant information to the trusted software kernel for further decision. Furthermore, the detection module can be reconfigured to adapt to new threats or attacks.
2020
[ "info" ]
[ "389097", "389097", "389097" ]
[ "737263", "175135", "9967" ]
02873622
The simplest type of thimac is called a TM, as shown in Fig. 1. The flow of things in a TM refers to the conceptual movement among five operations (stages). The stages of the TM can be described as follows. Arrive: A thing flows to a new machine (e.g., packets arrive at a buffer in a router). Accept: A thing enters a TM. For simplification, we assume that all arriving things are accepted; hence, we can combine arrive and accept as the receiving stage. Release: A thing is marked as ready to be transferred outside the machine (e.g., in an airport, passengers wait to board after passport clearance). Process (change): A thing changes its form but not its "identity" (e.g., a node in the network machine processes a packet to decide where to forward it). Create: A new thing is born in a machine (e.g., a logic deduction system deduces a conclusion). Transfer: A thing is inputted into or outputted from a machine.
2020
[ "info" ]
[ "463144" ]
[ "1069014" ]
00747723
Once deployed, the DSPL operates an execute, monitor, evaluate, adapt control loop. Our focus in this paper is only on the decision-making evaluate element that takes the result of monitoring as input and triggers adaptations as output. The other elements can be provided by: I. an adaptive architecture such as that provided by the OpenCom component model [START_REF] Coulson | A generic component model for building systems software[END_REF] and GridKit middleware [START_REF] Hughes | An experiment with reflective middleware to support grid-based flood monitoring[END_REF], or by the MADAM middleware [START_REF] Khan | Architectural Constraints in the Model-Driven Development of Self-Adaptive Applications[END_REF]; II. a means to monitor claims, by collecting data about the system and its environment and interpreting it in terms of whether it supports or refutes the claims [START_REF] Welsh | Towards Requirements Aware Systems: Run-time Resolution of Design-time Assumptions[END_REF]. Constraint Modeling A constraint is a logical relationship among several unknowns (or variables), each one taking a value in a given domain of possible values, where a domain is a set of possible values that a variable can take. Constraint programming is a programming paradigm in which constraints between variables are defined declaratively and a solution is found using a solver. A constraint program is defined as a triple (X, D, C), where X is a set of variables, D is a set of domains and C is a set of constraints restricting the values that the variables can simultaneously take. Classical constraint programming deals with finite domains for the variables, which are usually mapped to ordinal values such as integers. The impact on a softgoal of a particular operationalization is represented in the constraint program by integers in the range from 0 (--) to 4 (++). Elements that take Boolean values (See Figure 2) are represented as the integers 0 and 1. 
Solving constraints involves first reducing the variable domains by propagation techniques [START_REF] Schulte | Efficient constraint propagation engines[END_REF] that will eliminate inconsistent values within domains, and then finding values for each constrained variable in a labeling phase.
2012
[ "info" ]
[ "89875", "74131", "74131", "74131", "17018" ]
[ "752533", "177531", "10585" ]
00707543
If type is "requires", the corresponding constraint is: A β‡’ ad. If type is "excludes", the corresponding constraint is: A * ad = 0. This means that if A is selected (equal to 1), ad must not be selected (must be equal to 0) and vice-versa. Currently, we do not take into account other types of asset dependencies (like parent or child). The conversion algorithm has two main phases presented in the following pseudo-code (Algorithm 1). First, the algorithm navigates through the decision model and then through the asset model. In both cases, we gather the relevant information of decisions and assets and translate them into constraints in CP. Relevant information means information affecting the variability as described above; for example, a description attribute does not affect the variability of the product line model. Our algorithm for converting DOPLER variability models is implemented as an Eclipse plug-in that uses the API of the DOPLER tool suite [START_REF] Dhungana | Integrated tool support for software product line engineering[END_REF]. FORMAL VERIFICATION OF DOPLER MODELS The automated verification of DOPLER variability models has the goal to find defects and its sources using automated and efficient mechanisms. As the manual verification of variability models is error-prone and tedious we propose an automated solution. Our approach offers a collection of operations which are applied on a DOPLER model and return the evaluation results intended by the operation.
2011
[ "info" ]
[ "74131", "107396", "97984", "97984", "74131", "74131" ]
[ "752533", "10585", "177531" ]
03379755
In section III, we present our methodology for obstacle detection based on convolutional autoencoders. Section IV is devoted to the evaluation methodology. Experimental results are presented in Section V. Finally, the conclusion and future work will be given in section VI. II. RELATED WORKS Existing works in the domain of this paper can be divided in two parts: A. Unsupervised models for anomaly detection In the literature, there is an important number of works that uses unsupervised models for anomaly detection. Deterministic models, such as [START_REF] Sakurada | Anomaly detection using autoencoders with nonlinear dimensionality reduction[END_REF], propose an autoencoder for anomaly detection using non linear data. The authors in [START_REF] Ke | Anomaly detection of Logo images in the mobile phone using convolutional autoencoder[END_REF] use a convolutional autoencoder to detect anomalies on image logos of mobiles. They identify the input image as negative when it exceeds a predefined threshold. The authors in [START_REF] Chow | Anomaly detection of defects on concrete structures with the convolutional autoencoder[END_REF] exploit the use of convolutional autoencoders to detect defects in concrete. Their work relies on thresholding on pixel level where the mean value of the anomalous class is supposed to be as high as possible.
2021
[ "info" ]
[ "1067790", "547473", "547473", "1067790", "1066983", "1066999" ]
[ "1295502", "8531", "921561", "747913" ]
03450168
Cloudlets and Aircraft MACE uses CORE as network emulator, and it emulates each network instance as Linux namespaces serving as minimal containers. Each aircraft client application runs inside these namespaces, and they communicate via veth network interfaces with connectivity controlled by the distance between the nodes. The cloudlets are also emulated in such namespaces with a server running the UAS endpoint and interfacing instances of etcd running in the same namespaces. The emulated scenario is run over an area of one square kilometre. Mobility The Random Waypoint mobility model was adopted for the experiment. In this model each aircraft receives a random waypoint and a random velocity to simulate a mission's objective. The movement of the aircraft is emulated in MACE, and the real-time position is injected directly in the network emulator so that it is reflected in the network connectivity. The position is made available to the applications running in the virtual aircraft via UNIX sockets. UAS Broadcasts For the payload reporting, a client running in each aircraft broadcasts a JSON object containing the position and additional data via IPV4 UDP sockets using the emulated Ad-hoc wireless links. The payload includes also an unique message ID, timestamp, an aircraft ID, velocity and status.
2022
[ "info" ]
[ "380071", "531214", "380071" ]
[ "738221", "866375", "734750" ]
03278760
The tests were performed with the same parameters as stated on their website and shown in Table 4. It is possible to see that the results are below the baseline, which is expected considering the high latency configured in CORE for the links. Reducing the latency to 300us instead of 1300us increased the average queries per second to 11172, and the average latency was reduced to 85ms. Mobility was then added with the random walk model provided by a third part library. As seen in Table 5, with mobility there is a considerable decrease in performance, with lower throughput and higher latency. The mobility can also be controlled by an external agent related to the specific application domain. To test this, the emulator was connected to an open-source UAV flight simulator. Paparazzi [START_REF] Hattenberger | Using the Paparazzi UAV System for Scientific Research[END_REF] is an autopilot developed for fixed and rotary wing UAVs, and when using Paparazzi, all the UAVs are controlled by the ground station via radio commands. However, Paparazzi is also suited with a flight simulator where the radio link between the UAVs and the ground station is replaced by a UDP sockets communicating via pprzlink 9 . MACE also includes a proxy for the pprzlink that can capture all packets exchanged between the simulated UAVs and the ground station. As a result, the emulator can capture in real-time, the simulated GPS position of the UAVs and update the emulated topology.
2021
[ "info" ]
[ "380071", "531214", "380071" ]
[ "738221", "866375", "734750" ]
03925654
To reflect this feature, we define the reward function as follows: where π‘Š ! and π‘Š * are the time and monetary weights provided by the user, and T +, -and M +, -are the time and monetary costs for executing the current operator op in query q. π‘…π‘’π‘€π‘Žπ‘Ÿπ‘‘ 𝑅 = # #"(% ! * ' "# $ )"(% % * ) "# $ ) (1 According to this reward function, the query is executed based on the user's preference which is either the user wanting to spend more money for a better query execution time or vice versa. We call these two preferences Weights. These two weights defined by the user are called Weight Profile (wp), which is a two-dimensional vector, and each dimension is a number between 0.0 to 1.0. Notice that the user only needs to specify one dimension of the weight profile, the other dimension is computed as 1-Weight automatically. The detail can be found in our previous work [START_REF] Wang | Adaptive Time-Monetary Cost Aware Query Optimization on Cloud DataBase[END_REF]. 3 The SLA-Aware Reinforcement Learning-Based Multi-Objective Query Re-Optimization Algorithm (SLAReOptRL) An SLA is a contract between cloud service providers and consumers, mandating specific numerical target values which the service needs to achieve. Considering an SLA in query processing is important for cloud databases. If an SLA violation happens, the cloud service providers need to pay a penalty to their users in a form such as money or CPU credits.
2022
[ "info" ]
[ "240165", "240165", "1003581" ]
[ "1090052", "999621", "1019835" ]

HALvest-Contrastive

Contrastive triplets harvested from HAL
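A minimal sketch of how one might turn a row of this dataset into an (anchor, positive, negative) training triplet for a contrastive loss. The field names follow the viewer's column schema; the example row below is abridged from the preview passages and is illustrative only, and the `to_triplet` helper is a hypothetical name, not part of the dataset's API.

```python
# Abridged example row, using the dataset's field names
# (query_halid, query, pos_halid, positive, neg_halids, negative, ...).
row = {
    "query_halid": "00262933",
    "query": "But the statistical work was never viewed by the transporter ...",
    "query_year": "2005",
    "query_domain": ["shs"],
    "pos_halid": "00262932",
    "positive": "We can notice the work of Michael Lipsky on street-level bureaucracy ...",
    "neg_halids": "03190816",
    "negative": "Various studies showed this dominant tendency of Ashi-waza ...",
}

def to_triplet(r):
    """Map a raw row to the (anchor, positive, negative) tuple expected by
    contrastive objectives such as triplet loss or InfoNCE."""
    return (r["query"], r["positive"], r["negative"])

anchor, pos, neg = to_triplet(row)
```

In practice the same mapping could be applied over the full dataset (for instance via `datasets.load_dataset(...).map(...)`, assuming the standard Hugging Face `datasets` workflow) before feeding the triplets to an embedding model.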


Citation

@misc{kulumba2024harvestingtextualstructureddata,
      title={Harvesting Textual and Structured Data from the HAL Publication Repository}, 
      author={Francis Kulumba and Wissam Antoun and Guillaume Vimont and Laurent Romary},
      year={2024},
      eprint={2407.20595},
      archivePrefix={arXiv},
      primaryClass={cs.DL},
      url={https://arxiv.org/abs/2407.20595}, 
}

Dataset Copyright

The license terms for HALvest strictly follow those of HAL. Please refer to HAL's license when using this dataset.
