It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on the web works, but you have to simulate multi-touch for table moving and that can be a bit confusing. There’s a lot I’d like to talk about. I’ll go through every topic, instead of making the typical what-went-right/wrong list.

Concept

Working over the theme was probably one of the hardest tasks I had to face. Originally, I had an idea of what kind of game I wanted to develop, gameplay-wise – something with lots of enemies/actors, simple graphics, maybe set in space, controlled from a top-down view. I was confident I could fit any theme around it. In the end, the problem with a theme like “Evolution” in a game is that evolution is unassisted. It happens through several seemingly random mutations over time, with the most apt permutation surviving. This genetic car simulator is, in my opinion, a great example of actual evolution of a species facing a challenge. But is it a game? In a game, you need to control something to reach an objective. That control goes against what evolution is supposed to be like. If you allow the user to pick how to evolve something, it’s not evolution anymore – it’s the equivalent of intelligent design, the fable invented by creationists to combat the very idea of evolution. Being agnostic and a Pastafarian, that’s not something that rubbed me the right way. Hence, my biggest dilemma when deciding what to create was not what I wanted to create, but what I did not. I didn’t want to create an “intelligent design” simulator and wrongly call it evolution. This is a problem, of course, that every other contestant also had to face. And judging by the entries submitted, not many managed to work around it. I’d say the only real solution was through the use of artificial selection, somehow. So far, I haven’t seen any entry using this as its core gameplay.
Alas, this is just a fun competition and after a while I decided not to be as strict with the game idea, and allowed myself to pick whatever I thought would work out. My initial idea was to create something where humanity tried to evolve to a next level but had some kind of foe trying to stop them from doing so. I kind of had this image of human souls flying in space towards a monolith or a space baby (all based on 2001: A Space Odyssey, of course) but I couldn’t think of compelling (read: serious) mechanics for that. The Borg were my next inspiration, as their whole premise fit pretty well into the evolution theme. But how to make it work? Are you the Borg, or fighting the Borg? The third and final idea came to me through my girlfriend, who somehow gave me the idea of making something about the evolution of Pasta. The more I thought about it the more it sounded like it would work, so I decided to go with it. Conversations with my inspiring co-worker Roushey (who also created the “Mechanical Underdogs” signature logo for my intros) further matured the concept, as it evolved into the idea of having individual pieces of pasta flying around and trying to evolve until they became all-powerful. A secondary idea here was that the game would work to explain how the Flying Spaghetti Monster came to exist – by evolving from a normal dinner table. So the idea evolved more or less into this: you are sitting at a table. You have your own plate, which is your “base”. There are 5 other guests at the table, each with their own plate. Your plate can spawn little pieces of pasta. You do so by “ordering” them through a menu. Some pastas are better than others; some are faster, some are stronger. They have varying costs, which are debited from your credits (you start with a number of credits). Once spawned, your pastas start flying around.
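The ordering economics just described (pasta types with varying costs, debited from a starting credit balance) can be sketched as follows. This is an illustrative sketch only; the pasta names, costs and stats below are made up, and the real game was written in ActionScript, not Python:

```python
# Hypothetical pasta stats; the shipped game had 5 small pasta types.
MENU = {
    "pastina":  {"cost": 10, "speed": 3, "strength": 1},
    "farfalle": {"cost": 25, "speed": 2, "strength": 2},
    "rigatoni": {"cost": 40, "speed": 1, "strength": 4},
}

class Plate:
    """A player's base: spawns pasta by debiting credits."""
    def __init__(self, credits=100):
        self.credits = credits
        self.spawned = []

    def order(self, kind):
        item = MENU[kind]
        if item["cost"] > self.credits:
            return False            # can't afford it
        self.credits -= item["cost"]
        self.spawned.append(kind)   # the pasta then flies off autonomously
        return True

plate = Plate(credits=100)
plate.order("rigatoni")             # 100 -> 60
plate.order("rigatoni")             # 60 -> 20
print(plate.order("rigatoni"))      # False: only 20 credits left
```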
Their instinct is to fly to other plates, in order to conquer them (the objective of the game is having your pasta conquer all the plates on the table). But they are fully autonomous, so after being spawned, you have no control over your pasta (think DotA or LoL creeps). Your pasta doesn’t like other people’s pasta, so if they meet, they shoot sauce at each other until one dies. You get credits for enemy pasta that your own pasta kill. Once a pasta is in the vicinity of a plate, it starts conquering it for its team. It takes around 10 seconds for a plate to be conquered; less if more pasta from the same team are around. If pasta from another team are around, though, they get locked down in their attempt, unable to conquer the plate, until one of them dies (think Battlefield’s standard “Conquest” mode). You get points every second for every plate you own. Over time, the concept also evolved to use an Italian bistro as its main scenario.

Carlos, Carlos’ Bistro’s founder and owner

Setup

No major changes were made to my work setup. I used FDT and Starling to create an Adobe AIR (ActionScript) project, all tools or frameworks I already had some knowledge of. One big change for me was that I livestreamed my work through a twitch.tv account. This was a new thing for me. As recommended by Roushey, I used a program called XSplit and I have to say, it is pretty amazing. It made the livestream pretty effortless and the features are awesome, even for the free version. It was great to have some of my friends watch me, and to interact with them and random people through chat. It was also good knowing that I was recording a local version of the files, so I could make a timelapse video later. Knowing the video was being recorded also made me a lot more self-conscious about my computer use, as if someone was watching over my shoulder.
It made me realize that sometimes I spend too much time on seemingly inane tasks (I ended up wasting the longest time just getting some text alignment the way I wanted – it’ll probably drive someone crazy if they watch it) and that I make way too many typos when writing code. I pretty much spend half of the time writing a line and the other half fixing the crazy characters in it. My own stream was probably boring to watch, since I was coding most of the time. But livestreaming is one of the cool things to do as a spectator too. It was great seeing other people working – I had a few tabs open on my second monitor all the time. It’s actually a bit sad, because if I could, I would have spent the whole weekend just watching other people working! But I had to do my own work, so I’d only do it once in a while, when resting for a bit.

Design

Although I wanted some simple, low-fi, high-contrast kind of design, I ended up going with somewhat realistic (vector) art. I think it worked very well, fitting the mood of the game, but I also went overboard. For example: to know the state of a plate (who owns it, who’s conquering it and how much time they have left before conquering it, which pasta units are in the queue, etc.), you have to look at the plate’s bill. The problem I realized when doing some tests is that people never look at the bill! They think it’s some kind of prop, so they never actually read its details. Plus, if you’re zoomed out too much, you can’t actually read it, so it’s hard to know what’s going on with the game until you zoom in to the area of a specific plate. One other solution that didn’t turn out to be as effective as I thought was how to indicate who a plate base belongs to. In the game, that’s indicated by the plate’s decoration – its color denotes the team owner. But it’s something that fits so well into the design that people never realized it, until they were told about it.
In the end, the idea of going with a full physical metaphor is one that should be executed with care. Things that are very important risk becoming background noise, unless the player knows their importance. Originally, I wanted to avoid any kind of heads-up display in my game. In the end, I ended up adding one at the bottom to indicate your credits and bases owned, as well as the hideous out-of-place-and-still-not-obvious “Call Waiter” button. But in hindsight, I should have gone with a simple HUD from the start, especially one that indicated each team’s colors and the general state of the game without the need for zooming in and out.

Development

Development went fast. But not fast enough. Even though I worked around 32+ hours for this Ludum Dare, the biggest problem I had to face in the end was overscoping. I had too much planned, and couldn’t get it all done. Content-wise, I had several kinds of pasta planned (Wikipedia is just amazing in that regard), split into several different groups, from small Pastina to huge Pasta al forno. But because of time constraints, I ended up scrapping most of them, and ended up with 5 different types of very small pasta – barely something to start with when talking about the evolution of Pasta.

Pastas used in the game. Unfortunately, the macs were never used

Which is one of the saddest things about the project, really. It had the framework and the features to allow an endless number of elements in there, but I just didn’t have time to draw the rest of the assets needed (something I loved doing, by the way). Other non-obvious features had to be dropped, too. For example, when ordering some pasta, you were supposed to select what kind of sauce you’d like with your pasta, each with different attributes. Bolognese, for example, is very strong, but inaccurate; Pesto is very accurate and has great range, but it’s weaker; and my favorite, Vodka, would trigger a 10% loss of speed on the pasta hit by it. The code for that is mostly in there.
But in the end, I didn’t have time to implement the sauce selection interface; all pasta ended up using bolognese sauce.

To-do list: lots of things were not done

Actual programming also took a toll on the development time. Having been programming for a while, I like to believe I got to a point where I know how to make things right, but at the expense of forgetting how to do things wrong in a seemingly good way. What I mean is that I had to take a lot of shortcuts in my code to save time (e.g. a lot of singleton references for cross-communication rather than events or observers, and all-encompassing check loops that aren’t fast enough) that left a very sour taste in my mouth. While I know I used to do those things a few years ago and survive, I almost cannot accept the state my code is in right now. At the same time, I do know it was the right thing to do given the timeframe. One small thing that had some impact was using a somewhat new platform for me: Starling, the accelerated graphics framework I used in Flash. I had tested it before and I knew how to use it well – the API is very similar to Flash itself. However, there were some small details that had some impact during development, making me feel somewhat uneasy the whole time I was writing the game. It was, again, the right thing to do, but I should have used Starling more deeply before (which is the conundrum: I used it for Ludum Dare precisely so I could learn more about it).

Argument and user experience

One final thing I learned is that making the game obvious for your players goes a long way toward making it fun. If you have to spend the longest time explaining things, your game is doing something wrong. And that’s exactly the problem Survival of the Tastiest ultimately faced. It’s very hard for people to understand what’s going on with the game, why, and how. I did have some introductory text at the beginning, but that was a last-minute thing.
More importantly, I should have had a better interface, or simplified the whole concept so it would be easier for people to understand. That doesn’t mean the game itself should be simple. It just means that the experience and interface should be approachable and understandable.

Conclusion

I’m extremely happy with what I’ve done, especially given that this was my first Ludum Dare. However, I feel like I’ve learned a lot about what not to do. The biggest problem is overscoping. Like Eric Decker said, the biggest lesson we can learn from this is probably about scoping – deciding what to do beforehand in a way you can complete it without having to rush and do something half-assed. I’m sure I will do more Ludum Dares in the future. But if there are any lessons I can take from it, they are to keep it simple, to use frameworks and platforms you already have some solid experience with (otherwise you’ll spend too much time trying to solve easy questions), and to scope for a game that you can complete in one day only (that way, you can actually take two days and make it cool).

This entry was posted on Monday, August 27th, 2012 at 10:54 am and is filed under LD #24.
<?xml version="1.0" encoding="UTF-8"?>
<segment>
  <name>PD1</name>
  <description>Patient Additional Demographic</description>
  <elements>
    <field minOccurs="0" maxOccurs="0"><name>PD1.1</name><description>Living Dependency</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.2</name><description>Living Arrangement</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.3</name><description>Patient Primary Facility</description><datatype>XON</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.4</name><description>Patient Primary Care Provider Name &amp; ID No.</description><datatype>XCN</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.5</name><description>Student Indicator</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.6</name><description>Handicap</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.7</name><description>Living Will Code</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.8</name><description>Organ Donor Code</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.9</name><description>Separate Bill</description><datatype>ID</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.10</name><description>Duplicate Patient</description><datatype>CX</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.11</name><description>Publicity Code</description><datatype>CE</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.12</name><description>Protection Indicator</description><datatype>ID</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.13</name><description>Protection Indicator Effective Date</description><datatype>DT</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.14</name><description>Place of Worship</description><datatype>XON</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.15</name><description>Advance Directive Code</description><datatype>CE</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.16</name><description>Immunization Registry Status</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.17</name><description>Immunization Registry Status Effective Date</description><datatype>DT</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.18</name><description>Publicity Code Effective Date</description><datatype>DT</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.19</name><description>Military Branch</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.20</name><description>Military Rank/Grade</description><datatype>IS</datatype></field>
    <field minOccurs="0" maxOccurs="0"><name>PD1.21</name><description>Military Status</description><datatype>IS</datatype></field>
  </elements>
</segment>
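A segment definition in this shape can be read with Python's standard-library XML parser. This is a minimal sketch (not part of any HL7 toolkit); it embeds a trimmed-down copy of the PD1 definition and extracts each field's name, description and datatype:

```python
import xml.etree.ElementTree as ET

# A trimmed-down PD1 segment definition, same shape as the full one above.
PD1_XML = """<segment>
  <name>PD1</name>
  <description>Patient Additional Demographic</description>
  <elements>
    <field minOccurs="0" maxOccurs="0">
      <name>PD1.1</name>
      <description>Living Dependency</description>
      <datatype>IS</datatype>
    </field>
    <field minOccurs="0" maxOccurs="0">
      <name>PD1.14</name>
      <description>Place of Worship</description>
      <datatype>XON</datatype>
    </field>
  </elements>
</segment>"""

def load_segment(xml_text):
    """Return (segment_name, {field_name: (description, datatype)})."""
    root = ET.fromstring(xml_text)
    fields = {}
    for field in root.find("elements").findall("field"):
        fields[field.findtext("name")] = (
            field.findtext("description"),
            field.findtext("datatype"),
        )
    return root.findtext("name"), fields

name, fields = load_segment(PD1_XML)
print(name)              # PD1
print(fields["PD1.14"])  # ('Place of Worship', 'XON')
```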
Topic: reinvent midnight madness

Amazon announced a new service at the AWS re:Invent Midnight Madness event. Amazon Sumerian is a solution that aims to make it easier for developers to build virtual reality, augmented reality, and 3D applications. It features a user-friendly editor, which can be used to drag and drop 3D objects and characters into scenes.
About Grand Slam Fishing Charters

As a family-owned business we know how important it is that your trip becomes the best memory of your vacation. We are proud of our islands, our waters and our crew, and we are desperate to show you the best possible time during your stay. We cannot guarantee fish every time, but we can guarantee you a great time! “The biggest perk of our job is seeing so many of our customers become close friends.”

A Great Way To Make New Friends!

Our dockside parties are a great way to make new friends! Everyone is welcome! Andrea runs the whole operation, from discussing your initial needs by phone or email through to ensuring you have sufficient potato chips. Andrea has worked as concierge for many international resorts and fully understands the high expectations of international visitors.

“Life’s A Game But Fishing Is Serious!”

Unlike many tour operators, our crew are highly valued and have been with us since day 1. Each has their own personality and sense of humour, and understands the importance of making your day perfect. For us the saying is true: “Life’s a game but fishing is serious!”

TRIP ADVISOR

Plan Your Trip!

AJ and Earl were excellent. My son and I did a half-day deep sea trip and though the fish weren’t too cooperative, they did everything to try to get something to bite. Very knowledgeable about the waters and my son was able to land a nice barracuda. The next day my wife, daughter, son […]

When we arrived the crew made us feel right at home. They made us feel comfortable and answered all questions. The crew worked hard all day to put us on fish. We were successful in landing a nice size Wahoo, and even though the weather did not cooperate the entire day was enjoyable. I highly recommend […]
Working Women, Special Provision and the Debate on Equality There has been considerable coverage in the media recently about the possibility of offering women in employment paid leave from work during their menstrual period. This has generated a broad range of responses relating to long-standing discussions about ‘equality’ and ‘difference’: is women’s equality best achieved by treating them the same as men or by making provisions that recognise their differences in terms of physiological constitution and biological functions? If the UK introduces such an initiative, it would not be the first country in the contemporary world to do so. Many countries in Asia already make the provision and Russia debated introducing a law in 2013. The policy also has a significant historical precedent. A whole chapter of my book Women Workers in the Soviet Interwar Economy: From ‘Protection’ to ‘Equality’ (Macmillan, 1999), based on extensive research conducted for my PhD, is devoted to ‘Provision for “Menstrual Leave”’. In the 1920s, scientific researchers and labour hygiene specialists in the Soviet Union conducted extensive investigations into the impact of menstruation on women’s capacity to work in manual and industrial jobs requiring a significant degree of physical labour. Their recommendations led to two decrees being issued that targeted specific categories of women workers: Decree ‘On the release from work during menstruation of machinists and iron press workers working on cutting machines without mechanised gears in the garment industry’, 11 January 1922 Decree ‘On the working conditions of women tractor and lorry drivers’, 9 May 1931 These decrees arose from research that suggested, amongst other things, that inadequate seating at machines and on tractors resulted in congestion and tension in the abdomen that was exacerbated during menstruation. In practice, the decrees did not provide for regular absence from work. 
Women seeking to benefit from the provision had to provide a doctor’s note, similar to the usual requirements for sick leave. The official research into the impact of menstruation on women’s capacity to work, and the application of the decrees in practice, raised a number of issues on both sides of the argument. I offer only a summary of the contemporary research findings and observer commentary here.

For the provision:
• employers have a responsibility to protect the health of their workers, and unhealthy, poor and inadequate working environments can have a detrimental impact on women’s reproductive health
• women’s labour productivity and output would rise as a result
• it is essential to protect the professionalism of certain categories of workers: the debates here centred on performance artists and female theatrical employees engaged in highly physical and intensely emotional work
• heavy physical labour and strenuous exercise can lead to disruptions of the menstrual cycle
• women’s physical and intellectual capacities are reduced during menstruation; women lose muscular strength and powers of concentration
• women’s biological constitution and reproductive functions require specific recognition in law

Against the provision:
• employers are less likely to appoint women if they are guaranteed paid time off work during menstruation
• women should not be employed in jobs for which they lack the physical strength and mental capacity (an argument often made by male workers, who viewed the employment of women as competition)
• if necessary, women could be transferred to different tasks involving easier work during menstruation
• the provision would be open to uneven application and abuse
• women cannot expect to be considered equal with men if they are given special treatment in the law

It is worth noting also that the various research projects often revealed that the vast majority of women reported no regular problems or abnormalities with menstruation, and that men commonly reported higher levels of sickness than their female colleagues. Many of the problems experienced by women in the workplace could be mitigated by the introduction of improvements to their physical working conditions (not sitting down or standing up in the same position for long periods of time) or by the simple introduction of very short breaks that would allow women to walk around and get some exercise. Debates in the UK, on TV and in the press, are unlikely to reach a consensus on this issue. What do you think?
Jeanette Sawyer Cohen, PhD, clinical assistant professor of psychology in pediatrics at Weill Cornell Medical College in New York City, pediatric psychologist

How to Teach Independence?

How can I teach my toddler to do things independently?

You’ve probably become more patient since you started this whole parenthood thing. And you’re going to have to practice patience even more as your toddler learns to become more independent. For example, she tells you she can’t finish the puzzle she’s doing. Instead of jumping right in and telling her which piece goes where, tell her you’ll help a little. Go ahead and help, but let her do a lot of it herself, and make sure she’s the one to finish the job. That will give her a sense of accomplishment and the confidence to try again next time. Remember that children each progress at their own rate. It’s not always fast — and there will be setbacks along the way. But the more you can allow them to do on their own without stepping in, the more they’ll be likely to try for themselves again and again.
Major League Baseball All-Century Team

In 1999, the Major League Baseball All-Century Team was chosen by popular vote of fans. To select the team, a panel of experts first compiled a list of the 100 greatest Major League Baseball players from the past century. Over two million fans then voted on the players using paper and online ballots. The top two vote-getters from each position, except outfielders (nine), and the top six pitchers were placed on the team. A select panel then added five legends to create a thirty-man team: Warren Spahn (who finished #10 among pitchers), Christy Mathewson (#14 among pitchers), Lefty Grove (#18 among pitchers), Honus Wagner (#4 among shortstops), and Stan Musial (#11 among outfielders). The nominees for the All-Century team were presented at the 1999 All-Star Game at Fenway Park. Preceding Game 2 of the 1999 World Series, the members of the All-Century Team were revealed. Every living player named to the team attended. For the complete list of the 100 players nominated, see The MLB All-Century Team.

Pete Rose controversy

There was controversy over the inclusion in the All-Century Team of Pete Rose, who had been banned from baseball for life 10 years earlier. Some questioned Rose's presence on a team officially endorsed by Major League Baseball, but fans at the stadium gave him a standing ovation. During the on-field ceremony, which was emceed by Hall of Fame broadcaster Vin Scully, NBC Sports' Jim Gray questioned Rose about his refusal to admit to gambling on baseball. Gray's interview became controversial, with some arguing that it was good journalism, while others objected that the occasion was an inappropriate setting for Gray's persistence. After initially refusing to do so, Gray apologized a few days later. On January 8, 2004, more than four years later, Rose admitted publicly to betting on baseball games in his autobiography My Prison Without Bars.
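The selection rule described above (top two vote-getters per position, nine outfielders, top six pitchers) can be sketched as a small function. The vote counts below are made-up placeholders, not the actual 1999 totals:

```python
def pick_team(tallies):
    """tallies: {position: [(player, votes), ...]}.
    Keep the top 2 per position, except 9 outfielders ("OF")
    and 6 pitchers ("P")."""
    slots = {"OF": 9, "P": 6}
    team = []
    for position, results in tallies.items():
        n = slots.get(position, 2)
        ranked = sorted(results, key=lambda pv: pv[1], reverse=True)
        team.extend(player for player, _ in ranked[:n])
    return team

# Hypothetical vote counts for two positions only:
team = pick_team({
    "C": [("Catcher A", 900), ("Catcher B", 800), ("Catcher C", 100)],
    "P": [(f"Pitcher {i}", 700 - i) for i in range(8)],
})
print(team)  # 2 catchers + the top 6 of 8 pitchers
```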
See also
Major League Baseball All-Time Team, a similar team chosen by the Baseball Writers' Association of America
Latino Legends Team
DHL Hometown Heroes (2006): the most outstanding player in the history of each MLB franchise, based on on-field performance, leadership quality and character value
List of MLB awards
Team of the century
National Baseball Hall of Fame and Museum

External links
All-Century Team Vote Totals from ESPN.com
All-Century Team DVD from Amazon.com
All-Century Team Information from Baseball Almanac
{
  "fpsLimit": 60,
  "preset": "basic",
  "background": {
    "color": "#0d47a1",
    "image": "",
    "position": "50% 50%",
    "repeat": "no-repeat",
    "size": "cover"
  }
}
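A config like this (it has the shape of a particles-style background config) is typically loaded and sanity-checked before use. A minimal sketch, assuming only the keys shown above; the clamping policy and `max_fps` bound are illustrative choices, not part of any library:

```python
import json

# The config shown above, as it might be read from disk.
RAW = """{
  "fpsLimit": 60,
  "preset": "basic",
  "background": {
    "color": "#0d47a1",
    "image": "",
    "position": "50% 50%",
    "repeat": "no-repeat",
    "size": "cover"
  }
}"""

def load_config(text, max_fps=144):
    cfg = json.loads(text)
    # Clamp the frame cap to a sane range rather than trusting the file.
    cfg["fpsLimit"] = max(1, min(int(cfg.get("fpsLimit", 60)), max_fps))
    cfg.setdefault("background", {}).setdefault("color", "#000000")
    return cfg

cfg = load_config(RAW)
print(cfg["fpsLimit"], cfg["background"]["color"])  # 60 #0d47a1
```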
PCI Alternative Using Sustained Exercise (PAUSE): Rationale and trial design. Cardiovascular disease (CVD) currently claims nearly one million lives yearly in the US, accounting for nearly 40% of all deaths. Coronary artery disease (CAD) accounts for the largest number of these deaths. While efforts aimed at treating CAD in recent decades have concentrated on surgical and catheter-based interventions, limited resources have been directed toward prevention and rehabilitation. CAD is commonly treated using percutaneous coronary intervention (PCI), and the use of this treatment has increased exponentially since its adoption over three decades ago. Recent questions have been raised regarding the cost-effectiveness of PCI, the extent to which PCI is overused, and whether selected patients may benefit from optimal medical therapy in lieu of PCI. One alternative therapy that has been shown to improve outcomes in CAD is exercise therapy; exercise programs have been shown to have numerous physiological benefits, and a growing number of studies have demonstrated reductions in mortality. Given the high volume of PCI, its high cost, its lack of effect on survival and the potential for alternative treatments including exercise, the current study is termed "PCI Alternative Using Sustained Exercise" (PAUSE). The primary aim of PAUSE is to determine whether patients randomized to exercise and lifestyle intervention have greater improvement in coronary function and anatomy compared to those randomized to PCI. Coronary function and anatomy are determined using positron emission tomography combined with computed tomographic angiography (PET/CTA). Our objective is to demonstrate the utility of a non-invasive technology to document the efficacy of exercise as an alternative treatment strategy to PCI.
Running Stat

Dinner with people is always better than eating alone, especially when the food is good. Good food tastes even better when enjoyed with people. Tonight Amy came over to try my second attempt at the Brussels Sprouts Veggie Soup, to which I have made some changes (see recipe below in previous post) for a better result, I believe. We were at the store earlier and saw some nice-looking haricots verts and heirloom tomatoes, so we decided to assemble a simple salad from those. Of course while I’m at the market, I can’t not get some five peppercorn salami. Our simple dinner of soup, salami, bread, cheese, salad, and wine was on the table in 15 minutes.
TiO2 nanotubes for bone regeneration. Nanostructured materials are believed to play a fundamental role in orthopedic research because bone itself has a structural hierarchy at the first level in the nanometer regime. Here, we report on titanium oxide (TiO(2)) surface nanostructures utilized for orthopedic implant considerations. Specifically, the effects of TiO(2) nanotube surfaces for bone regeneration will be discussed. This unique 3D tube shaped nanostructure created by electrochemical anodization has profound effects on osteogenic cells and is stimulating new avenues for orthopedic material surface designs. There is a growing body of data elucidating the benefits of using TiO(2) nanotubes for enhanced orthopedic implant surfaces. The current trends discussed within foreshadow the great potential of TiO(2) nanotubes for clinical use.
In general, absorbent articles should comfortably fit the body of a wearer. Most absorbent articles include an absorbent pad formed by an absorbent core contained in a wrap comprising a barrier tissue and/or a forming tissue. The subject invention discloses an absorbent article generally having extensibility in at least one direction, preferably the cross-direction. Such extensibility permits an absorbent article to extend and expand about the wearer and thus to better conform to the body of the wearer. Such extension and expansion about the wearer is feasible because both the bodyside liner and the outer cover are extensible in at least the one direction. In conventional structures, the outer cover is typically adhesively secured to the forming tissue of the absorbent pad. In such embodiments, extending the outer cover in the cross-direction extends the forming tissue in the cross-direction. The force used to extend the outer cover, and thence the absorbent pad, can tear or otherwise damage the forming tissue or the barrier tissue of the absorbent pad. Since the absorbent pad is typically a sealed enclosure, namely an absorbent core enclosed within the combination of a forming tissue and a barrier tissue, tearing the absorbent pad, namely either the forming tissue or the barrier tissue, can release superabsorbent particles and other absorbent materials, such as cellulose fluff into contact with the body of the wearer. Superabsorbent particles can irritate the skin of the wearer. Such tearing of the absorbent pad indicates failure of the absorbent article to perform properly. Therefore, it is critical to find a way to prevent tearing or other structural failure of the absorbent pad.
jOOQ on The ORM Foundation?

I am the developer of jOOQ, a Java database abstraction framework. I was wondering whether jOOQ might be an interesting tool for discussion on your website, even if it is not exactly an ORM in the classic meaning (as in mapping objects to the relational world > ORM). Instead, jOOQ uses a reverse engineering paradigm (as in mapping relational entities to objects > "ROM").

Re: jOOQ on The ORM Foundation?

Object Role Modeling (the original ORM) is not the same thing as Object/Relational Mapping. Object/Relational Mapping is still kind-of relevant and interesting to us, since Object Role Modeling is used to design databases (which then will require programmatic access). But there are probably better places to discuss it :] Your query DSL looks rather like some of the DSLs available for Ruby, such as through the Sequel gem, or Arel. Interesting to see how well that can work with a statically-typed language like Java. Maybe you or I should make a generator for ActiveFacts which generates your DSL from CQL queries?

Re: jOOQ on The ORM Foundation?

Sorry for my late reply. Apparently I had not really understood the ideas behind your foundation when I wrote my original post. I understand now that you are concerned with broader concepts than the "common ORM". I actually came across your group because of your linking to ORM Lite (where ORM does stand for Object/Relational Mapping; correct me if I'm wrong). Yes, I have seen some examples for Ruby's Sequel. I personally find statically-typed languages much better for DSLs, as the syntax can be formally defined and checked by a compiler – with the limitations an OO language imposes, of course. So if I understand this correctly now, "Object Role Modeling" and CQL are actually a more general way of expressing what SQL calls DDL.
Since you can already transform CQL into SQL DDL statements (CREATE TABLE...), and jOOQ can reverse-engineer database schemata into jOOQ generated source code, I don't think there would be a need for an additional generator. Does CQL also specify means of querying the data described by the Object Role Model? The examples I found here only seem to describe what SQL calls "constraints" (although with a much broader functionality-range than SQL). Re: jOOQ on The ORM Foundation? "common ORM". I actually came across your group because of your linking to ORM Lite (where ORM does stand for Object/Relational Mapping) Object Role Modeling was named before Object Relational Mapping, but the latter is now the more common meaning, as you point out. But ORM Lite is actually so-named by Bryan because it is an implementation of Object Role Modeling, not because it is also an O/RM. Bryan was a student of Terry's at Neumont, where he learnt ORM. Regarding DSLs, I think internal DSLs only work well in very simple cases. I prefer external DSLs for anything complex, and that's where CQL came from. Even the extremely flexible syntax of Ruby wasn't up to the task. lukas.eder: I don't think there would be a need for an additional generator The problem is that a huge amount of meaning is lost in the mapping to SQL. SQL is practically (though not theoretically) limited to representing physical models. These are almost always very different from the conceptual model, as many relationships have been condensed (absorbed) into attribute/column relationships, so the semantics of the original relationship are lost. In the process, nullable columns are usually introduced, which adds further to the confusion, as such things cannot easily be correctly constrained (uniqueness, etc.) in SQL.
So by reverse engineering from the relational form, you're losing most of the benefit of building a conceptual model from the start. This may be hard to see for someone used to O-O modeling, and who's authored an O/RM tool. The problem is that O-O suffers from many of the same problems of loss of semantics. The apparently clear notion of "attribute" breaks down when you look at it closely. O-O, although ostensibly behaviour-oriented, introduces attributes to store state, and this attribute orientation is the source of the problem in both cases. Fact-oriented modeling does not use attributes. Although it may seem obvious that, for example, my surname is an attribute of myself, if the system being modeled accrues the requirement to model families, suddenly surname becomes an attribute of family, and family becomes my attribute. This kind of instability is responsible for much of the rework that's required in evolving legacy systems, as well as many of the mistakes made when they were first modeled. If you want a further example of this loss of semantics, look at my Insurance example, and ask yourself why the VehicleIncident table has a DrivingBloodTestResult column. In fact, if VehicleIncident wasn't explicitly mapped separately, its fields would be in the Claim table. What's needed is not just yet another O/RM tool (which are tuppence a dozen anyhow - I personally have written three) but a tool which supports database programming using only the conceptual model, never exposing the physical model. Surprisingly, I can't think of a single tool which has done a good job of this, but it's where I'm heading with the ActiveFacts API. It's another O/RM, but using a purely conceptual object model that preserves the domain semantics, not a typical O-O one.
lukas.eder: Does CQL also specify means of querying the data described by the Object Role Model? Yes, though the published implementation doesn't quite handle the full query syntax (aggregate functions are still missing), nor does it yet translate them to SQL. Some examples are given towards the end of the video presentation on the CQL Introduction page. Re: jOOQ on The ORM Foundation? Regarding DSLs, I think internal DSLs only work well in very simple cases. I prefer external DSLs for anything complex, and that's where CQL came from. Even the extremely flexible syntax of Ruby wasn't up to the task. Absolutely. The optimal way to implement SQL in Java would be by extending the Java language itself, such that SQL would be compiled natively by the Java compiler, similar to Linq2SQL in C#, or PL/SQL in Oracle databases. So for the complexity of CQL, CQL is certainly the right solution. Clifford Heath: The problem is that a huge amount of meaning is lost in the mapping to SQL. SQL is practically (though not theoretically) limited to representing physical models. You are right. I guess, though, that in everyday work this limitation is not really a problem. Personally, I think if your business rules become so complex that you cannot map them to a relational model easily anymore, then maybe your business rules could be simplified before changing/extending technologies. But that depends on the business, of course. I guess with insurance companies' businesses, I'd be pretty lost, personally ;-) In any case, I don't see jOOQ as a means to solve modelling issues, or the O/R impedance mismatch (which is even bigger when it comes to mapping your understanding of ORM with CQL). jOOQ should simply make using the full power of SQL in Java as simple as possible. In that way, jOOQ is not really an ORM, because it does not map from objects to the relational world, or try to solve any other high-level abstraction issues.
It's really a low-level tool to make a developer's life a lot easier, seeing that, unfortunately, JPA CriteriaQuery didn't meet the community's expectations. Clifford Heath: What's needed is not just yet another O/RM tool (which are tuppence a dozen anyhow - I personally have written three) but a tool which supports database programming using only the conceptual model, never exposing the physical model. Surprisingly, I can't think of a single tool which has done a good job of this, but it's where I'm heading with the ActiveFacts API. It's another O/RM, but using a purely conceptual object model that preserves the domain semantics, not a typical O-O one. I think you're on the right track with this. I hope for you that this will soon show nice results with a practical implementation. I'm curious to see how you'll tackle performance issues, too, with all the abstraction. Among all attempts to overcome the old and proven relational models (XML databases, NoSQL databases), this one seems the most promising and focused to me!
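The fluent, internal-DSL style discussed in this thread can be illustrated with a self-contained toy builder. To be clear, this is a hypothetical sketch and NOT the real jOOQ API: the real jOOQ generates table and column classes from the reverse-engineered schema, so the compiler checks table names and column types, which this string-based toy deliberately omits.

```java
// Toy fluent SQL builder, illustrating (not reproducing) the jOOQ DSL style.
// Each call appends to the statement and returns the builder, so queries
// read top-to-bottom like SQL itself.
public final class Query {
    private final StringBuilder sql = new StringBuilder();

    private Query() {}

    public static Query select(String... cols) {
        Query q = new Query();
        q.sql.append("SELECT ").append(String.join(", ", cols));
        return q;
    }

    public Query from(String table) {
        sql.append(" FROM ").append(table);
        return this;
    }

    public Query where(String condition) {
        sql.append(" WHERE ").append(condition);
        return this;
    }

    public String toSQL() {
        return sql.toString();
    }

    public static void main(String[] args) {
        // Table and column names here are made up for illustration.
        String sql = Query.select("first_name", "last_name")
                          .from("author")
                          .where("id = 3")
                          .toSQL();
        System.out.println(sql);
        // prints: SELECT first_name, last_name FROM author WHERE id = 3
    }
}
```

A statically-typed host language pays off in exactly the way discussed above: in the real jOOQ, `from()` would accept a generated table object rather than a string, so a typo in a table name fails at compile time rather than at runtime.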
Standardised protocol for primate faecal analysis. Macroscopic analysis of primate faeces as a way to study diet is well established, but lack of standardisation of methods may handicap comparative studies of the resulting data. Here we present a proven technique, including equipment and supplies, protocol and procedure, that yields quantitative data suitable for systematic investigation within and across primate taxa. As the problems of habituation become more obvious, the application of such indirect methods may increase in usefulness.
Examination of factors affecting gait properties in healthy older adults: focusing on knee extension strength, visual acuity, and knee joint pain. Gait properties change with age because of a decrease in lower limb strength and visual acuity or knee joint disorders. Gait changes commonly result from these combined factors. This study aimed to examine the effects of knee extension strength, visual acuity, and knee joint pain on gait properties of 181 healthy female older adults (age: 76.1 (5.7) years). Walking speed, cadence, stance time, swing time, double support time, step length, step width, walking angle, and toe angle were selected as gait parameters. Knee extension strength was measured by isometric dynamometry; decreased visual acuity and knee joint pain were evaluated by subjective judgment of whether or not such factors created a hindrance during walking. Among older adults without vision problems and knee joint pain that affected walking, those with superior knee extension strength had significantly greater walking speed and step length than those with inferior knee extension strength (P < .05). Persons with visual acuity problems had higher cadence and shorter stance time. In addition, persons with pain in both knees showed slower walking speed and longer stance time and double support time. Decreases in knee extension strength and visual acuity, as well as knee joint pain, are factors affecting gait in female older adults. Decreased knee extension strength and knee joint pain mainly affect the distance and time parameters of the gait, respectively.
I've learned the nitrogen vacancies used in Memristors are for "switching", between excited states and inhibited states, akin to our neurons' and SYNAPSES' abilities to generate EPSPs and IPSPs; this is the entire point of Memristors and DARPA's SyNAPSE program, emulating Neurons.. So in the memristor, NVs (which are truly Ancillas) return to "resting states", just like Neurons do, hence Inhibitory states versus excited states, when a neuron reaches an action potential and fires.. So the ancillas use prepared/known states, which are the equivalent of the ancilla's ground state, which is equal to a neuron's resting potential... So by weakly measuring certain aspects of living neurons, it is possible to superbroadcast/teleport the wavefunction non-classically to the memristor's vacancies, correlating each memristor with its neuron statistical-ensemble counterpart, sharing the quantum state of the resting potential, the ground state of the ancilla. The type of measurement determines which property is shown. However, the single- and double-slit experiments and other experiments show that some effects of wave and particle can be measured in one measurement. Hence Mach-Zehnder interferometry, which also involves ANCILLAS Quote: When for example measuring a photon using a Mach-Zehnder interferometer, the photon acts as a wave if the second beam-splitter is inserted, but as a particle if this beam-splitter is omitted. The decision of whether or not to insert this beam-splitter can be made after the photon has entered the interferometer, as in Wheeler’s famous delayed-choice thought experiment. In recent quantum versions of this experiment, this decision is controlled by a quantum ancilla, while the beam splitter is itself still a classical object. And the no-cloning theorem is about pure states.. But an ensemble of particles in a neuron would make it a mixed state..
The no-cloning theorem is normally stated and proven for pure states; the no-broadcast theorem generalizes this result to mixed states. And that's why PHASE works for quantum metrology and its ability to harness non-classical states. Apparently, worrying about measuring both position and momentum works differently for particles than it does for waves. It may actually be possible using phase. Quote: Niels Bohr apparently conceived of the principle of complementarity during a skiing vacation in Norway in February and March 1927, during which he received a letter from Werner Heisenberg regarding the latter's newly discovered (and not yet published) uncertainty principle. Upon returning from his vacation, by which time Heisenberg had already submitted his paper on the uncertainty principle for publication, he convinced Heisenberg that the uncertainty principle was a manifestation of the deeper concept of complementarity.[6] Heisenberg duly appended a note to this effect to his paper on the uncertainty principle, before its publication, stating: Quote: Bohr has brought to my attention [that] the uncertainty in our observation does not arise exclusively from the occurrence of discontinuities, but is tied directly to the demand that we ascribe equal validity to the quite different experiments which show up in the [particulate] theory on one hand, and in the wave theory on the other hand. And "quadratures" is about position and momentum.. which are apparently always orthogonal to each other. There is obviously something to all of this. Counterfactual Communication was recently used to transmit information without sending any PARTICLES. The information was sent in the phase.. of a wavefunction? And it used Mach-Zehnder Interferometry.. which is part of Quantum Metrology and its ability to harness non-classical states.. and all of this can teleport non-classical light.. and it all uses ANCILLAS... which store VALUES, and WAVEFUNCTIONS.. because they are Qubits/Nitrogen vacancies..
and are used in WEAK MEASUREMENT... which was used to measure a wavefunction.. something most would argue is impossible.. because of the uncertainty principle.. Quote: An interpretation of quantum mechanics can be said to involve the use of counterfactual definiteness if it includes, in the statistical population of measurement results, any measurements that are counterfactual because they are excluded by the quantum mechanical impossibility of simultaneous measurement of conjugate pairs of properties. For example, the Heisenberg uncertainty principle states that one cannot simultaneously know, with arbitrarily high precision, both the position and momentum of a particle. Quote: The word "counterfactual" does not mean "characterized by being opposed to fact." Instead, it characterizes values that could have been measured but, for one reason or another, were not. And it's the Ancillas that store values.. and may or may not be part of the measurement apparatus.../interferometer.. In 2015, Counterfactual Quantum Computation was demonstrated in the experimental context of "spins of a negatively charged Nitrogen-vacancy color center in a diamond".[5] Previously suspected limits of efficiency were exceeded, achieving counterfactual computational efficiency of 85%, with higher efficiency foreseen in principle. Quote: The quantum computer may be physically implemented in arbitrary ways, but the common apparatus considered to date features a Mach–Zehnder interferometer. The quantum computer is set in a superposition of "not running" and "running" states by means such as the Quantum Zeno Effect. Those state histories are quantum interfered. After many repetitions of very rapid projective measurements, the "not running" state evolves to a final value imprinted into the properties of the quantum computer.
Measuring that value allows for learning the result of some types of computations, such as Grover's algorithm, even though the result was derived from the non-running state of the quantum computer. NV CENTERS can also be used as QUANTUM SPIN PROBES, QUBITS and, as ANCILLAS, in devices such as BIOMEMs scanners, QUANTUM REPEATERS, PHOTONIC NETWORKING and.. MEMRISTORS.. where the vacancies are used for switching between inhibited and excited states, thus simulating NEURONS. MEMRISTORS utilize wavefunctions. Wavefunctions can be weakly measured by ANCILLAS. ANCILLAS hold "values", ie: wavefunctions, and have GROUND STATES which measured particles are "cooled" into for measurement techniques. A literal form of "photon counting".. "This de-excitation is called ‘fluorescence’, and it is characterized by a lifetime of a few nanoseconds of the lowest vibrational level of the first excited state. De-excitation from the excited singlet state to the ground state also occurs by other mechanisms, such as non-radiant thermal decay or ‘phosphorescence’. In the latter case, the chromophore undergoes a forbidden transition from the excited singlet state into the triplet state (intersystem crossing, ISC, Fig 2.4), which has a non-zero probability, for example because of spin-orbit coupling of the electrons’ magnetic moments." It's a type of INTERSYSTEM CROSSING. Doing a search for "intersystem crossing, memristor" brings up this link.. A composite optical microcavity, in which nitrogen vacancy (NV) centers in a diamond nanopillar are coupled to whispering gallery modes in a silica microsphere, is demonstrated. Nanopillars with a diameter as small as 200 nm are fabricated from a bulk diamond crystal by reactive ion etching and are positioned with nanometer precision near the equator of a silica microsphere.
The composite nanopillar-microsphere system overcomes the poor controllability of a nanocrystal-based microcavity system and takes full advantage of the exceptional spin properties of NV centers and the ultrahigh quality factor of silica microspheres. We investigate the construction of two universal three-qubit quantum gates in a hybrid system. The designed system consists of a flying photon and a stationary negatively charged nitrogen-vacancy (NV) center fixed on the periphery of a whispering-gallery-mode (WGM) microresonator, with the WGM cavity coupled to tapered fibers functioning as an add-drop structure. These gate operations are accomplished by encoding the information both on the spin degree of freedom of the electron confined in the NV center and on the polarization and spatial-mode states of the flying photon, respectively. Now somewhere in this is evidence of a memristor holding a wavefunction. The shown SPICE implementation (macro model) for a charge-controlled memristor model exactly reproduces the results from [2]. However, these simulation results do not have a good compliance - not even qualitatively - with the characteristic form of I/V curves of manufactured devices. Therefore the following equations (3) to (9) try to approach memristor modeling from a different point of view to get a closer match to the measured curves from [2],[6],[7],[8],[10] or [11], even with a simple linear drift of w. Besides the charge steering mechanism of a memristor modelled in [2], [1] also defined a functional relationship for a memristor which explains the memristive behavior in dependence on its magnetic flux: i(t) = W(φ(t)) · v(t) (3). Variable W(φ) represents the memductance, which is the reciprocal of the memristance M. Here a mechanism is demanded that maps the magnetic flux as the input signal to the current that is flowing through the memristor. The magnetic flux φ is the integral of voltage v(t) over time: φ = ∫ v(t) dt.
We can assume that an external voltage which is applied to the previously described two-layer structure has an influence on the movable 2+-dopants over time. The width w(t) of the semiconductor layer depends on the velocity of the dopants vD(t) via the time integral: w(t) = w0 + ∫₀ᵗ vD(τ) dτ (4). The drift velocity vD in an electric field E is defined via its mobility µD: vD(t) = µD · E(t) (5), and the electric field E is connected with the voltage via E(t) = v(t)/D (6), with D denoting the total thickness of the two-layer structure (D = tOX + tSEMI). Due to the good conductance of the semiconductor layer, the electric field is for the most part applied across the time-dependent thickness of the insulator layer tOX (due to v(l) = ∫ E dl). However, this was neglected for reasons of simplification. If we combine (4), (5) and (6), we obtain: w(t) = w0 + (µD/D) · ∫₀ᵗ v(τ) dτ = w0 + (µD/D) · φ(t) (7). This equation shows a proportional dependence of the width w on the magnetic flux φ. Since the thickness of the insulator layer is in the low nanometer region, a tunnel current or equivalent mechanism is possible. The magnetic flux slightly decreases the thickness of the insulator layer, which is the barrier for the tunnel current. This current rises exponentially with a reduction of the width tOX(φ) (the exponential dependence is deducible from the quantum mechanical wave function), which must become the GROUND STATE of the ANCILLA upon non-classical correlation.. because a wavefunction is essentially the "master equation" (which describes wave equations). We investigate theoretically how the spectroscopy of an ancillary qubit can probe cavity (circuit) QED ground states containing photons. We consider three classes of systems (Dicke, Tavis-Cummings and Hopfield-like models), where non-trivial vacua are the result of ultrastrong coupling between N two-level systems and a single-mode bosonic field.
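The linear-drift relations in equations (3) to (7) (flux as the time integral of voltage, width proportional to flux, current as memductance times voltage) can be sketched numerically. This is a minimal illustrative simulation, not the SPICE macro model from [2]; all parameter values (thickness, mobility, on/off resistances, drive frequency) and the clamping of w to [0, D] are assumptions for illustration only.

```java
// Minimal numeric sketch of the linear-drift memristor model of eqs. (3)-(7):
//   phi(t) = integral of v(t) dt
//   w(t)   = w0 + (muD / D) * phi(t)          (eq. 7)
//   i(t)   = W(phi) * v(t), with W = 1/R(w)   (eq. 3)
// All parameter values below are illustrative assumptions.
public class MemristorSketch {
    static final double D     = 10e-9;  // total thickness (m), assumed
    static final double W0    = 1e-9;   // initial doped-layer width (m), assumed
    static final double MU_D  = 1e-14;  // dopant mobility (m^2/(V*s)), assumed
    static final double R_ON  = 100.0;  // fully doped resistance (ohm), assumed
    static final double R_OFF = 16e3;   // undoped resistance (ohm), assumed

    // Drives the device with one full period of a 1 kHz sine wave and
    // returns the final width w. Over a full period the flux integral
    // returns to ~0, so w should end up back at W0.
    public static double simulate() {
        double dt = 1e-6, flux = 0.0, w = W0;
        for (int i = 0; i < 1000; i++) {
            double v = Math.sin(2 * Math.PI * 1000 * i * dt); // drive voltage
            flux += v * dt;                                   // phi = integral of v dt
            w = Math.max(0, Math.min(D, W0 + (MU_D / D) * flux)); // eq. (7), clamped
            double r = R_ON * (w / D) + R_OFF * (1 - w / D);  // state-dependent resistance
            double current = v / r;                           // i = W(phi) * v, eq. (3)
        }
        return w;
    }

    public static void main(String[] args) {
        System.out.println("final width w = " + simulate() + " m");
    }
}
```

A symmetric drive brings the state back to w0 because the flux integral vanishes over a full period; an asymmetric or offset drive would leave w shifted, which is precisely the memory effect the model is meant to capture.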
An ancillary qubit detuned with respect to the boson frequency is shown to reveal distinct spectral signatures depending on the type of vacua. In particular, the Lamb shift of the ancilla is sensitive to both ground state photon population and correlations. Back-action of the ancilla on the cavity ground state is investigated, taking into account the dissipation via a consistent master equation for the ultrastrong coupling regime. The conditions for high-fidelity measurements are determined. Notice BACK-ACTION, which goes right back to DARPA's Nanodiamond Biosensors and their ability to overcome the standard quantum limit, because of the known/prepared states in the ancillas/NITROGEN VACANCIES Quote: (Quantum) back action refers (in the regime of Quantum systems) to the effect of a detector on the measurement itself, as if the detector is not just making the measurement but also affecting the measured or observed system under a perturbing effect. Back action has important consequences on the measurement process and is a significant factor in measurements near the quantum limit, such as measurements approaching the Standard Quantum Limit (SQL). Back action is an actively sought-after area of interest in present times. There have been experiments in recent times, with nanomechanical systems, where back action was evaded in making measurements, such as in the following paper: When performing continuous measurements of position with sensitivity approaching quantum mechanical limits, one must confront the fundamental effects of detector back-action. Back-action forces are responsible for the ultimate limit on continuous position detection, can also be harnessed to cool the observed structure [1,2,3,4], and are expected to generate quantum entanglement. Back-action can also be evaded, allowing measurements with sensitivities that exceed the standard quantum limit, and potentially allowing for the generation of quantum squeezed states.
So the NV centers are used as ancillas in the measurement process.. which weakly measure wavefunctions of particles in neurons, most likely singlet and triplet states occurring in ATP and phosphate... then those same wavefunctions are transferred and produce a correlation at the ground state.. where the ancilla takes on the new value/wavefunction.. and here we find all these ideas.. minus the switching, which I can explain: Memristors use NV centers to switch between inhibited and excited states, singlet and triplet states, thus producing/simulating/EMULATING living neurons and action potentials. And it may just BE the network and its computing speed that even allows the wavefunction to be "found". Artificial Neural Network. A pair of physicists with ETH Zurich has developed a way to use an artificial neural network to characterize the wave function of a quantum many-body system. [14]. A team of researchers at Google's DeepMind Technologies has been working on a means to increase the capabilities of computers by ... While there are lots of things that artificial intelligence can't do yet—science being one of them—neural networks are proving themselves increasingly adept at a huge variety of pattern recognition ... That's due in part to the description of a quantum system called its wavefunction. ... Neural network chip built using memristors. https://books.google.ca/books?isbn=9814434809 Andrew Adamatzky, Guanrong Chen - 2013 - Computers: Global and local symmetries In quantum physics, all the properties of a system can be derived from the state or wave function associated with that system. The absolute phase of a wave function cannot be measured, and has no practical meaning, as it cancels out of the calculations of the probability distribution. Only relative ... The Las Vegas shooting left 58 INNOCENT PEOPLE DEAD. The gunman's brother was later arrested for possession of child porn. This technology was developed to defend against terrorism and child abuse.
Connect the dots. I bet the brothers were sharing files and one of them ended up a "targeted individual". So he began to stockpile weapons and plan the only way out of his nightmare. There has been no mention of him "hearing voices". But the fact his brother was later arrested for such a crime paints a picture worth looking into. Those vibrations are the result of this assumed BIOMEMS "deployable biosensor" and its use of excitation techniques made to single out single neurons to measure the WAVEFUNCTIONS during a tomographic scan, which is what makes this possible: Quantum-assisted Nano-imaging of Living Organism Is a First Quote: “In QuASAR we are building sensors that capitalize on the extreme precision and control of atomic physics. We hope these novel measurement tools can provide new capabilities to the broader scientific and operational communities,” said Jamil Abo-Shaeer, DARPA program manager. “The work these teams are doing to apply quantum-assisted measurement to biological imaging could benefit DoD’s efforts to develop specialized drugs and therapies, and potentially support DARPA’s work to better understand how the human brain functions.” "Nuclear spin imaging at the atomic level is essential for the understanding of fundamental biological phenomena and for applications such as drug discovery. The advent of novel nano-scale sensors has given hope of achieving the long-standing goal of single-protein, high spatial-resolution structure determination in their natural environment and ambient conditions. In particular, quantum sensors based on the spin-dependent photoluminescence of Nitrogen Vacancy (NV) centers in diamond have recently been used to detect nanoscale ensembles of external nuclear spins.
While NV sensitivity is approaching single-spin levels, extracting relevant information from a very complex structure is a further challenge, since it requires not only the ability to sense the magnetic field of an isolated nuclear spin, but also to achieve atomic-scale spatial resolution. Here we propose a method that, by exploiting the coupling of the NV center to an intrinsic quantum memory associated with the Nitrogen nuclear spin, can reach a tenfold improvement in spatial resolution, down to atomic scales." So what it's all doing, essentially, is mapping the phase of atoms/SINGLETS in ATP onto an NV-center-based CCD, and at the singlet level, correlations occur.. creating entanglement. So the particles in the neuron are being correlated with the ancillas, the nitrogen vacancies, where they take on the "target" state.. Not only is the above imaging done to obtain a correlation to living neurons, via the singlet states within, but once the connection is established, the MEMRISTOR NETWORK itself can be used to RECONSTRUCT VISION IN REAL TIME. Now add the above method, a direct connection using correlated states shared from neurons TO Memristors... and imagine the reconstruction aided by the AI within the memristor network, as it works on so.. (note, this example is done MERELY using fMRI information). Now imagine statistical ensembles being observed in real time via non-classical entanglement. But what I'm trying to show is how it's this assumed entanglement-based BCI technology, plus the memristor network it is coupled to, that is responsible for the TI communities' complaints that "they (the government) can see through my own eyes". The nitrogen vacancies in the scanners hold values, wavefunctions, which are prepared states aka Ancilla bits, and are the time domain/reference frequency which carries the "quantum event/wavefunction", which causes the singlet pairs to form up in the scanned biology.. and correlates with them at the ground state as the relaxation occurs..
Quote: It is important to realize that particles in singlet states need not be locally bound to each other. For example, when the spin states of two electrons are correlated by their emission from a single quantum event that conserves angular momentum, the resulting electrons remain in a shared singlet state even as their separation in space increases indefinitely over time, provided only that their angular momentum states remain unperturbed. And that weakly measured value, the wavefunction, is sent through the optical cavity, teleported to identical nitrogen vacancies in memristors.. so the ground states in both systems are correlated, and thus the neural activity can be monitored in real time in the memristors.
Volunteer Services As Charleston Area Medical Center volunteers, our mission is to serve as support for patients, families and hospital staff, and to provide a caring, comforting and courteous environment. Volunteers at CAMC bring their unique personalities and skills to our hospital. They range in age from 15 to 99. Our ranks are made up of men and women; students and retirees; homemakers and business people. Last year, 334 volunteers contributed over 36,000 hours to our hospitals and Cancer Center. We are looking for volunteers who exemplify CAMC's core values of respect, integrity, stewardship, quality, service with compassion and safety. These volunteers will help us with our mission of "striving to provide the best health care to every patient, every day."
Formulation and application of a biosurfactant from Bacillus methylotrophicus as collector in the flotation of oily water in industrial environment. The present study describes the formulation of a biosurfactant produced by Bacillus methylotrophicus UCP1616 and investigates its long-term stability for application as a collector in a bench-scale dissolved air flotation (DAF) prototype. For formulation, the preservative potassium sorbate was added to the biosurfactant with or without prior heat treatment at 80 °C for 30 min. After formulation, the biosurfactant samples were stored at room temperature for 180 days and the tensioactive properties of the biomolecule were determined with different pH values, temperatures and concentrations of salt. Then, a central composite rotatable design was used to evaluate the influence of the independent variables (effluent flow rate and formulated biosurfactant flow rate) on the oil removal efficiency in the DAF prototype. The formulated biosurfactant demonstrated good stability in both conservation methods, with tolerance to a wide pH range, salinity and high temperatures, enabling its use in environments with extreme conditions. The efficiency of the formulated biomolecule through heating and addition of sorbate was demonstrated by the 92% oil removal rate in the DAF prototype. The findings demonstrate that the biosurfactant from Bacillus methylotrophicus enhances the efficiency of the DAF process, making this technology cleaner. This biosurfactant can assist in the mitigation and management of industrial effluents, contributing toward a reduction in environmental pollution caused by petroleum-based hydrocarbons.
Playing back a meeting recording. Let me show you how to locate and play back a meeting that you have recorded. First, let's understand how WebEx Meetings stores and prepares your meeting recordings. The meetings are recorded on the WebEx server. WebEx will post the recording to their server within 24 hours of the meeting completion. When your recording is ready, you'll receive an update on your dashboard homepage with the playback link and the recording information. Let me show you how that looks. When you get this notification, you can click the link that says Play Recording. And WebEx will play back the video for you with the WebEx network recording player. To locate your meeting recording manually, if you miss the notification, the easiest thing to do is look at the meeting space for the meeting that you recorded. First, find the meeting in your meetings list by clicking the Meetings tab. Click the Recent tab. You'll note, in the list, whether it's recorded or not. Click on the meeting title to visit the meeting space page for that meeting. Released 6/9/2014. Connect and collaborate across the globe with WebEx Meetings. In this course, author and webinar specialist Sally Norred shows you how to use WebEx Meetings to host, run, and record online meetings. Discover how to set up an online meeting and invite attendees, work with interactivity, let attendees participate and present, and save and record a meeting. Also check out the quick tips sheets (free to all members) for a list of handy shortcuts for hosts, presenters, and attendees alike.
During my pregnancy, I tried to gather as much information as I could on how painful labor might actually be. I would often hear “mine was horrible, but everyone’s pregnancy is different” or “it was the worst pain I’ve ever felt in my life!” I heard many horror stories, which often ended with, “well, don’t worry. You’ll forget about the pain as soon as your child is born.” Not the most reassuring for a first-time mother, but something I definitely kept in mind the entire time. I had feared the unknown, but on the other hand, I knew there was no turning back and that my baby was coming one way or another! Two weeks before my due date, I noticed some blood. My water didn’t break and I saw no mucous plug, but it seemed that something was happening earlier than expected. Soon after, at 1 a.m., I woke up from a notably different type of cramping, which began to occur every 5 minutes. It wasn’t that painful (yet), but it was uncomfortable. I felt as if I had diarrhea every five minutes. If this was labor, I could handle it for sure, I thought, but I knew this was only the beginning. My husband nervously drove us to the hospital as if the baby would pop out any second. I had to remind him not to worry. Things usually didn’t happen that fast for first-time moms (or at least I hoped they wouldn’t). I had to go by instinct, although in the back of my mind, I wasn’t sure what would happen next. We finally got to the birthing center after an hour of driving and the nurses confirmed I wasn’t even dilated. I couldn’t believe it. We were turned away and had to find a hotel because returning home wasn’t an option; it would take two hours just to return again! The diarrhea-like cramps were painful and uncomfortable; I couldn’t sleep. I was bleeding slightly and had these cramps and stomach aches over a 10-hour period. I started googling my symptoms (never a good thing!) and discovered there are people who have this uncomfortable feeling for days or even weeks!
“Fake labor” would not be in my cards, I had hoped. Fortunately, I had an appointment with a midwife in the afternoon and was checked again for any cervical changes. I had finally dilated 3 cm and was 90% effaced. What a relief, I thought! I welcomed the pain because I wanted things to progress. I couldn’t imagine having diarrhea cramps for weeks. However, 3 cm isn’t enough to be admitted, we were told, so back to the hotel we went. “When your cramps become more regular, every 3 minutes apart, and you become more snippy, check in again,” the midwife suggested. In the meantime, I tried to walk around, pausing multiple times to catch my breath. A couple hours later, I was FINALLY admitted. My husband kept asking me questions non-stop about what I wanted, needed, and more. All I could say was “if I need something, I’ll let you know. Thanks.” I literally couldn’t talk. I felt like vomiting and had heartburn for the first time in my life. As my labor progressed, I felt the urge to push before I was even 10 cm dilated. I would have a cramp, then a couple of minutes later, one that made me yell out in pain as it forced my body to push. A gush of blood would come out as this happened and I felt extremely uncomfortable because the pain was in my back and butt! It would take my breath away. However, the pain was still tolerable, believe it or not. I had a volunteer doula come in that night who helped me breathe, rubbed my back, and encouraged me. She helped me be aware of my voice and how I could use it to save energy and get through the pain. Unfortunately, she couldn’t stay the whole night, but the time she spent with me truly made a difference. Even though labor was hard work and painful, the right breathing technique and support helped ease the pain. This is probably the number one thing that helped me get through labor!
As I started heading towards my second night of labor, I wondered how much longer I could go on. I questioned whether it was even worth it to continue without an epidural. I went into labor without a plan. I wanted to go with the flow and make decisions as they came. I didn’t want to be tied to a bed, or deliver on my back, or be disappointed if my perfect labor didn’t come true, so I left my expectations open. But after my second sleepless night, I started to inquire about pain medications (although deep down inside I knew I could handle more because the pain was still manageable). I was exhausted, and sleep would have been nice, especially if I didn’t have to feel any pain with an epidural. There were no walking epidurals available, though, and I didn’t want to take narcotics (which could make me dizzy), so I continued along, breathing away. A bath was an option too, and this I requested and wanted. I was so uncomfortable as things progressed. I couldn’t get in the shower to relax my muscles, but somehow a bath sounded soothing and worth the effort. As soon as the bath was ready, however, I suddenly felt a pop down below, as if major pressure had been released from my insides. Immediately, there was a shift. The back and butt pressure/pain I felt was no longer there. It was time to push! I knew as soon as I felt it. As the baby descended, I felt the burning sensation of the baby’s head crowning – a temporary stretching sting. The cramps were still there and I had no control over my own pushing. I let my body do its own work and took the breaks my body provided in between each wave of labor. I was standing up giving birth because I couldn’t get onto the bed as I would have liked, and I was given a stool to put my right foot onto in order to widen my pelvis. Gravity certainly assisted me. However, I never expected to be standing for 50 minutes! My legs were becoming tired and shaky, but I couldn’t move. My energy was sapped and I regretted not exercising more.
Standing up was the most comfortable thing to do, though, and I listened to my body’s cues. I started to go along with my body’s signals to push, but after a while I felt as if the baby would never come out because things weren’t progressing fast enough. After his head came out, I thought it was all over until I heard my husband say “push, his body is stuck!” I ended up pushing as hard as I could and a gush of fluids came spewing onto the floor. It was the best sense of relief. The midwives held my baby from under me and told me to grab him. He was screaming, kicking, and punching his way into this life. He was so slippery, I was terrified to grab him. I had never held a baby before; he would be my first. I held my son and put him on my chest. I couldn’t stop looking at him in awe. He was so beautiful to me and I felt overwhelmed with love and joy. When the umbilical cord finally stopped pulsating, which happened surprisingly quickly, my husband carefully snipped it. At this point, I’m glad my husband didn’t pass out. I always joked that he would get queasy and faint, but he did amazingly! While holding my son, I had to deliver my placenta, which did not hurt at all. In fact, I couldn’t even feel much down below because of the adrenaline pumping through my veins. Looking into my son’s eyes and holding him for the first time was the most incredible thing in the world. The pain that I felt earlier in labor vanished and I felt ecstatic to have made it through. It’s true what they say: after your baby is born, you forget the pain of labor and birth. At least most of it.
Computer assisted learning: the potential for teaching and assessing in nursing. This article discusses computer assisted learning (CAL) and the importance of applying it in nurse education. The article recognizes general technological developments as exemplified by the Teaching and Learning Technology Programme (TLTP), from which ideas about application and benefits came. The ideas from TLTP are here used in CAL and applied to nursing and health-care undergraduate programmes in one university. In the light of this experience, the main intention of this article is to consider the benefits and costs of introducing computer programmes as part of the teaching provision for nurses and other health-care professionals at both beginner and advanced level. The article further argues that CAL can also be used for patient teaching, thus providing transferable skills and benefits for teachers as well as learners, be they students or patients. To support such multiple uses of CAL, selected examples are offered and appropriate conclusions drawn.
Inorganic phosphate uptake in intact vacuoles isolated from suspension-cultured cells of Catharanthus roseus (L.) G. Don under varying Pi status. Inorganic phosphate (Pi) uptake across the vacuolar membrane of intact vacuoles isolated from Catharanthus roseus suspension-cultured cells was measured. Under low Pi status, Pi uptake into the vacuole was strongly activated compared to high Pi status. Since Pi uptake across the vacuolar membrane is correlated with H+ pumping, we examined the dependency of H+ pumping on plant Pi status. Both H+ pumping and the activities of the vacuolar H+ pumps, the V-type H+-ATPase and the H+-PPase, were enhanced under low Pi status. Despite this increase in H+ pumping, Western blot analysis showed no distinct increase in the amount of proton pump proteins. Possible mechanisms for the activation of Pi uptake into the vacuole under low Pi status are discussed.
2017 XIXO Ladies Open Hódmezővásárhely – Doubles. Laura Pigossi and Nadia Podoroska were the defending champions, but both players chose not to participate. Kotomi Takahata and Prarthana Thombare won the title after Ulrikke Eikeri and Tereza Mrdeža retired in the final at 1–0.
POV: Henry vs Martin + a poll I won’t make claims as to their gifts and charms, but H & M do resemble me in various ways :) I usually like to write stories from a single point of view. It’s obviously a limited perspective, but I enjoy the constraints. As far as I’m concerned, there’s no such thing as a reliable narrator. Characters misinterpret things, miss things, draw the wrong conclusions, and it can be tricky and fun to work the “truth” into a story alongside the character’s perceptions. For instance, I think it’s obvious to the reader that Martin is DTF from the get-go, but Henry, equipped with the same amount of information, simply doesn’t get it. When I started writing the Ganymede Quartet books, it seemed obvious to me that the story needed to be told from the master’s point of view. Whether or not he’s actually prepared to take responsibility, the fact remains that Henry’s the one in charge and he sets the tone. It’s Martin’s job to adapt and respond and accommodate and serve. Obviously, Martin is better-equipped to steer this particular ship, but, unfortunately for Henry, the roles in this relationship weren’t assigned based on fitness or merit. If you’ve read A Most Personal Property (GQ Book 1), you know that when the opportunity finally arises for Martin to take charge, he does so with great effect, but he does wait for Henry to create the opportunity. He’s very well-trained. I think it’s apparent that Martin is miserable for most of AMPP, and writing weeks of self-doubt and misery even greater than Henry’s, from the perspective of a character who has even less power to effect change…I don’t think anyone wants to read that book, actually. Henry also needed to be the POV character for the main books because Henry is the one who has the most growing to do. They’re both young, both immature, but Martin is less immature, his sense of self is more solid and, well, he’s a lot smarter. 
Henry learns a lot over the course of the series, which is not to say that Martin doesn’t, but as the one nominally in charge, Henry’s growth has a greater impact on both of them. It was possibly something of a risk, but I left out or delayed certain trains of thought because Henry isn’t necessarily considering all aspects and implications of the master/slave dynamic from early on in their relationship. He’s very loving, but he’s not the most insightful person, and it takes him a while to consider things that a savvier fellow might have questioned from the beginning. It really does take Henry a long time to wonder how Martin’s position and training impact the way Martin responds to him. I anticipate going a little deeper into Martin’s background, in a way, for the story that will accompany Book 3. I also have a pretty good idea which aspect of Book 4 I’ll present from Martin’s perspective. So far, the Martin stories have been really fun to write, and I definitely look forward to doing them. I think they’re so easy and enjoyable to work on because they revisit territory that I’ve already covered from Henry’s perspective to some extent, and when I’m writing Henry, I’m always considering how Martin might view a given situation, as well. Offering Martin’s POV at all was actually a pretty late development. It occurred to me shortly before publishing A Most Personal Property that the stories I was busy telling myself about Martin’s past would probably be of interest to anyone who was interested in AMPP, and so I quickly wrote A Superior Slave. I hoped that people who enjoyed reading ASS (ugh, that acronym!) for free might be interested in paying for AMPP, and I think that did happen to some small extent. I’ve gotten the impression (whether it’s true or not) that Martin might be the reader favorite by a small margin, so it just seems like a nice idea to continue offering Martin POV stories alongside the main books.
While I think a person can enjoy the main books and Henry’s POV without side stories, I like to think Martin’s perspective is a valuable addition. I plan on adding additional points of view from other characters in the universe. I’ve got stories written about a couple of Henry’s friends to show how slave ownership works in private for other people. I’ve got at least two stories I want to write about Henry’s cousin Jesse. I think Tom gets his own novella :D With A Proper Lover (GQ Book 2) and A Master’s Fidelity (GQ Book 2.5) released, I’m just going immediately into editing Book 3 and fleshing out the notes I have for the Martin story. I’d had vague ideas about taking a break, but I honestly don’t know what that would mean at this point. I don’t know what I’d be doing during a break! Right now, the idea of downtime just makes me cranky. Knowing that there are people eager for the next books makes me want to work on getting them out. Besides, working on Martin’s POV is a treat :)
The terrifying 38-minute ordeal suffered by Hawaii residents on Saturday, when the state’s emergency-management agency sent out a false alert warning of an imminent ballistic-missile strike amid rising tensions with North Korea, seems to have sparked an unusually rapid response on Capitol Hill. Hawaii’s Sen. Brian Schatz, a Democrat on the Senate Commerce Committee, told National Journal that he is working with other Senate Democrats on a bill that would implement a federal best-practice framework for the ballistic-missile-alert systems administered by U.S. states, localities, and territories. And while Republicans don’t appear to be involved in the process, relevant GOP chairs in both chambers have expressed a willingness to work with Schatz on the issue. Initial reports indicate that Hawaii’s screwup—which sent people across the archipelago scrambling for shelter before the all-clear was called more than a half-hour later—stemmed from an employee mistakenly pressing the wrong link on a confusingly designed interface. But for something as serious as a ballistic-missile alert, Schatz suggested that the potential for human error can, and should, be mitigated through federal safeguards. “You want a system that accounts for the fact that somebody may be sleepy or careless, or an interface may not be the most user-friendly, and yet it all works anyway,” Schatz said. “We have best practices for disaster notifications for natural disasters, for terrorism events. We just don’t have it for this.” On Wednesday, Schatz said he had convened a phone call with officials from the Federal Communications Commission, the Homeland Security Department, the Pentagon, and other relevant agencies to address the inconsistency.
“We think it should be done legislatively, but I don’t know that for sure yet,” he told reporters, explaining that the ultimate goal is to craft “a federal law to establish a framework that states can use.” The way America’s missile-alert system operates is fundamentally different from how citizens are alerted to most other catastrophes, when local authorities often possess the best information. While states and cities are ultimately responsible for alerting civilians of an imminent attack, they lack the ability to detect and track incoming missiles. In the seconds and minutes after a launch, details of the threat would have to cascade through phone calls from the Pentagon to DHS. From there, officials at the Federal Emergency Management Agency would send the warning to at-risk states and localities, whose own alert systems would only then spring into life. That chain of causation was disrupted on Saturday. But David Simpson, a former admiral in the U.S. Navy who ran the FCC’s Public Safety and Homeland Security Bureau from November 2013 to January 2017, said federal legislation should seek to dismantle that outdated process altogether. “That’s a 1950s kind of structure,” Simpson said, arguing that machine-to-machine communication technology should be utilized to eliminate lag time and cut down on human error. One way to do that could be for the FCC to create, at the direction of Congress, a unique wireless-alert category for ballistic-missile threats. “That would then ensure that the machine elements of this system could be built around that narrow bucket,” Simpson said. But that still wouldn’t solve the problem entirely. “The machine-to-machine piece of that, so it could be really useful, would require DHS and [Defense Department] plumbing changes that would be beyond the authorities of the FCC,” Simpson said. Simpson largely endorsed Schatz’s plan for a uniform federal missile-alert framework that states and localities can follow. 
“There’s over 1,000 alert originators at the state and local level, and I would say five, six, seven vendors for the user-interface systems,” he said. In a bid to improve innovation, DHS gave state governments broad leeway to design their own missile-alert interfaces. But Simpson said that decision has clearly come with a cost. “That variation is fine for notification about fire, notification about a tsunami coming in,” Simpson said. “But ballistic-missile warnings ought to be consistent, reliable, secure—because we don’t want it cyberattacked—across the entire country.” Republicans seem receptive to Schatz’s plan for missile-alert legislation. Schatz said he plans to introduce his bill through the Senate Commerce Committee, which is chaired by Republican John Thune. Frederick Hill, a Thune spokesman, told National Journal that the chairman “is considering convening a full committee hearing which would help inform legislative efforts.” House Republicans are further along than their Senate counterparts, with plans to hold an Energy and Commerce hearing on Hawaii’s false missile alert in the coming weeks. On Wednesday, committee chairman Greg Walden said he would be “happy to work” with Schatz on legislation, if needed. “We just haven’t got into the weeds on it,” Walden said. As long as lawmakers can work out issues surrounding committee and agency jurisdiction, Simpson said the chances for bipartisan support are high. But stakeholders from Homeland Security and the Pentagon—as well as the congressional committees that oversee them—will also need to weigh in. And Simpson worries those agencies may be loath to take responsibility for what’s widely viewed as a state-level mistake. “It’s a perfect bipartisan issue, as long as we don’t let the various lobbies and the competition between agencies pervert and potentially dilute the ultimate outcome,” Simpson said. 
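To make the latency concern described above concrete, here is a toy model of the relay chain the article lays out (Pentagon to DHS to FEMA to a state alert system) compared with a hypothetical machine-to-machine path. Every timing number below is an invented assumption for illustration only, not an official figure.

```python
# Toy model of the missile-alert relay chain described in the article.
# All delay values are invented assumptions for illustration only.

RELAY_CHAIN = [
    ("Pentagon detects launch and phones DHS", 60.0),    # seconds, assumed
    ("DHS relays the warning to FEMA", 60.0),            # assumed
    ("FEMA notifies the at-risk state", 60.0),           # assumed
    ("State system originates the public alert", 90.0),  # assumed
]

def total_delay(chain):
    """Sum the per-hop hand-off delays in a relay chain."""
    return sum(delay for _, delay in chain)

human_chain_s = total_delay(RELAY_CHAIN)
machine_to_machine_s = 5.0  # hypothetical automated dissemination, assumed
print(f"human relay: {human_chain_s:.0f}s vs machine-to-machine: {machine_to_machine_s:.0f}s")
```

The gap between the two totals is the kind of lag Simpson argues machine-to-machine communication could eliminate, in addition to removing opportunities for human error at each hand-off.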
"Two more House Republicans have joined the discharge petition to force votes on immigration, potentially leaving centrists just two signatures short of success. Reps. Tom Reed (R-N.Y.) and Brian Fitzpatrick (R-Pa.) signed the discharge petition Thursday before the House left town for the Memorial Day recess. If all Democrats endorse the petition, just two more GOP signatures would be needed to reach the magic number of 218."
Source:

FIRED FROM RUSSIAN LAUNCHER
Investigators Pin Destruction of Malaysian Airliner on Russia (3 hours ago)
THE DETAILS: "A missile that brought down Malaysia Airlines Flight 17 in eastern Ukraine in 2014 was fired from a launcher belonging to Russia's 53rd anti-aircraft missile brigade, investigators said Thursday. The announcement is the first time the investigative team has identified a specific division of the Russian military as possibly being involved in the strike. Russia has repeatedly denied involvement in the incident."
Source:

THREE INTERVIEWS PLANNED FOR JUNE
House GOP Will Conduct New Interviews in Clinton Probe (3 hours ago)
THE LATEST: "House Republicans are preparing to conduct the first interviews in over four months in their investigation into the FBI’s handling of the Clinton email probe. A joint investigation run by the Judiciary and Oversight Committees has set three witness interviews for June, including testimony from Bill Priestap, the assistant director of the FBI’s counterintelligence division, and Michael Steinbach, the former head of the FBI’s national security division."
Source:

IN OPEN LETTER TO KIM JONG UN
Trump Cancels North Korea Summit (5 hours ago)
THE LATEST

GANG OF EIGHT WILL GET SEPARATE MEETING
Briefings at White House Will Now Be Bipartisan (7 hours ago)
THE LATEST: "The White House confirmed Wednesday it is planning for a bipartisan group of House and Senate leaders, known as the 'Gang of 8,' to receive a highly-classified intelligence briefing on the FBI's investigation into Russian meddling, reversing plans to exclude Democrats altogether. ABC News first reported the plans to hold a separate briefing for Democrats, citing multiple administration and congressional sources. While details of the bipartisan meeting are still being worked out, a Republican-only briefing will go on as scheduled Thursday."
This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2000-159163, filed Mar. 31, 2000, the entire contents of which are incorporated herein by reference. The present invention relates to a method of forming a composite member, in which a conductive portion is formed in an insulator, the composite member being used in, for example, a wiring board in the fields of electric appliances, electronic appliances and electric and electronic communication. The present invention also relates to a photosensitive composition and an insulating material that can be suitably used in the manufacturing method of the composite member. Further, the present invention relates to a composite member manufactured by the manufacturing method of the present invention and to a multi-layer wiring board and an electronic package including the particular composite member. In recent years, increases in the degree of integration and the miniaturization of various electric and electronic parts, including semiconductor devices, have been promoted. This tendency is certain to continue in the future. In this connection, various measures have been proposed and tried in an attempt to achieve high-density mounting on a printed circuit board, including the formation of fine metal wiring patterns with a fine pitch and the formation of three-dimensional (steric) wiring. In particular, steric wiring is indispensable to high-density mounting, and various methods have thus been proposed for manufacturing a wiring board having steric wiring. In general, steric wirings take a multi-layered structure, such as a built-up wiring board prepared by laminating two-dimensional printed wiring boards, or a multi-layered wiring board. It is difficult to form a steric wiring having a free three-dimensional shape.
The built-up wiring board or the multi-layered wiring board has a structure in which adjacent wiring layers are connected to each other by a conductive column called a via. The via is formed by processing an insulating layer by a photolithography process using a photosensitive polyimide or resist, followed by selectively applying a plating to the via or by filling the via with a conductive paste. For forming a via by such a method, it is necessary to repeat the steps of resist coating, light exposure and etching a plurality of times, making the via formation highly laborious. In addition, it is difficult to improve the yield. It is also possible to form the via by forming a through-hole (via hole) of a predetermined size in an insulating substrate constituting a printed wiring board by using a drill or a CO2 laser, followed by applying plating to the via hole or by filling the via hole with a conductive paste. In these methods, however, it is difficult to freely form a fine via having a size of tens of microns or less at a desired position. In the method disclosed in Japanese Patent Disclosure No. 7-207450, a compound having a hydrophilic group is introduced into the pores of a three-dimensional porous film such as a PTFE film. Under this condition, the film is subjected to a light exposure in a predetermined pattern by using a low pressure mercury lamp (wavelengths of 185 nm and 254 nm), thereby forming hydrophilic groups on the three-dimensional porous film. Further, a metal plating is applied to the three-dimensional porous film. In the conventional method described above, however, the material forming the three-dimensional porous film deteriorates because a light beam having a short wavelength is used for the light exposure. Also, the light for the light exposure is absorbed by the three-dimensional porous film and, thus, fails to reach the inner region of the porous body, resulting in failure to form fine vias.
Further, in the conventional method described above, the PTFE forming the three-dimensional porous film reacts with the light for the light exposure so as to selectively form hydrophilic groups. However, PTFE is disadvantageous in that its molding workability is low and it is costly. Another method of forming a via is disclosed in Japanese Patent Disclosure No. 11-24977. In this method, the entire surface of a porous insulating member is impregnated with a photosensitive composition containing, for example, a photosensitive reducing agent and a metal salt. Then, a light exposure is applied in a predetermined pattern to the impregnated insulating member so as to reduce the cation of the metal salt in the light exposed portion to a metal nucleus, followed by washing away the photosensitive composition in the unexposed portion. Further, an electroless plating or a soldering is applied to the residual metal nuclei so as to form vias of a predetermined pattern. In the method described above, however, the entire surface of the porous insulating member is impregnated with a photosensitive composition containing a metal salt as described above, making it difficult to completely remove the metal salt adsorbed on the unexposed portion after the light exposure step. As a result, a difficulty arises in that metal nuclei are precipitated on undesired portions in the subsequent reducing step. Such an abnormal deposition of the metal nuclei gives rise to a problem in terms of the insulating properties between adjacent vias and between adjacent wiring layers as patterns become finer. Also, in the via formed in the insulating substrate by the conventional method of manufacturing a wiring board, the insulating body and the conductive portion are brought into direct contact.
In this case, since the adhesion between the insulating body and the conductive portion is poor, a problem arises in that the conductive portion peels off the insulating substrate during use. Further, where a multi-layered wiring board is prepared by laminating a plurality of wiring boards manufactured by the conventional method of manufacturing a wiring board, it is necessary to further improve the electrical connection between the wiring layers of the wiring boards and the conductivity of the wiring. An object of the present invention is to provide a method of manufacturing a composite member, which has a high degree of freedom in the design of a conductive circuit, in which deterioration of the insulating body is not brought about by the light exposure, and which is free from an abnormal deposition of a metal on the insulating body so as to form a conductive portion having a fine pattern. Another object of the present invention is to provide a method of manufacturing a composite member, which has a high degree of freedom in the design of a conductive circuit, which permits manufacturing a composite member at a low manufacturing cost without adversely affecting the selection of the material of the insulating portion or the molding workability, and which is free from an abnormal deposition of a metal on the insulating body so as to form a conductive portion having a fine pattern. Another object of the present invention is to provide a photosensitive composition and an insulating material used for the manufacturing method of a composite member described above. Another object of the present invention is to provide a composite member manufactured by the method described above. Another object of the present invention is to provide a multi-layered wiring board comprising a composite member manufactured by the method described above.
Still another object of the present invention is to provide an electronic package using a composite member or a multi-layered wiring board manufactured by the method described above. According to a first aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising: (1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound forming an ion-exchange group upon irradiation with light having a wavelength not shorter than 280 nm; (2) exposing selectively the photosensitive composition layer to light having a wavelength not shorter than 280 nm so as to form ion-exchange groups in the light exposed portion; and (3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group formed in the light exposed portion by the exposing. According to a second aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising: (1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound having an ion-exchange group; (2) exposing selectively the photosensitive composition layer to light having a wavelength not shorter than 280 nm so as to cause ion-exchange groups in the light exposed portion to disappear and to cause the ion-exchange groups to remain in the unexposed portion; and (3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group remaining in the unexposed portion after the exposing.
According to a third aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising: (1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound forming an ion-exchange group upon irradiation with light, and said compound being selected from the group consisting of an onium salt derivative, a sulfonium ester derivative, a carboxylic acid derivative and a naphthoquinone diazide derivative; (2) exposing selectively the photosensitive composition layer to light so as to form ion-exchange groups in the light exposed portion; and (3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group formed in the light exposed portion by the exposing.

According to a fourth aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising: (1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound having an ion-exchange group; (2) exposing selectively the photosensitive composition layer to light so as to cause ion-exchange groups in the light exposed portion to disappear and to cause the ion-exchange groups to remain in the unexposed portion; and (3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group remaining in the unexposed portion after the light exposure in a pattern.
According to a further aspect of the present invention, there is provided a method of manufacturing a composite member in which a conductive portion is selectively formed in an insulating body, comprising: (1) forming a photosensitive composition layer within or on the surface of said insulating body, said photosensitive composition containing a compound forming an ion-exchange group in the presence of acid and a photo acid generating agent; (2) exposing selectively to light and heating the photosensitive composition layer so as to form ion-exchange groups in the light exposed portion; and (3) forming the conductive portion by bonding a metal ion or metal to the ion-exchange group formed in the light exposed portion by the exposing.

It is desirable for the method of the present invention to further comprise the step of applying an electroless plating to the surface of the conductive portion formed in the third step.

According to another embodiment of the present invention, there is provided a photosensitive composition used for manufacturing a composite member, the composition containing a naphthoquinone diazide derivative and a polycarbodiimide derivative.

According to another embodiment of the present invention, there is provided a porous insulating body having the inner surface of the pore covered with a photosensitive composition containing a naphthoquinone diazide derivative.

According to another embodiment of the present invention, there is provided a composite member having a conductive portion formed on at least one of the surface and the inner region of a porous insulating body via an organic compound, wherein the amount of the organic compound, which is present between the insulating body and the conductive portion, per unit area of the surface of the insulating body is larger than the amount of the organic compound that is not in contact with the conductive portion.
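Across these aspects the patterning logic is the same: exposure either creates ion-exchange groups (the first, third, and further aspects) or destroys them so that only unexposed regions retain groups (the second and fourth aspects), and metal then bonds only where groups remain. The selection logic, though not the chemistry, can be sketched as a toy model; the function names and the list-of-booleans exposure mask below are hypothetical illustrations, not part of the invention:

```python
# Toy illustration (hypothetical, not from the patent): positive- vs
# negative-tone patterning of ion-exchange groups, then selective metallization.

def expose(mask, tone):
    """Return per-region presence of ion-exchange groups after exposure.

    mask: list of booleans, True where light reaches the layer.
    tone: 'positive' -> exposure creates groups (first/third aspects);
          'negative' -> exposure destroys pre-existing groups, so groups
                        survive only in unexposed regions (second/fourth aspects).
    """
    if tone == "positive":
        return [lit for lit in mask]       # groups form only where exposed
    if tone == "negative":
        return [not lit for lit in mask]   # groups remain only where unexposed
    raise ValueError(f"unknown tone: {tone}")

def metallize(groups):
    """Metal ions bond only where ion-exchange groups exist (step 3)."""
    return ["metal" if g else "insulator" for g in groups]

mask = [True, False, True, False]
print(metallize(expose(mask, "positive")))  # ['metal', 'insulator', 'metal', 'insulator']
print(metallize(expose(mask, "negative")))  # ['insulator', 'metal', 'insulator', 'metal']
```

The two tones produce complementary patterns from the same mask, which is why the patent can describe both positive-acting and negative-acting compositions as routes to the same conductive circuit.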
According to another embodiment of the present invention, there is provided a multi-layered wiring board including a plurality of substrates that are laminated one upon the other, wherein each substrate comprises a porous insulating body having fine pores and a conductive portion formed on at least one of the surface and the inner region of the fine pores of the porous insulating body, and a layer formed of a conductive body that does not contain the component of the insulating body is formed on the outermost surface of the conductive portion of each substrate.

Further, according to still another embodiment of the present invention, there is provided an electronic package comprising a wiring board consisting of the composite member described above or a multi-layered wiring board described above and an electronic part electrically connected to the wiring board.
Dorsomedial hypothalamic lesions alter intake of an imbalanced amino acid diet in rats. Within 3 h of ingesting an imbalanced amino acid diet (IAAD), rats show attenuated intake. The associated conditioned taste aversion can be ameliorated by giving the serotonin type 3 (5-HT3) receptor blocker tropisetron (TROP). A recent c-fos study indicated that the dorsomedial hypothalamic nucleus (DMN) may be activated 2-3 h after ingestion of IAAD. In Experiment 1, DMN-lesioned (DMNL) or sham-operated (SHAM) rats were injected with saline (SAL) or TROP just before introduction of IAAD. By 3 h, SAL-DMNL rats consumed more (P < 0.01) of the IAAD than did the SAL-SHAM rats. Thereafter, over the next 21 h, the intake of the SAL-DMNL group returned to control levels. TROP treatment enhanced the intake of the treated groups; the TROP and lesion effects were additive (P < 0.01). By day 4 of receiving the IAAD, the DMNL groups were eating less than SHAM rats (P < 0.05). The data suggest that the DMN may be involved in the early detection of the amino acid deficiency induced by IAAD, is not involved in the TROP effect, and is necessary for proper long-term adaptation to an IAAD.
Original US release date: December 5, 2008
Production budget: $25,000,000
Worldwide gross: $27,426,335

There are timely films and then there are films that are before their time. Ron Howard is probably seen by most as a director who frequently makes good or very good films and occasionally makes a great one. Most recently, a lot...
The present invention relates generally to improved means and methods for processing documents using electronic imaging, and more particularly, to the use of electronic imaging for processing financial documents, such as checks and related documents in a banking environment. Today's financial services industry faces the immense challenge of processing huge volumes of documents efficiently. Predictions that document payment methods would decline have not been realized. In fact, document payment methods have grown worldwide and are expected to continue increasing. There is thus a vital need to devise improved means and methods for processing such documents. The use of imaging technology as an aid to document processing has been recognized as one way of significantly improving document processing, as disclosed, for example, in U.S. Pat. Nos. 4,205,780, 4,264,808, and 4,672,186. Generally, imaging involves optically scanning documents to produce electronic images that are processed electronically and stored on high-capacity storage media (such as magnetic disc drives and/or optical memory) for later retrieval and display. It is apparent that document imaging provides the opportunity to reduce document handling and movement, since these electronic images can be used in place of the actual documents. However, despite technological advances in imaging in recent years, prior art document processing systems employing imaging, such as those disclosed in the aforementioned patents, have not realized sufficient improvements to justify the added implementation costs.
The summaries of the Colorado Court of Appeals published opinions constitute no part of the opinion of the division but have been prepared by the division for the convenience of the reader. The summaries may not be cited or relied upon as they are not the official language of the division. Any discrepancy between the language in the summary and in the opinion should be resolved in favor of the language in the opinion.

SUMMARY
February 8, 2018

2018COA12 No. 14CA0144, People v. Trujillo — Criminal Law — Sentencing — Probation — Indeterminate Sentence

A division of the court of appeals considers whether a Colorado statute authorizes imposition of a sentence to an indeterminate term of probation and whether the defendant was entitled to the benefit of amendments to the statute criminalizing theft. Relying on People v. Jenkins, 2013 COA 76, 305 P.3d 420, the division concludes that section 18-1.3-202(1), C.R.S. 2017, provides statutory authority for the imposition of an indeterminate probation sentence. Following People v. Stellabotte, 2016 COA 106, ___ P.3d ___ (cert. granted Feb. 6, 2017), the majority further concludes that the defendant is entitled to the benefit of amendments to the theft statute. The partial dissent concludes that the amendments to the theft statute do not apply retroactively, and would therefore affirm the sentence in full. Additionally, the division rejects the defendant’s contentions that reversal is required due to the trial court’s rejection of defense-tendered jury instructions, wrongfully admitted character evidence, and prosecutorial misconduct. However, the division remands for the trial court to make findings of fact concerning the assessment of the costs of prosecution. Accordingly, the division affirms the conviction, affirms the sentence in part, vacates the sentence in part, and remands the case with directions.

COLORADO COURT OF APPEALS 2018COA12
Court of Appeals No. 14CA0144
Mesa County District Court No.
11CR447
Honorable Valerie J. Robison, Judge

The People of the State of Colorado, Plaintiff-Appellee,
v.
Michael Floyd Trujillo, Defendant-Appellant.

JUDGMENT AFFIRMED, SENTENCE AFFIRMED IN PART AND VACATED IN PART, AND CASE REMANDED WITH DIRECTIONS

Division I
Opinion by JUDGE TAUBMAN
Richman, J., concurs
Furman, J., concurs in part and dissents in part
Announced February 8, 2018

Cynthia H. Coffman, Attorney General, Joseph G. Michaels, Assistant Attorney General, Denver, Colorado, for Plaintiff-Appellee
Douglas K. Wilson, Colorado State Public Defender, James S. Hardy, Deputy State Public Defender, Denver, Colorado, for Defendant-Appellant

¶ 1 Defendant, Michael Floyd Trujillo, appeals his judgment of conviction entered on a jury verdict finding him guilty of one count of theft of more than $20,000 and one count of criminal mischief of $20,000 or more. He also appeals his sentence. We perceive no basis for reversing his convictions, but remand for the trial court to make findings of fact regarding the assessment of the costs of prosecution and to reclassify his theft conviction as a class 4 felony.

I. Background

¶ 2 In 2007, Trujillo began building a home, doing much of the labor himself and initially using his own money to fund the project. He later took out a construction loan from the victim, a bank, for just under $255,000. After construction was completed on the house, Trujillo stopped making his monthly loan payments. The bank declined to restructure the loan and initiated foreclosure proceedings in September 2010.

¶ 3 Before the foreclosure sale, Trujillo removed or destroyed property in the house, including kitchen cabinets, countertops, interior and exterior doors, doorjambs and casings, flooring, baseboards, light fixtures, bathroom fixtures, the fireplace, handrails, the boiler, the air conditioner, and the garage door.
Because of this damage, the house was appraised at $150,000; however, the appraiser estimated that if the house were in good repair, it would have been worth $320,000.

¶ 4 Trujillo was charged with defrauding a secured creditor, theft of $20,000 or more, but less than $100,000, and criminal mischief of $20,000 or more, but less than $100,000. The jury found him not guilty of defrauding a secured creditor and guilty of theft and criminal mischief.

¶ 5 On appeal, Trujillo raises six contentions: (1) the trial court erred in rejecting defense-tendered jury instructions; (2) the trial court erred in allowing evidence of a prior foreclosure against Trujillo; (3) prosecutorial misconduct during direct examination of a witness and closing rebuttal argument warrants reversal; (4) the trial court imposed an illegal sentence of indeterminate probation; (5) the trial court erred in awarding the People costs of prosecution; and (6) an amendment to the theft statute applies to his conviction. We perceive no basis for reversal with respect to the first four contentions, but agree with Trujillo’s final two contentions. We therefore affirm the convictions and the sentence in part but vacate the sentence in part and remand with directions.

II. Jury Instructions

¶ 6 Trujillo asserts that the trial court erred in rejecting various jury instructions regarding his theory of the case. We disagree.

A. Additional Facts

¶ 7 Throughout trial, the defense’s theory of the case was that Trujillo lacked the requisite intent to commit the charged offenses because he believed that the property he removed from the house belonged to him. The defense tendered five jury instructions related to this theory of the case.

¶ 8 Trujillo’s tendered jury instructions detailed property law concepts.
For example, the first tendered instruction stated that “the person who has title to real property is still the owner of the property even if there is a lien or secured interest on the property.” Another tendered instruction defined “title,” “deed of trust,” and “holder of a certificate of purchase[].” One instruction described the lien theory detailed in section 38-35-117, C.R.S. 2017, and another instructed that title to property “does not vest with the purchaser until eight days after [a] foreclosure sale.”

¶ 9 The trial court declined to give these instructions as tendered. However, portions of the defense-tendered instructions were included in a final definitional jury instruction. The final instructions defined “deed of trust” and stated that the title to property is transferred to the holder of the certificate of purchase eight days after a foreclosure sale. Though it rejected other portions of the defense-tendered instructions, the trial court permitted defense counsel to argue the issues raised in the instructions during closing argument.

¶ 10 The defense also tendered an instruction which the trial court modified and gave as a theory of the case instruction. That instruction stated, “Trujillo contends that the items removed from the home . . . were his; purchased by him and installed by him. . . . Trujillo conten[d]s that the items that he took and damaged were his sole property.”

B. Standard of Review

¶ 11 We review jury instructions de novo to determine whether, as a whole, they accurately informed the jury of the governing law. Riley v. People, 266 P.3d 1089, 1092-93 (Colo. 2011). If the jury instructions properly inform the jury of the law, the district court has “broad discretion to determine the form and style of jury instructions.” Day v. Johnson, 255 P.3d 1064, 1067 (Colo. 2011).
Accordingly, we review a trial court’s decision concerning a proposed jury instruction for an abuse of discretion and will not disturb the ruling unless it is manifestly arbitrary, unreasonable, or unfair. Id.

¶ 12 When a defendant objects to the trial court’s ruling on a jury instruction, we review for nonconstitutional harmless error and will thus affirm if “there is not a reasonable probability that the error contributed to the defendant’s conviction.” People v. Garcia, 28 P.3d 340, 344 (Colo. 2001) (quoting Salcedo v. People, 999 P.2d 833, 841 (Colo. 2000)).

C. Applicable Law

¶ 13 “[A]n instruction embodying a defendant’s theory of the case must be given by the trial court if the record contains any evidence to support the theory.” People v. Nunez, 841 P.2d 261, 264 (Colo. 1992). Moreover, a trial court has “an affirmative obligation” to work with counsel to correct a tendered theory of the case instruction “or to incorporate the substance of such in an instruction drafted by the court.” Id. at 265; see also People v. Tippett, 733 P.2d 1183, 1195 (Colo. 1987) (a trial court may refuse to give an instruction already embodied in other instructions).

¶ 14 In considering whether a jury was adequately informed of a defendant’s theory of the case, a reviewing court can take into account whether defense counsel’s closing argument “fairly represented” the theory to the jury. People v. Dore, 997 P.2d 1214, 1222 (Colo. App. 1999).

D. Analysis

¶ 15 Trujillo contends that the trial court abused its discretion in rejecting the tendered instructions. We disagree.

¶ 16 Trujillo asserts that the tendered instructions were essential because they communicated his theory of the case.
However, the trial court instructed the jury on his theory of the case in an instruction that clearly stated that he believed the property he took from the house was “his sole property.” To the extent that the trial court had a duty to work with the defense in crafting a proper theory of defense instruction, we conclude that the trial court fulfilled that duty here by giving an alternative theory of the case instruction that encompassed Trujillo’s tendered instructions. See Nunez, 841 P.2d at 265 n.9. Moreover, the trial court specifically stated that defense counsel would be allowed to incorporate the property law concepts into her closing argument, which defense counsel did.

¶ 17 Trujillo asserts that the instructions he tendered were accurate statements of property law. In contrast, the People argue that the instructions misstated the law as it applies in criminal prosecutions for theft and criminal mischief. Because we conclude that the trial court did not abuse its discretion in drafting a theory of defense instruction that encompassed the defense’s tendered instructions, we do not address whether the rejected instructions were accurate statements of the law.

¶ 18 The jury instructions, as a whole, “fairly and adequately cover[ed] the issues presented.” People v. Pahl, 169 P.3d 169, 183 (Colo. App. 2006). Thus, we conclude that the trial court did not abuse its discretion in rejecting in part the defense-tendered jury instructions.

III. Evidence of Prior Foreclosure

¶ 19 Trujillo next asserts that the trial court erred in allowing the People to introduce evidence that another property of his had been foreclosed. We disagree.

A. Additional Facts

¶ 20 Before trial, Trujillo filed a motion to exclude evidence of other acts or res gestae evidence. Trujillo’s motion addressed several categories of other acts evidence, including evidence related to any “financial and/or legal problems” unrelated to the charged offenses.
During a motions hearing, the People stated that they did not intend to introduce any other acts or res gestae evidence. In a written ruling, the trial court granted Trujillo’s motion to exclude evidence of his unrelated financial and legal problems “unless the prosecution fe[lt] that the ‘door ha[d] been opened.’” The trial court further ordered that, if the People felt Trujillo introduced evidence of his other financial and legal problems, the People could request a bench conference during trial.

¶ 21 On the first day of trial, defense counsel stated that she was withdrawing her motion to exclude other acts evidence insofar as it pertained to evidence of Trujillo’s bankruptcy proceedings. During her opening statement, defense counsel then mentioned those proceedings.

¶ 22 Later, the People called the bank’s former vice president as an expert witness. During direct examination, the prosecutor asked the witness why the bank had declined to restructure Trujillo’s loan. The prosecutor also asked about Trujillo’s demeanor during interactions with the bank. Trujillo objected. After a bench conference, the trial court allowed the witness to testify on both matters.

¶ 23 Specifically, the witness testified that, during a conversation about restructuring the loan, Trujillo “seemed like he was very upset.” The witness recalled, “He got into [that] he had a piece of property that [another bank] had foreclosed on and it sounded like they had sold it for what [Trujillo] believed was a lot less, leaving him a large deficiency balance.”

¶ 24 During closing argument, the People alluded to the witness’s testimony and referred several times to Trujillo’s general animosity against banks.

B. Standard of Review

¶ 25 We review a trial court’s decision to admit other acts or res gestae evidence for an abuse of discretion. People v. Jimenez, 217 P.3d 841, 846 (Colo. App. 2008).
A court abuses its discretion if its decision to admit such evidence is manifestly arbitrary, unreasonable, or unfair. Id.

¶ 26 We review a preserved claim of nonconstitutional error for harmless error, reversing only if any error “substantially influenced the verdict or affected the fairness of the trial proceedings.” Hagos v. People, 2012 CO 63, ¶ 12, 288 P.3d 116, 119 (quoting Tevlin v. People, 715 P.2d 338, 342 (Colo. 1986)).

C. Applicable Law

¶ 27 Evidence is relevant if it has “any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence.” CRE 401. Generally speaking, “[t]he Colorado Rules of Evidence strongly favor the admission of relevant evidence.” People v. Brown, 2014 COA 155M-2, ¶ 22, 360 P.3d 167, 172. However, relevant evidence is nevertheless inadmissible when “its probative value is substantially outweighed by the danger of unfair prejudice, confusion of the issues, or misleading the jury.” CRE 403. Similarly, evidence of “other crimes, wrongs, or acts” is inadmissible to prove a person’s character “in order to show that he acted in conformity therewith,” though it may be admissible for other purposes, including proving intent. CRE 404(b).

¶ 28 “Res gestae is a theory of relevance which recognizes that certain evidence is relevant because of its unique relationship to the charged crime.” People v. Greenlee, 200 P.3d 363, 368 (Colo. 2009). However, “there is no need to consider an alternative theory of relevance, such as res gestae, where the evidence is admissible under general rules of relevancy.” Id.

D. Analysis

¶ 29 Trujillo contends that the evidence of the prior foreclosure action portrayed him as a “serial defaulter” and was impermissible under CRE 404(b) and 403. The People assert that the evidence was admissible as “directly relevant” to Trujillo’s intent and motive.
In the alternative, the People argue that the evidence was res gestae evidence. We agree with the People’s first argument that the evidence was admissible under CRE 401, and was not barred by CRE 403.1

[Footnote 1] During the bench conference, the trial court allowed the bank’s former vice president to testify after conducting an abbreviated CRE 404(b) analysis that did not specifically address the four-factor test set forth in People v. Spoto, 795 P.2d 1314, 1318 (Colo. 1990). The trial court did not admit the evidence under the res gestae doctrine. However, we can affirm a trial court’s evidentiary ruling on any ground supported by the record, “even if that ground was not articulated or considered by the trial court.” People v. Phillips, 2012 COA 176, ¶ 63, 315 P.3d 136, 153.

¶ 30 The evidence of the prior foreclosure was probative of the interactions between Trujillo and the bank — it made it more probable that Trujillo had the requisite intent to commit theft. It was therefore relevant under CRE 401. Further, the risk of unfair prejudice did not substantially outweigh the probative value of the evidence, especially where the prior foreclosure was referenced only in passing and the details of that foreclosure were not revealed. Thus, the evidence was not barred by CRE 403.

¶ 31 Because we conclude that the evidence of the prior foreclosure was relevant under CRE 401 and admissible under CRE 403, we need not address whether the evidence was res gestae evidence or “other acts” evidence under CRE 404(b). See Greenlee, 200 P.3d at 368-69. Accordingly, we conclude that the trial court did not err in allowing the testimony concerning the prior foreclosure action.

IV. Prosecutorial Misconduct

¶ 32 Trujillo argues that the prosecutor improperly commented on the district attorney’s screening process for bringing charges and Trujillo’s right not to testify, and improperly denigrated defense counsel. We perceive no basis for reversal.

A.
Additional Facts

¶ 33 During redirect examination of one of the People’s expert witnesses, an attorney who worked at the bank, the prosecutor asked whether the bank played a role in charging Trujillo. The prosecutor asked if the witness himself made the decision to file a criminal case, to which the witness replied, “No.” The prosecutor then asked, “[W]ho is it, according to your understanding, that makes those decisions on whether a case gets filed criminally?” The witness responded, “A complaint’s made to a police department or sheriff’s department and they make that decision in conjunction with I believe you.” The prosecutor clarified that “you” meant the district attorney’s office. The defense did not object.

¶ 34 During rebuttal closing argument, the prosecutor said, Did you hear all that? [Defense counsel]’s talking about all of this stuff, about what Trujillo’s intent was. And then did you hear her towards the end what she did? She says, and correct – this part was correct of what she said. My job is to prove intent, right. That is my burden. And she’s absolutely right. The Defendant has every right to remain silent, and he exercised that right and that is something that you cannot use against him. But it is completely ridiculous for [defense counsel] to get up here and say that [Trujillo] didn’t testify to what his intent was and then to go on and talk about what his intent actually was. We don’t know what his intent was because he never testified to that, which he has every right to do. But did you hear her? She’s up here saying his intent was this.

¶ 35 Trujillo objected on the basis that the prosecutor was denigrating defense counsel. The trial court sustained the objection as to the prosecutor’s tone, but overruled it as to content. The prosecutor then argued, “[I]f you go out and run somebody over and – and think that you had the right to do that, is that gonna be a legitimate defense by saying, well, I thought I could do that.
I didn’t – nobody ever told me. Nobody put it in writing. When I bought my car, in the instruction manual, nothing said that about that. That’s preposterous.” Trujillo did not renew his objection.

B. Standard of Review

¶ 36 In reviewing alleged prosecutorial misconduct, an appellate court engages in a two-step analysis. First, we determine whether the prosecutor’s conduct was improper based on the totality of the circumstances. Wend v. People, 235 P.3d 1089, 1096 (Colo. 2010). Second, we determine whether any misconduct warrants reversal under the proper standard of review. Id.

¶ 37 When the alleged misconduct is objected to at trial and is of constitutional magnitude, we review for constitutional harmless error. Id. When the alleged misconduct is not of a constitutional magnitude, and when the defense objected at trial, we subject the prosecutorial misconduct to harmless error review. Id. at 1097. Such prosecutorial misconduct will be considered harmless “whenever there is no reasonable probability that it contributed to the defendant’s conviction.” Crider v. People, 186 P.3d 39, 42 (Colo. 2008). When the defense did not object to the misconduct, we review for plain error. Wend, 235 P.3d at 1097-98.

C. Applicable Law

¶ 38 A prosecutor cannot comment on a “screening process” for charging cases “because it both hints that additional evidence supporting guilt exists and reveals the personal opinion of the prosecutor.” Domingo-Gomez v. People, 125 P.3d 1043, 1052 (Colo. 2005). It is also improper for a prosecutor to make remarks “for the obvious purpose of denigrating defense counsel.” People v. Jones, 832 P.2d 1036, 1038 (Colo. App. 1991). It is similarly improper for a prosecutor to comment on a defendant’s decision not to testify. Griffin v. California, 380 U.S. 609, 614 (1965); see also People v. Martinez, 652 P.2d 174, 177 (Colo. App.
1981) (noting that a prosecutor’s comment on a defendant’s silence constitutes reversible error when “the prosecution argued that such silence constituted an implied admission of guilt”).

¶ 39 Nevertheless, “[a] prosecutor is allowed considerable latitude in responding to the argument made by opposing counsel.” People v. Ramirez, 997 P.2d 1200, 1211 (Colo. App. 1999), aff’d, 43 P.3d 611 (Colo. 2001). Further, “[a]lthough it is improper for a prosecutor to assert that opposing counsel knows that the accused’s case is not meritorious,” the prosecutor may permissibly argue “that the evidence in support of defendant’s innocence lacked substance.” Id. at 1211; see also People v. Samson, 2012 COA 167, ¶ 31, 302 P.3d 311, 317 (stating that a prosecutor may permissibly “comment on the absence of evidence to support a defendant’s contentions”).

¶ 40 Appellate courts consider several factors in determining whether prosecutorial misconduct was prejudicial, including the nature of the error, the pervasiveness of the misconduct, the context, and the overall strength of the evidence supporting the conviction. People v. McBride, 228 P.3d 216, 225 (Colo. App. 2009); see also Crider, 186 P.3d at 43. For example, a reviewing court may consider whether proper jury instructions mitigated the prejudicial effect of prosecutorial misconduct. See People v. Castillo, 2014 COA 140M, ¶ 78, ___ P.3d ___, ___ (concluding prosecutor’s misstatements were harmless in light of instructions from the trial court and the defense’s closing argument) (cert. granted in part Nov. 23, 2015).

D. Analysis

¶ 41 Trujillo contends that three instances of prosecutorial misconduct require reversal. We disagree.

¶ 42 Trujillo first contends that the prosecutor improperly referred to a screening process while examining the expert witness. We perceive no prosecutorial misconduct.
The prosecutor here did not imply that he had engaged in a screening process to “weed out the weaker cases and, implicitly, that the State d[id] not consider this a weak case.” Domingo-Gomez, 125 P.3d at 1052 (concluding the prosecutor’s comment that “it takes a lot more than somebody saying that person did it” to bring charges was improper). Rather, the prosecutor clarified that the bank did not bring criminal charges and that the witness himself did not stand to gain as a result of Trujillo’s conviction. The People assert, and we agree, that the prosecutor’s question merely elicited testimony to establish that the district attorney’s office was responsible for pursuing the criminal charges against Trujillo.

¶ 43 Second, Trujillo asserts that the prosecutor impermissibly commented on his decision not to testify. We disagree. Even if we assume the comment on Trujillo’s decision not to testify was improper, not every comment on a defendant’s choice not to testify requires reversal. See Martinez, 652 P.2d at 177. “The determining factor is whether the defendant’s silence was used by the prosecution as a means of creating an inference of guilt,” id., and we conclude that the prosecutor’s comments here did not raise such an inference.

¶ 44 Finally, Trujillo contends that the prosecutor impermissibly denigrated defense counsel and the defense’s theory of the case during rebuttal closing argument. We agree that the prosecutor improperly denigrated defense counsel and the defense’s theory of the case when he characterized her arguments as “completely ridiculous” and “preposterous.”

¶ 45 However, we perceive no basis for reversal as a result of these improper remarks. The comments were limited to the People’s rebuttal closing argument. Moreover, significant evidence corroborated the jury’s finding of guilt — specifically, the undisputed evidence that Trujillo had removed an extensive amount of property from the house.
Viewing the record as a whole, we cannot say that there was a “reasonable probability” that the prosecutor’s remarks denigrating defense counsel contributed to Trujillo’s convictions. See Crider, 186 P.3d at 42. Thus, we determine the error was harmless. ¶ 46 In sum, though we agree that the prosecutor improperly denigrated defense counsel, we perceive no basis for reversal. V. Indeterminate Probation ¶ 47 Trujillo contends that the trial court did not have the statutory authority to sentence him to indeterminate probation. We disagree. A. Additional Facts ¶ 48 During the sentencing hearing, the People requested that Trujillo be placed on a “long period of probation . . . somewhere in the neighborhood of eight to ten years” because they anticipated that Trujillo would be ordered to pay substantial restitution.2 Trujillo requested unsupervised probation with a collections investigator monitoring his restitution payments.
[Footnote 2] The trial court ultimately ordered Trujillo to pay $171,421.97 in restitution. Trujillo separately appealed that order, and a division of this court affirmed in part, reversed in part, and remanded for reconsideration. People v. Trujillo, (Colo. App. No. 14CA2486, Oct. 5, 2017) (not published pursuant to C.A.R. 35(e)).
¶ 49 The trial court imposed an “indefinite probation sentence” because of the substantial restitution that Trujillo was expected to owe. In imposing an indeterminate probation sentence, the trial court stated, “There is case law that talks about whether [indeterminate probation] is something that can or should be imposed and it’s certainly something that is allowed regardless of the type of conviction that has been entered.” ¶ 50 The mittimus states that the sentence imposed was a term of probation for seven years to life. B. Standard of Review ¶ 51 The People contend that we should not consider this claim because a sentence to probation is not ordinarily subject to appellate review.
However, “where, as here, a defendant contends that ‘a court has exceeded its statutory authority’ in imposing a probationary sentence, appellate review is warranted.” People v. Jenkins, 2013 COA 76, ¶ 10, 305 P.3d 420, 423 (quoting People v. Rossman, 140 P.3d 172, 174 (Colo. App. 2006)). ¶ 52 “We review sentencing decisions that are within the statutory range for an abuse of discretion.” People v. Torrez, 2013 COA 37, ¶ 71, 316 P.3d 25, 37. However, where the defendant contends that a court exceeded its statutory sentencing authority, our inquiry involves statutory interpretation. Jenkins, ¶ 12, 305 P.3d at 423. We review such issues of statutory interpretation de novo. Id. C. Applicable Law ¶ 53 Under section 18-1.3-202(1)(a), C.R.S. 2017, a trial court “may grant the defendant probation for such period and upon such terms and conditions as it deems best.” Further, “[t]he length of probation shall be subject to the discretion of the court and may exceed the maximum period of incarceration authorized for the classification of the offense of which the defendant is convicted.” Id. ¶ 54 In Jenkins, a division of this court concluded that section 18-1.3-202(1) “authorizes a trial court to impose an indeterminate term of probation.” Jenkins, ¶ 38, 305 P.3d at 426. The Jenkins division bolstered its conclusion by looking to the plain language of the statute — which the division noted “contemplate[s] both determinate and indeterminate terms of probation” — and to the provision’s legislative history. Id. at ¶¶ 40, 42, 46, 305 P.3d at 426-28. Finally, the division noted that section 18-1.3-202(1) “generally pertains to a broad class of cases, and it simply allows a trial court to elect an indeterminate term if it sentences an offender who has been convicted of a felony to probation.” Id. at ¶ 50, 305 P.3d at 428 (upholding probationary sentence of ten years to life); see also People v. Martinez, 844 P.2d 1203, 1206 (Colo. App.
1992) (concluding that a trial court has authority to impose a term of probation that exceeds the sentence to imprisonment in the statutory aggravated range for an offense). D. Analysis ¶ 55 Trujillo asserts that the trial court exceeded its statutory authority in imposing an indeterminate probationary sentence. We disagree. ¶ 56 Like the Jenkins division, we conclude that section 18-1.3-202(1) gives a trial court the authority to sentence a defendant convicted of a felony to an indefinite probationary period. Trujillo urges that the statute limits a trial court’s authority to impose an indeterminate probation sentence. Under Trujillo’s logic, a sentence to probation for 100 years is permissible, but an indeterminate probation sentence is outside the trial court’s statutory authority. The statute offers no basis for reaching this conclusion. ¶ 57 Trujillo asserts that Jenkins is distinguishable because that case concerned whether a defendant convicted of a sex offense not falling under the supervision scheme of the Colorado Sex Offender Lifetime Supervision Act of 1998 (SOLSA), see §§ 18-1.3-1001 to -1012, C.R.S. 2017, could nevertheless be sentenced to indeterminate probation. Jenkins, ¶ 1, 305 P.3d at 422. Trujillo contends that Jenkins was limited to the particular circumstances of that case, and does not widely apply to all offenses and defendants. However, the Jenkins division made clear that section 18-1.3-202(1) “establishes a general rule as far as the possibility of an indeterminate probationary term for felonies” and “authorizes a trial court to impose an indeterminate term of probation.” Id. at ¶¶ 38, 50, 305 P.3d at 426, 428. In fact, Jenkins explicitly rejected the argument that a sentence of indeterminate probation could be imposed only in sex offense cases subject to SOLSA. Id. at ¶¶ 49-50, 305 P.3d at 428. Thus, Trujillo’s argument that Jenkins is limited to sex offenses is unavailing.
¶ 58 In sum, we conclude that the trial court did not exceed its statutory authority in imposing the probation sentence here. VI. Costs of Prosecution ¶ 59 Trujillo next asserts that the trial court erred in awarding the full costs of prosecution requested by the People without making a finding on whether any portion of the costs was attributable to the charge on which he was acquitted. We agree. A. Additional Facts ¶ 60 Before sentencing, the People moved for reimbursement of the costs of prosecution pursuant to section 18-1.3-701, C.R.S. 2017. The People requested $768.70. Trujillo opposed the motion on the basis that the People bore responsibility for the costs incurred to prove the defrauding a secured creditor charge, of which Trujillo was acquitted. ¶ 61 During the sentencing hearing, the trial court awarded the requested costs of prosecution, ordering Trujillo to pay $768.70. B. Standard of Review ¶ 62 The trial court, in its discretion, may assess reasonable and necessary costs of prosecution against a convicted defendant. See § 18-1.3-701(2)(j.5). Thus, we review an assessment of costs of prosecution for an abuse of discretion, reversing if the trial court’s determination is manifestly arbitrary, unreasonable, or unfair, People v. Palomo, 272 P.3d 1106, 1110 (Colo. App. 2011), or if the trial court misapplied the law, People v. Jefferson, 2017 CO 35, ¶ 25, 393 P.3d 493, 499. C. Applicable Law ¶ 63 Under section 16-18-101(1), C.R.S. 2017, the state bears the costs of prosecution when a defendant is acquitted. Such costs may include witness fees, mileage, lodging expenses, transportation costs, and other reasonable and necessary costs that directly result from prosecuting the defendant. § 18-1.3-701(2); see also People v. Sinovcic, 2013 COA 38, ¶¶ 15-16, 304 P.3d 1176, 1179.
If a defendant is convicted of fewer than all of the charged counts, the court may assess only those costs attributable to the counts for which the defendant was convicted, if an allocation is practicable. Palomo, 272 P.3d at 1112. D. Analysis ¶ 64 Trujillo asserts that the trial court erred in not making a finding as to whether some portion of the requested costs of prosecution were allocable to the acquitted charge. We agree. ¶ 65 As Trujillo concedes, it is possible that the costs cannot be allocated between the charge on which he was acquitted and the two charges on which he was convicted. However, the trial court did not find that such an allocation was impracticable. Because the trial court was required to consider whether some portion of the requested costs was practicably attributable to the acquitted charge, the trial court abused its discretion. See DeBella v. People, 233 P.3d 664, 667 (Colo. 2010) (failure to exercise discretion constitutes an abuse of the court’s discretion). ¶ 66 Accordingly, we vacate the order awarding the People costs of prosecution and remand for the trial court to make appropriate findings of fact and “assess only those costs that are related to the prosecution of the . . . counts of which [Trujillo] was convicted, to the extent an allocation is practicable.” Palomo, 272 P.3d at 1113. VII. Amendment to Theft Statute ¶ 67 Trujillo contends that he should have benefited from an amendment to the theft statute reclassifying theft between $20,000 and $100,000 as a class 4 felony. We agree. A. Additional Facts ¶ 68 The General Assembly amended the theft statute on June 5, 2013. See Ch. 373, sec. 1, § 18-4-401, 2013 Colo. Sess. Laws 2196. Under the amended statute, theft between $20,000 and $100,000 constitutes a class 4 felony. See § 18-4-401(2)(h), C.R.S. 2017. Prior to the amendment, theft over $20,000 constituted a class 3 felony. § 18-4-401(2)(d), C.R.S. 2011.
¶ 69 Trujillo was charged with theft of $20,000 or more in April 2011. He was convicted in October 2013 and sentenced in December 2013. His theft conviction was recorded on the mittimus as a class 3 felony. B. Standard of Review ¶ 70 The People assert that, because Trujillo did not make this argument before the trial court, we should review only for plain error. However, the division in People v. Stellabotte rejected this argument. 2016 COA 106, ¶ 42, ___ P.3d ___, ___ (noting that plain error review was inappropriate because “a defendant may raise a claim at any time that his or her sentence was not authorized by law”) (cert. granted Feb. 6, 2017). Following Stellabotte, we review the legality of the sentence de novo. Id. at ¶ 4, ___ P.3d at ___. C. Applicable Law ¶ 71 In determining whether to apply amendments to legislation, we first look to the plain language of the statute. People v. Summers, 208 P.3d 251, 253-54 (Colo. 2009). If a statute explicitly states that it applies only to offenses committed after the effective date, it must be applied accordingly. See People v. McCoy, 764 P.2d 1171, 1174 (Colo. 1988). ¶ 72 As a general rule, “[a] statute is presumed to be prospective in its operation.” § 2-4-202, C.R.S. 2017. However, if a statute is silent as to whether it applies only prospectively, a defendant may seek retroactive application if he or she benefits from a significant change in the law. § 18-1-410(1)(f)(I), C.R.S. 2017; see also People v. Thornton, 187 Colo. 202, 203, 529 P.2d 628, 628 (1974) (allowing defendant to seek relief on direct appeal under statute). ¶ 73 In Stellabotte, a division of this court concluded that the amendatory theft legislation “applies retroactively to cases pending in the trial court when the amendment was enacted.” Stellabotte, ¶ 45, ___ P.3d at ___; People v. Patton, 2016 COA 187, ¶ 32, ___ P.3d ___, ___; see also People v. Patton, (Colo. App. No. 14CA2359, Aug. 11, 2016) (not published pursuant to C.A.R.
35(e)) (cert. granted Feb. 6, 2017). D. Analysis ¶ 74 Trujillo contends that the amendment to the theft statute requires that we vacate his sentence and remand for the trial court to enter his theft conviction as a class 4 felony. We agree. ¶ 75 As the division noted in Stellabotte, the theft amendment does not explicitly state that it is either retroactive or prospective. Stellabotte, ¶ 45, ___ P.3d at ___. In the face of this legislative silence, the division held that a defendant who committed theft prior to the statutory amendment but was not convicted until after its passage was entitled to the benefit retroactively. See id. at ¶¶ 39, 45, ___ P.3d at ___. The same is true here. ¶ 76 Trujillo was charged with theft before the statute was amended, but was not convicted or sentenced until after the General Assembly lowered the classification for theft between $20,000 and $100,000.3 Thus, like the defendant in Stellabotte, Trujillo is entitled to the benefit of the amendment. As a result, we vacate the sentence for the theft conviction and remand for the conviction to be entered as a class 4 felony.
[Footnote 3] Trujillo asserts that the theft was between $20,000 and $100,000 based on testimony from trial. The People do not contest the value of the stolen property in this case. We therefore assume that Trujillo’s offense properly fell within the value range set forth in section 18-4-401(2)(h), C.R.S. 2017.
¶ 77 The partial dissent looks to several statutory provisions in support of its conclusion that Trujillo is not entitled to the benefit of the amendatory legislation. First, the partial dissent cites section 2-4-202, which states the general presumption that statutes apply prospectively. However, as the division noted in Stellabotte, section 18-1-410 is a specific exception to the general rule expressed in section 2-4-202. Stellabotte, ¶ 47 n.4, ___ P.3d at ___ n.4. We agree with that analysis. Thus, the general presumption that statutes apply prospectively does not apply here where Trujillo seeks the benefit of a “significant change in the law, . . . allowing in the interests of justice retroactive application of the changed legal standard.”4 § 18-1-410(1)(f)(I).
[Footnote 4] The partial dissent also asserts that section 18-1-410(1)(f)(I), C.R.S. 2017, does not provide any relief to Trujillo because that provision requires that “there has been significant change in the law, applied to the [defendant’s] conviction or sentence.” The partial dissent asserts that the phrase “applied to” requires that the legislation expressly state that it applies retroactively. We disagree with that interpretation, and believe that our view finds authority in supreme court case law. See People v. Thomas, 185 Colo. 395, 397, 525 P.2d 1136, 1137 (1974) (noting that “[t]he legislature intended the changed legal standards to apply wherever constitutionally permissible” but making no mention of whether the amendatory legislation reclassifying attempted second degree burglary explicitly stated that it applied retroactively).
¶ 78 The partial dissent also invokes section 2-4-303, C.R.S. 2017, in support of its conclusion. Section 2-4-303 states: The repeal, revision, amendment, or consolidation of any statute or part of a statute or section or part of a section of any statute shall not have the effect to release, extinguish, alter, modify, or change in whole or in part any penalty, forfeiture, or liability, either civil or criminal, which shall have been incurred under such statute, unless the repealing, revising, amending, or consolidating act so expressly provides. ¶ 79 However, the supreme court has noted that the “general saving” provision codified in this statute is not applicable to criminal cases; instead, the court noted in dictum that it “has consistently adhered to the principle . . .
that a defendant is entitled to the benefits of amendatory legislation when relief is sought before finality has attached to the judgment of conviction.” Noe v. Dolan, 197 Colo. 32, 36 n.3, 589 P.2d 483, 486 n.3 (1979). ¶ 80 In People v. Boyd, a division of the court of appeals concluded that section 2-4-303 did not prevent the retroactive effect of an amendatory constitutional provision. 2015 COA 109, ¶ 27, 395 P.3d 1128, 1134, aff’d, 2017 CO 2, 387 P.3d 755.5 The division noted the supreme court’s language in Noe. Id. at ¶ 28, 395 P.3d at 1134. To the extent that other supreme court cases included contrary statements, the Boyd division concluded that such statements were dicta and that the supreme court had not overruled or disapproved of either Noe or People v. Thomas, 185 Colo. 395, 398, 525 P.2d 1136, 1138 (1974) (holding that “amendatory legislation mitigating the penalties for crimes should be applied to any case which has not received final judgment”). Boyd, ¶¶ 29-30, 395 P.3d at 1134-35. Finally, the Boyd division concluded that section 18-1-410(1)(f)(I) controls over section 2-4-303 because the former sets forth a specific exception to the latter, which codifies a “general rule[] of construction regarding prospective effect for amendatory legislation.” Id. at ¶¶ 31-32, 395 P.3d at 1135. We agree with the Boyd division’s analysis and therefore do not perceive section 2-4-303 as a bar to the relief Trujillo seeks.
[Footnote 5] The supreme court in Boyd affirmed the Court of Appeals decision on different grounds, concluding that the marijuana criminal offense statute had been rendered inoperative by Amendment 64. Neither the majority nor the dissent in Boyd cited section 2-4-303, C.R.S. 2017.
¶ 81 In making its statutory arguments, the partial dissent relies on the plain meaning of both section 2-4-303 and section 18-1-410(1)(f)(I). However, as discussed, the supreme court has not given either provision its plain meaning.
Despite express reference in section 2-4-303 to civil and criminal penalties, the supreme court has indicated that the provision does not apply to criminal cases. Noe, 197 Colo. at 36 n.3, 589 P.2d at 486 n.3. Similarly, while section 18-1-410(1)(f)(I) by its express terms applies to defendants seeking postconviction relief, the supreme court has held that the statute also extends to defendants seeking relief on direct appeal. Thornton, 187 Colo. at 203, 529 P.2d at 628. In light of the supreme court’s interpretation of these statutes, we cannot give them the meanings that the partial dissent ascribes to them. ¶ 82 Finally, the partial dissent also relies on Riley v. People, in which the supreme court noted that it has “emphasized that a defendant is not entitled to the ameliorative effects of amendatory legislation if the General Assembly has not clearly indicated its intent to require such retroactive application.” 828 P.2d 254, 258 (Colo. 1992). However, we do not consider this statement to have the controlling effect the partial dissent gives it. In Riley, the defendant committed a crime in April 1988 and sought relief under two sentencing provisions that expressly stated they applied to acts “committed on or after” July 1, 1988. Id. at 255-56. The Riley court held the defendant there was not entitled to relief because applying the statutes retroactively would require the court to ignore the “clear legislative determination” that the amended sentencing provisions would apply only to acts after that date. Id. at 257. ¶ 83 Thus, Riley is readily distinguishable from the present case, where the amendments to the theft statute do not expressly provide an effective date, and the language relied on by the partial dissent is dicta.
Accord McCoy, 764 P.2d at 1174 (noting that, where legislation expressly stated it applied to acts committed on or after its effective date, a “defendant does not receive any ameliorative benefit” because “retroactive application of the amendatory legislation is clearly not intended by its own terms”); People v. Macias, 631 P.2d 584, 587 (Colo. 1981) (same). ¶ 84 Thus, we conclude, in accordance with Stellabotte, that Trujillo should receive the benefit of the amendment to the theft statute reclassifying theft between $20,000 and $100,000 as a class 4 felony. See Stellabotte, ¶ 40, ___ P.3d at ___. VIII. Conclusion ¶ 85 Accordingly, the judgment of conviction is affirmed. The sentence is affirmed in part and vacated in part, and the case is remanded for further proceedings consistent with the views expressed in this opinion. JUDGE RICHMAN concurs. JUDGE FURMAN concurs in part and dissents in part. JUDGE FURMAN, concurring in part and dissenting in part. ¶ 86 I respectfully dissent from the majority’s opinion only as to the effect of the 2013 amendments to the theft statute. I conclude that the 2013 amendments to the theft statute do not apply retroactively to Trujillo’s case. I reach this conclusion for several reasons. ¶ 87 First, the General Assembly has made it clear that a “statute is presumed to be prospective in its operation.” § 2-4-202, C.R.S. 2017. The 2013 amendments to the theft statute are silent as to whether they apply prospectively or retroactively. Therefore, I presume that the 2013 amendments are prospective in operation and do not apply to Trujillo’s offense, which occurred before 2013. See id. ¶ 88 Second, an amendment to a criminal statute does not change the penalty for crimes already committed under the statute unless the amendatory legislation expressly provides for such a change. See § 2-4-303, C.R.S. 2017. Section 2-4-303 provides, in relevant part: The . . . amendment . . . of any statute or part of a statute . . .
shall not have the effect to release, extinguish, alter, modify, or change in whole or in part any penalty, forfeiture, or liability, either civil or criminal, which shall have been incurred under such statute, unless the . . . amending . . . act so expressly provides, and such statute or part of a statute . . . so . . . amended . . . shall be treated and held as still remaining in force for the purpose of sustaining any and all proper actions, suits, proceedings, and prosecutions, criminal as well as civil, for the enforcement of such penalty, forfeiture, or liability, as well as for the purpose of sustaining any judgment, decree, or order which can or may be rendered, entered, or made in such actions, suits, proceedings, or prosecutions imposing, inflicting, or declaring such penalty, forfeiture, or liability. Because the 2013 amendments to the theft statute do not expressly provide that they apply retroactively, and Trujillo committed his crime before 2013, he is liable for theft as it was defined when he committed the offense. See id. ¶ 89 Third, in Riley v. People, 828 P.2d 254, 258 (Colo. 1992), our supreme court “emphasized that a defendant is not entitled to the ameliorative effects of amendatory legislation if the General Assembly has not clearly indicated its intent to require such retroactive application.” Id. I consider this statement by the supreme court about its own jurisprudence on this issue to be controlling. ¶ 90 Fourth, section 18-1-410(1)(f)(I), C.R.S. 2017, does not allow Trujillo, on direct appeal, to seek retroactive application of the 2013 amendments to his case.
Section 18-1-410(1)(f)(I) allows a defendant to seek retroactive application of a “significant change in the law, applied to” a defendant’s “conviction or sentence.” I believe that the phrase “applied to” reflects the General Assembly’s intent that, for amendatory legislation to apply retroactively to a defendant’s conviction or sentence, the legislation must state that it applies retroactively. Thus, because, as noted, the 2013 amendments do not state that they apply retroactively to Trujillo’s conviction and sentence, he may not seek retroactive application under section 18-1-410(1)(f)(I). ¶ 91 Finally, and with all due respect, I decline to follow People v. Stellabotte, 2016 COA 106 (cert. granted Feb. 6, 2017). Indeed, I agree with Judge Dailey’s dissent in Stellabotte. See id. at ¶¶ 62-70 (Dailey, J., concurring in part and dissenting in part).
Introduction {#sec1-1}
============

Infliximab (IFX), a chimeric anti-TNFα antibody, is effective in inducing and maintaining remission in a considerable proportion of IBD patients refractory to any other treatments \[[@ref1],[@ref2]\]. However, 8-12% of adult and/or paediatric patients fail to respond to the induction regimen (known as primary non-responders) and approximately 40% of patients who respond initially and achieve clinical remission inevitably lose response over time \[[@ref3]-[@ref7]\]. Lack of response to IFX appears to be a stable trait, suggesting that the differences in response might be in part genetically determined. Considering the high cost and safety profile of this drug, genetic targeting of the patients likely to respond to this therapy is certainly of great interest \[[@ref8]\]. So far, only limited candidate gene association studies of response to IFX have been reported \[[@ref9]-[@ref11]\]. Recently, a genome-wide association study (GWAS) in paediatric IBD patients revealed that the 21q22.2/BRWDI loci were associated with primary non-response \[[@ref12]\]. Furthermore, although the TNFα gene is of great interest as a candidate gene for pharmacogenetic approaches, few studies have been performed to date and some have led to contradictory results \[[@ref10],[@ref11],[@ref13]-[@ref15]\]. All anti-TNF agents share an IgG1 Fc fragment, but the contribution of the Fc portion to the response to treatment among currently used TNF blockers remains unknown. Receptors for the IgG-Fc portion (FcR) are important regulatory molecules of inflammatory responses. FcR polymorphisms alter receptor function by enhancing or diminishing the affinity for immunoglobulins \[[@ref16]\]. Three major classes of FcR capable of binding IgG antibodies are recognised: FcγRΙ (CD64), FcγRΙΙ (CD32), and FcγRΙΙΙ (CD16). FcγRΙΙ and FcγRΙΙΙ have multiple isoforms (FcγRΙΙA/C and B; FcγRΙΙΙA and B) \[[@ref16]\].
The most frequent polymorphism of *FcγRΙΙΙA* is a point mutation affecting the amino acid at codon 158 in the extracellular domain. This results in either a valine (V158) or a phenylalanine (F158) at this position. Recently, it has been reported that CD patients with the *FcγRΙΙΙA* -158V/V genotype had a better biological and possibly better clinical response to IFX \[[@ref17]\]. However, further studies did not confirm this observation \[[@ref18]\]. The aim of this study was to assess whether the *TNF* and/or *FcγRΙΙΙA* gene polymorphisms are genetic predictors of response to IFX, in a cohort of Greek patients with adult or paediatric onset of CD.

Patients - Methods {#sec1-2}
==================

Patients {#sec2-1}
--------

We enrolled 106 consecutive patients with newly diagnosed CD attending the outpatient IBD Clinic at the 1^st^ Department of Gastroenterology, "Evangelismos" Hospital (79 adults) or the 1^st^ Department of Pediatrics, University Hospital of Athens "Aghia Sophia" (27 children). The diagnosis of CD was based on standard clinical, endoscopic, radiological, and histological criteria \[[@ref1],[@ref19]\]. Eligible patients were required to have inflammatory (luminal) disease and to be naive to IFX. IFX was administered intravenously at a dose of 5mg/kg at weeks 0, 2, 6 and then every 8 weeks. Clinical and serological responses were assessed using the Harvey-Bradshaw Index (HBI) \[[@ref20]\] and the serum levels of C-reactive protein (CRP), respectively, at baseline (before the 1st infusion of IFX), the day before each subsequent IFX infusion, and after 12 weeks of treatment. Ileocolonoscopy was performed by a single endoscopist (GJM) at baseline and after 12-20 weeks of therapy to assess mucosal healing. Any changes in endoscopic appearance compared to baseline endoscopy were classified in four categories \[[@ref21],[@ref22]\] \[[Table 1](#T1){ref-type="table"}\].
Patients were classified according to response to IFX therapy as shown in [table 2](#T2){ref-type="table"}. The ethics committees of the participating hospitals approved the study. Research was carried out according to the Declaration of Helsinki (1975), and written informed consent was obtained in advance from each patient.

###### Grading of endoscopic mucosal lesions \[[@ref21],[@ref22]\]

![](AnnGastroenterol-24-35-g001)

###### Classification of the study population according to response to infliximab therapy

![](AnnGastroenterol-24-35-g002)

Genotyping {#sec2-2}
----------

Genomic DNA from whole blood containing EDTA was extracted using standard techniques (NucleoSpin Blood kit, Macherey-Nagel, Germany). All polymerase chain reactions (PCRs) were run under conditions previously described \[[@ref23]\]. Primer sequences for the gene polymorphism at −308 were forward 5′-GGG ACA CAC AAG CAT CAA GG-3′ and reverse 5′-GGG ACA CAC AAG CAT CAA GG-3′, and for the polymorphism at −238 forward 5′-ATC TGG AGG AAG CGG TAG TG-3′ and reverse 5′-AGA AGA CCC CCC TCG GAA CC-3′. The PCR products were digested at 37 °C with NcoI to detect the SNP at the −308 position and with MspI to detect the polymorphism at the −238 nucleotide. The −857 C/T polymorphism was analyzed by an allele-specific PCR method \[[@ref24]\] using the primers TNF857-C: 5′-aag gat aag ggc tca gag ag-3′, TNF857-N: 5′-cta cat ggc cct gtc ttc g-3′ and TNF857-M: 5′-t cta cat ggc cct gtc ttc a-3′. The −158V/F polymorphism of the *FcγRΙΙΙA* gene was detected as described by Leppers-van de Straat et al \[[@ref25]\] using the primers 5′-CTG AAG ACA CAT TTT TACT CC CAA (A/C)-3′ and 5′-TCC AAA AGC CAC ACT CAA AGA C-3′. The PCR products were then subjected to 3% agarose-gel electrophoresis. "No target" controls were included in each PCR batch to ensure that reagents had not been contaminated.

Statistical Analysis {#sec2-3}
--------------------

Genotype frequencies were compared with the chi-square test with Yates' correction using S-Plus (v.
6.2, Insightful, Seattle, WA). Odds ratios (ORs) and 95% confidence intervals (CIs) were obtained with GraphPad (v. 3.00, GraphPad Software, San Diego, CA). All *P* values are two-sided. Correction for multiple testing was not applied in this study. *P* values of \< 0.05 were considered to be significant.

Results {#sec1-3}
=======

Patient demographic and clinical characteristics are given in [Table 3](#T3){ref-type="table"}. There were 68 (64.15%) complete responders, 25 (23.58%) partial responders and 13 (12.26%) non-responders to IFX in this study. There were no statistically significant differences in mean age, gender, disease duration, location and behavior, or smoking habits between complete or partial responders and primary non-responders. There was no disagreement between HBI scores and serum CRP levels. Although the post-treatment CRP levels were significantly lower in complete responders compared to partial and non-responders, the decrease in CRP levels did not differ significantly between the three groups. Post-treatment CRP levels and the mean HBI score were significantly lower in complete responders compared to pre-treatment values, in contrast to partial and/or non-responders, where the CRP levels and the mean HBI score did not differ significantly.

###### Demographic, clinical and biological characteristics of the study population

![](AnnGastroenterol-24-35-g003)

The -238 G/A, -308 G/A, and -857 C/T polymorphisms of the TNF gene and the -158 V/F polymorphism in the *FcγRΙΙΙA* gene were successfully determined in all subjects. The genotype distributions in complete, partial and non-responders are presented in [Table 4](#T4){ref-type="table"}. No significant difference was observed for any of the polymorphisms tested. In addition, although there may be genetic differences between early (paediatric)-onset and late (adult)-onset CD, we were unable to detect any such differences; however, the number of paediatric patients included in the current study did not allow firm conclusions.
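The genotype comparisons described in the Statistical Analysis section (chi-square with Yates' correction, plus odds ratios with 95% confidence intervals) can be reproduced with a few lines of code. The sketch below uses the standard Yates-corrected formula for a 2x2 table and the log(OR) ± 1.96·SE approximation for the CI; the counts are purely illustrative and are not the study's actual data.

```python
import math

# Hypothetical 2x2 genotype-by-response table (illustrative counts only,
# not this study's data): rows = responders / non-responders,
# columns = allele carriers / non-carriers.
a, b = 40, 28   # responders:     carriers, non-carriers
c, d = 5, 8     # non-responders: carriers, non-carriers
n = a + b + c + d

# Chi-square statistic with Yates' continuity correction for a 2x2 table:
# chi2 = n * (|ad - bc| - n/2)^2 / ((a+b)(c+d)(a+c)(b+d))
chi2 = n * (abs(a * d - b * c) - n / 2) ** 2 / (
    (a + b) * (c + d) * (a + c) * (b + d)
)

# Odds ratio with a 95% CI from the log(OR) +/- 1.96 * SE approximation,
# where SE = sqrt(1/a + 1/b + 1/c + 1/d).
odds_ratio = (a * d) / (b * c)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"chi2 (Yates) = {chi2:.3f}")
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

A statistical package (e.g. `scipy.stats.chi2_contingency` with `correction=True`) would give the same Yates-corrected statistic together with the *P* value; the explicit formula is shown here only to make the computation transparent.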
###### Genotype frequency in complete responders, partial responders and non responders

![](AnnGastroenterol-24-35-g004)

In the present study, we could not correlate the decrease in serum CRP levels with the genotypes tested in any particular group of patients, since in most cases serum CRP levels dropped by more than 25% after 12 weeks of treatment. However, no significant difference in the CRP decrease was observed between the TNF genotypes tested. Regarding the -158 V/F polymorphism in the *FcγRΙΙΙA* gene, the relative decrease in serum CRP levels was greatest in VV homozygotes (78.15 ± 33.68%) and lowest in FF homozygotes (69.84 ± 28.7%), but this difference was not significant. Due to the small number of cases we did not stratify the genotype frequencies according to age.

Discussion {#sec1-4}
==========

The mechanism of IFX action in IBD seems to be multifactorial and the response to IFX is a complex phenomenon influenced by several parameters \[[@ref1]\]. Interestingly, a certain proportion of patients do not respond to IFX at all, whereas a significant proportion will lose response over time \[[@ref3]-[@ref7]\]. This is the first Greek study aiming to identify any significant association between the -238 G/A, -308 G/A, and -857 C/T polymorphisms in the promoter region of the TNF gene and the -158V/F polymorphism in the *FcγRΙΙΙA* gene and response to IFX in a cohort of adult and paediatric patients with CD, and it was negative. Efficacy of IFX was assessed by clinical, serological and endoscopic parameters. Clinical response to IFX was evaluated using the HBI, which has been used in many clinical trials, is simple to use, and has shown good correlation with the Crohn's Disease Activity Index (CDAI) \[[@ref26]\]. Serological evaluation of response to IFX was based on changes in serum levels of CRP, which has shown a good correlation with clinical activity and, to a certain degree, with endoscopic activity of CD \[[@ref27]\].
Finally, endoscopic activity of disease was assessed before and after IFX therapy using a simple description of healing of ulcerative and non-ulcerative lesions \[[Table 1](#T1){ref-type="table"}\], as has been previously described \[[@ref21],[@ref22]\]. Endoscopic healing was assessed after 12-20 weeks of IFX treatment. It is conceivable that 12 weeks may be too early to assess mucosal healing induced by biologic therapies \[[@ref27]\], but the vast majority of patients underwent endoscopy at least 16 weeks after initiation of IFX therapy (average time 17.6 weeks), and therefore it is unlikely that we did not obtain an objective view of the intestinal mucosa at follow-up ileocolonoscopy. Regarding the *TNF* genotypes, our results are in agreement with Louis et al \[[@ref11]\], who did not find any significant difference between response groups when they genotyped CD patients for the TNF -308 G/A polymorphism and compared response rates after IFX treatment. The same results were reported by Mascheretti et al \[[@ref10]\] and Dideberg et al \[[@ref13]\]. Moreover, our results are in agreement with Tomita et al \[[@ref28]\], who reported no significant differences in *TNFa*, *FcgammaRIIA* and *FcgammaRIIIA* between responders and non-responders 8 weeks after IFX treatment, as well as with the results of the ACCENT I study, where the relative decrease in serum CRP levels after IFX treatment was greatest in -158 VV homozygotes and lowest in FF homozygotes \[[@ref18]\]. In contrast, Louis et al \[[@ref17]\] observed a significant association between the -158 V/F polymorphism in *FcγRΙΙΙA* and both the proportion of patients who had a drop in serum CRP levels after IFX treatment and the magnitude of the decrease in serum CRP levels. This discrepancy may be accounted for by the relatively small population of patients in our study, genetic differences in the studied populations and/or methodological differences between studies.
Although it would be useful to genetically differentiate 'responders' from 'non-responders', there are not enough data on TNF polymorphisms in IBD, and often only selected polymorphisms are genotyped. Small studies have shown possible associations between poor response to IFX and increased mucosal levels of activated NF-kappaB, homozygosity for the polymorphism in exon 6 of TNFR2 (genotype Arg196Arg), positivity for perinuclear antineutrophil cytoplasmic antibodies, and the presence of increased numbers of activated lamina propria mononuclear cells producing interferon-gamma and TNFa \[[@ref29]\].

In conclusion, our study did not detect any associations between three TNFα gene polymorphisms or the -158 V/F polymorphism in the *FcγRΙΙΙA* gene and response to IFX in CD. However, in view of discrepant results in the literature, large-scale pharmacogenetic studies in different populations, with similar baseline disease phenotypes and treatment protocols, are needed to adequately estimate associations between genetic polymorphisms and treatment outcomes.

Conflict of interest: None

^a^Evangelismos Hospital, ^b^Laboratory of Biology, School of Medicine, ^c^1^st^ Department of Pediatrics, School of Medicine, University of Athens, Greece
---------------------- Forwarded by Benjamin Rogers/HOU/ECT on 10/19/2000 03:13 PM ---------------------------

[email protected] on 10/18/2000 06:18:51 PM
To: [email protected]
cc:
Subject: (no subject)

Ben- This is a lengthy info/doc request - please give me feedback on how best we can close the loop. Thanks

Susan Flanagan
- DocReq 001013b.doc
The two classes `KinesisRecorder` and `KinesisFirehoseRecorder` allow you to interface with Amazon Kinesis Data Streams and Amazon Kinesis Data Firehose to stream analytics data for real-time processing.

## What is Amazon Kinesis Data Streams?

[Amazon Kinesis Data Streams](http://aws.amazon.com/kinesis/) is a fully managed service for real-time processing of streaming data at massive scale. Amazon Kinesis can collect and process hundreds of terabytes of data per hour from hundreds of thousands of sources, so you can write applications that process information in real-time. With Amazon Kinesis applications, you can build real-time dashboards, capture exceptions and generate alerts, drive recommendations, and make other real-time business or operational decisions. You can also easily send data to other services such as Amazon Simple Storage Service, Amazon DynamoDB, and Amazon Redshift.

The Kinesis Data Streams `KinesisRecorder` client lets you store your Kinesis requests on disk and then send them all at once using the [PutRecords](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecords.html) API call of Kinesis. This is useful because many mobile applications that use Kinesis Data Streams will create multiple requests per second. Sending each request individually via the `PutRecord` action could adversely impact battery life. Moreover, the requests could be lost if the device goes offline. Thus, using the high-level Kinesis Data Streams client for batching can preserve both battery life and data.

## What is Amazon Kinesis Data Firehose?

[Amazon Kinesis Data Firehose](http://aws.amazon.com/kinesis/firehose/) is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3) and Amazon Redshift. With Kinesis Data Firehose, you do not need to write any applications or manage any resources.
You configure your data producers to send data to Firehose and it automatically delivers the data to the destination that you specified. The Amazon Kinesis Data Firehose `KinesisFirehoseRecorder` client lets you store your Kinesis Data Firehose requests on disk and then send them using the [PutRecordBatch](https://docs.aws.amazon.com/firehose/latest/APIReference/API_PutRecordBatch.html) API call of Kinesis Data Firehose.

For more information about Amazon Kinesis Data Firehose, see [Amazon Kinesis Data Firehose](http://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html).

## Integrating Amazon Kinesis

Set up AWS Mobile SDK components by including the following libraries in your `app/build.gradle` dependencies list.

```groovy
dependencies {
    implementation 'com.amazonaws:aws-android-sdk-kinesis:2.15.+'
    implementation ('com.amazonaws:aws-android-sdk-mobile-client:2.15.+@aar') { transitive = true }
}
```

* `aws-android-sdk-kinesis` library enables sending analytics to Amazon Kinesis.
* `aws-android-sdk-mobile-client` library gives access to the AWS credentials provider and configurations.

Add the following imports to the main activity of your app.

```java
import com.amazonaws.mobileconnectors.kinesis.kinesisrecorder.*;
import com.amazonaws.mobile.client.AWSMobileClient;
import com.amazonaws.regions.Regions;
```

To use Kinesis Data Streams in an application, you must set the correct permissions. The following IAM policy allows the user to submit records to a specific data stream, which is identified by [ARN](http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html).

```json
{
    "Statement": [{
        "Effect": "Allow",
        "Action": "kinesis:PutRecords",
        "Resource": "arn:aws:kinesis:us-west-2:111122223333:stream/mystream"
    }]
}
```

The following IAM policy allows the user to submit records to a specific Kinesis Data Firehose delivery stream.
```json
{
    "Statement": [{
        "Effect": "Allow",
        "Action": "firehose:PutRecordBatch",
        "Resource": "arn:aws:firehose:us-west-2:111122223333:deliverystream/mystream"
    }]
}
```

This policy should be applied to roles assigned to the Amazon Cognito identity pool, but you need to replace the `Resource` value with the correct ARN for your Amazon Kinesis or Amazon Kinesis Data Firehose stream. You can apply policies at the [IAM console](https://console.aws.amazon.com/iam/). To learn more about IAM policies, see [Using IAM](http://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_Introduction.html). To learn more about Amazon Kinesis Data Streams policies, see [Controlling Access to Amazon Kinesis Data Streams Resources with IAM](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-using-iam.html). To learn more about Amazon Kinesis Data Firehose policies, see [Controlling Access with Amazon Kinesis Data Firehose](http://docs.aws.amazon.com/firehose/latest/dev/controlling-access.html).

## Working with the API

You can use `AWSMobileClient` to set up the Cognito credentials that are required to authenticate your requests with Amazon Kinesis.

```java
AWSMobileClient.getInstance().initialize(getApplicationContext(), new Callback<UserStateDetails>() {
    @Override
    public void onResult(UserStateDetails userStateDetails) {
        Log.i("INIT", userStateDetails.getUserState().toString());
    }

    @Override
    public void onError(Exception e) {
        Log.e("INIT", "Initialization error.", e);
    }
});
```

Once you have credentials, you can use `KinesisRecorder` with Amazon Kinesis. The following snippet creates a directory and instantiates the `KinesisRecorder` client:

```java
String kinesisDirectory = "YOUR_UNIQUE_DIRECTORY";
KinesisRecorder recorder = new KinesisRecorder(
    myActivity.getDir(kinesisDirectory, 0),
    Regions.<YOUR-AWS-REGION>,
    AWSMobileClient.getInstance()
);
// KinesisRecorder uses synchronous calls, so you shouldn't call
// KinesisRecorder methods on the main thread.
```

To use `KinesisFirehoseRecorder`, you need to pass in a directory where streaming data is saved. We recommend you use an app-private directory because the data is not encrypted.

```java
KinesisFirehoseRecorder firehoseRecorder = new KinesisFirehoseRecorder(
    context.getCacheDir(),
    Regions.<YOUR-AWS-REGION>,
    AWSMobileClient.getInstance());
```

Configure Kinesis: You can configure `KinesisRecorder` or `KinesisFirehoseRecorder` through their properties.

You can configure the maximum allowed storage via the `withMaxStorageSize()` method of `KinesisRecorderConfig`. You can retrieve the same information by getting the `KinesisRecorderConfig` object for the recorder and calling `getMaxStorageSize()`:

```java
KinesisRecorderConfig kinesisRecorderConfig = recorder.getKinesisRecorderConfig();
Long maxStorageSize = kinesisRecorderConfig.getMaxStorageSize();
// Do something with maxStorageSize
```

To check the number of bytes currently stored in the directory passed in to the `KinesisRecorder` constructor, call `getDiskBytesUsed()`:

```java
Long bytesUsed = recorder.getDiskBytesUsed();
// Do something with bytesUsed
```

To see how much space the `KinesisRecorder` client is allowed to use, you can call `getDiskByteLimit()`:

```java
Long byteLimit = recorder.getDiskByteLimit();
// Do something with byteLimit
```

With `KinesisRecorder` created and configured, you can use `saveRecord()` to save records and then send them in a batch.

```java
recorder.saveRecord("MyData".getBytes(), "MyStreamName");
recorder.submitAllRecords();
```

For the `saveRecord()` request above to work, you would have to have created a stream named `MyStreamName`. You can create new streams in the [Amazon Kinesis console](https://console.aws.amazon.com/kinesis). If `submitAllRecords()` is called while the app is online, requests will be sent and removed from the disk.
If `submitAllRecords()` is called while the app is offline, requests will be kept on disk until `submitAllRecords()` is called while online. This applies even if you lose your internet connection midway through a submit. So if you save ten requests, call `submitAllRecords()`, send five, and then lose the internet connection, you have five requests left on disk. These remaining five will be sent the next time `submitAllRecords()` is invoked online.

Here is a similar snippet for Amazon Kinesis Data Firehose:

```java
// Start to save data, either a String or a byte array
firehoseRecorder.saveRecord("Hello world!\n");
firehoseRecorder.saveRecord("Streaming data to Amazon S3 via Amazon Kinesis Data Firehose is easy.\n");

// Send previously saved data to Amazon Kinesis Data Firehose
// Note: submitAllRecords() makes network calls, so wrap it in an AsyncTask.
new AsyncTask<Void, Void, Void>() {
    @Override
    protected Void doInBackground(Void... v) {
        try {
            firehoseRecorder.submitAllRecords();
        } catch (AmazonClientException ace) {
            // handle error
        }
        return null;
    }
}.execute();
```

To learn more about working with Kinesis Data Streams, see the [Amazon Kinesis Data Streams resources](http://aws.amazon.com/kinesis/developer-resources/).

To learn more about the Kinesis Data Streams classes, see the [class reference for KinesisRecorder](https://aws-amplify.github.io/aws-sdk-android/docs/reference/com/amazonaws/mobileconnectors/kinesis/kinesisrecorder/KinesisRecorder.html).

To learn more about the Kinesis Data Firehose classes, see the [class reference for KinesisFirehoseRecorder](https://aws-amplify.github.io/aws-sdk-android/docs/reference/com/amazonaws/mobileconnectors/kinesis/kinesisrecorder/KinesisFirehoseRecorder.html).
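The save-then-flush semantics described above (save ten, send five, lose the connection, five remain on disk) can be modeled language-agnostically. The following is a toy Python sketch of the queue behavior only; `DiskBuffer` and `FlakyNetwork` are hypothetical stand-ins, not classes from the AWS SDK:

```python
from collections import deque

class FlakyNetwork:
    """Hypothetical connectivity stand-in: sends succeed until the link drops."""
    def __init__(self, sends_before_drop):
        self.remaining = sends_before_drop

    def send(self, record):
        if self.remaining == 0:
            return False  # connection lost mid-submit
        self.remaining -= 1
        return True

class DiskBuffer:
    """Toy model of the recorder's on-disk queue semantics."""
    def __init__(self):
        self._queue = deque()

    def save_record(self, data, stream_name):
        # saveRecord(): append to durable storage, no network traffic yet
        self._queue.append((stream_name, data))

    def submit_all_records(self, network):
        # submitAllRecords(): drain the queue in order; successfully sent
        # records are removed, and on failure the rest stay queued for the
        # next submit while online.
        while self._queue:
            if not network.send(self._queue[0]):
                break
            self._queue.popleft()

    def pending(self):
        return len(self._queue)
```

Exercising the example from the text: saving ten records and submitting over a link that drops after five sends leaves five records queued; a later submit while online drains them.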
Retrieval of blade implants with piezosurgery: two clinical cases. In this work, an ultrasound device was used to perform an ostectomy for the removal of blade implants in order to save as much bone tissue as possible, so that root form implants might later be inserted. Two patients underwent surgery for the removal of two blade implants (one maxillary, the other mandibular) that were no longer functional. The peri-implant ostectomy was carried out with a piezoelectric surgery device. The instrument was shown to be effective and precise during ostectomy, providing an extremely thin cutting line. During the course of the operation and at controls after 7 and 30 days, patients did not show any relevant complications, and both still had sufficient alveolar bone to be treated with root form implants. The piezosurgery device proved to be an effective instrument in interventions requiring a significant saving of bone tissue, extreme precision in cutting, and respect of soft tissues.
Sun aims powerful flares at Earth

Top: Two large sunspot groups are visible in this image of the sun obtained by the Solar and Heliospheric Observatory (SOHO).
Below: This SOHO image shows a large filament eruption that occurred February 26. The disk in the center is a mask that blocks out direct sunlight.

By Richard Stenger
CNN Interactive Staff Writer

March 1, 2000
Web posted at: 3:24 p.m. EST (2024 GMT)

(CNN) -- The sun should place the Earth squarely in its sights this week as it aims its solar ray gun. Astronomers tell terrestrial dwellers not to sweat it too much, despite the fact that solar activity is approaching an 11-year peak. Two large sunspots moving across the surface of the sun are expected to directly face the Earth soon for up to several days, according to solar scientists. Such sunspots often herald powerful coronal mass ejections and solar flares, space storms that can disrupt weather and electrical systems on Earth. Solar flares are the largest explosions in the solar system. A typical one can release the energy equivalent of millions of 100-megaton hydrogen bombs exploding at once. Highly charged particles from large flares can overload power grids and damage satellites. In 1989, one space storm knocked out a major power plant in Canada, leaving millions without power for hours. Solar activity generally waxes and wanes during an 11-year cycle and astronomers expect it to peak either this or next year. But so far, the sun has produced only a "disappointing" level of fireworks, said Joseph Gurman, a solar physicist who analyzes data from the Solar and Heliospheric Observatory. Coronal mass ejections are much more likely to produce effects, Gurman said. Like flares, they send streams of highly charged particles, but they also can emit a billion tons of plasma, or ionized gas. Fortunately the Earth's magnetosphere usually bears the brunt of plasma particles. "If we were exposed to them, we literally would be fried," Gurman said.
Facebook has hired the Patriot Act's co-author as a general counsel - Jerry2
https://boingboing.net/2019/04/22/mass-surveillance-r-us.html

====== javagram

“Jennifer Newstead helped craft the Patriot Act, a cowardly work of treasonous legislation foisted on the American people in the wake of the 9/11 attacks;”

Source seems a little biased. Treasonous? That’s gotta require a lot of contortion around the definition of treason. Patriot Act provisions have been repeatedly reauthorized by the democratically elected legislature since it was originally passed. This isn’t a case of foisting anything upon the people; the people are perfectly happy to vote in supporters of the Patriot Act.

[https://en.wikipedia.org/wiki/Patriot_Act#Reauthorizations](https://en.wikipedia.org/wiki/Patriot_Act#Reauthorizations)

~~~ thundergolfer

It's well known that many members of congress passed through the act _without having read it_. Given the enormity of the act's effects on the country, this is quite a problematic thing. I don't think it was democracy that saw that bill through. It was crisis politics. Democracy requires a well-informed public, and capable representatives. With the USA PATRIOT act there was neither.

~~~ foxyv

With the current state of campaign finance, congress is essentially two corporations with congressmen/women as employees. If you don't vote the party line or you don't secure funding for the party you get defunded on your next election. Surprising they don't bother to read the bills they are told to pass.

------ canada_dry

A perfect fit really. This guy figures it's ok to allow personal records like telephone, e-mail, financial, and business records to be surreptitiously captured without full due process/transparency. Facebook would love to push the (no-)privacy envelope much further: a complete data free-for-all for their commercial gain.

------ Jerry2

It's unfortunate that mods decided to sink this story. Any explanation as to why?

------ tuxxy

What exactly... 
do they think is going to happen when news outlets hear this?

~~~ joshmn

The 30 minute news cycle we've had for the last 3 years of course.

~~~ isoskeles

Yeah, unlike when the Patriot Act passed, and the news media spoke truth to power or whatever, and saved us all from that treasonous law. Apologies for the snark, but it’s been like this for more than 20 years.

~~~ thundergolfer

To add to your comment: _Manufacturing Consent_ came out in 1988, 31 years ago. That book masterfully built the case that this stuff has been going on for well over a century, but that it really kicked up in the post-WW2 era with the erosion of labour-class news media. Today six US media companies control 90% of US media, and any hope one has of the internet disarming them dims more than a little at the sight of a P.A.T.R.I.O.T act author crossing over into the arms of a tech giant.
1. Field of the Invention

The present invention relates to a motor drive apparatus which is, for example, used for driving an X-Y table of a monolithic wire bonder or a die bonder serving as one type of IC manufacturing apparatus, and a method of controlling the same.

2. Description of the Related Art

There is known a method of accurately stopping a motor at a target position, as disclosed in Unexamined Japanese Patent Application No. 55-77384/1980. In this prior art, after the motor passes through the target position, an error extreme point is obtained in order to determine a current value to be supplied to the motor to correct the error. Then, a rectangular current is supplied to the motor so as to eliminate the error and stop the motor at the target position. Hereinafter, a background technology of the present invention will be explained. FIG. 10 is a block diagram showing one example of a motor drive apparatus controlling a typical three-phase synchronous motor. FIG. 11 is a detailed view showing a motor 1 of FIG. 10. FIG. 12 is a view showing inductive voltages of the motor 1 of FIG. 10. FIG. 13 is a view showing output signals from an encoder 2 shown in FIG. 10. FIG. 14 is a view showing an operation of a pulse converter 3 shown in FIG. 10. And FIG. 15 is a detailed view showing a magnetic pole detector 4 of FIG. 10. In FIG. 10, a reference numeral 1 represents a three-phase synchronous motor equipped with 9 slots and 6 poles. More specifically, as shown in FIG. 11, this three-phase synchronous motor comprises a stator 5 and a rotor 6. The stator 5 is associated with three coils of U-phase 7, V-phase 8, and W-phase 9 windings. This motor 1 has nine slots 10 disposed on an inside surface of the stator 5 which are spaced at intervals of 40 degrees. These nine slots 10 are wound by the coil windings in the order of U-phase, V-phase, and W-phase repetitively so as to form a star connection.
On the other hand, the rotor 6 has six permanent magnet poles 11 disposed on the outer circumferential surface thereof. An operational principle of the motor 1 will be explained below. The rotor 6 causes a magnetic field corresponding to its rotational position, which interacts with the three, U-phase 7, V-phase 8, and W-phase 9, windings on the stator 5. Therefore, these three windings 7, 8, and 9 generate voltages due to Lorentz's force. Namely, three, U-phase 12, V-phase 13, and W-phase 14, inductive voltages of sine waveform are generated at intervals of 120 degrees as shown in FIG. 12, because the magnetic field at each winding is cyclically increased and decreased in response to the spatial positioning of the permanent magnets 11, which cyclically approach and depart from each winding during one complete revolution of the rotor 6. If sine-wave currents in phase with these inductive voltages of FIG. 12 are supplied to the U-phase 7, V-phase 8, and W-phase 9 windings, respectively, the rotor 6 generates a torque in a clockwise (abbreviated as CW) direction due to Fleming's left-hand rule. The magnitude of the torque generated is proportional to the amplitude of the current supplied. Moreover, if the above currents are multiplied by -1, i.e. delayed 180 degrees in phase, before being supplied to the respective windings, the rotor 6 generates a torque in a counterclockwise (abbreviated as CCW) direction. In FIG. 10, a reference numeral 2 represents an optical encoder having three channels and installed on the rotor shaft of the motor 1. When the motor 1 rotates in the clockwise (CW) direction, the encoder 2 generates an A-phase signal 15 and a B-phase signal 16 having a mutual phase difference of 90 degrees therebetween as shown in FIG. 13, together with a Z-phase pulse signal 17 corresponding to one of the zero-crossing points of the U-phase inductive voltage 12.
If the motor 1 rotates in the counterclockwise (CCW) direction, the phase relationship between the A-phase signal 15 and the B-phase signal 16 is reversed. Therefore, the rotational direction of the motor 1 is easily judged by checking the phase relationship between the A-phase signal 15 and the B-phase signal 16. A reference numeral 3 represents a pulse converter connected to the encoder 2. This pulse converter 3 converts the A-phase and B-phase signals 15 and 16 into a CW pulse signal 18 as shown in FIG. 14 when the motor 1 rotates in the clockwise direction. On the contrary, this pulse converter 3 converts the A-phase and B-phase signals 15 and 16 into a CCW pulse signal 19 as shown in FIG. 14 when the motor 1 rotates in the counterclockwise direction. A reference numeral 4 represents a magnetic pole detector comprising a counter 20, a U-phase current phase command table 21, and a W-phase current phase command table 22. As shown in FIG. 15, the counter 20 receives the signals fed from the pulse converter 3 so as to effect its count-up and count-down operations in response to the CW pulse 18 and the CCW pulse 19, respectively. Furthermore, the counter 20 is connected to the encoder 2 so as to effect its clear operation in response to the Z-phase signal 17. The U-phase current phase command table 21 memorizes the phase of the U-phase inductive voltage 12 with respect to the Z-phase signal 17 of the encoder 2. The W-phase current phase command table 22 memorizes the phase of the W-phase inductive voltage 14 with respect to the Z-phase signal 17. An operation of the magnetic pole detector 4 will be explained below. The counter 20 is cleared at the zero-cross point of the U-phase inductive voltage 12 in response to the Z-phase signal 17 fed from the encoder 2. When the motor 1 rotates, the rotational displacement or shift amount from the above zero-cross point of the U-phase inductive voltage 12 is counted by the counter 20.
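The direction decoding performed by the pulse converter 3 and the up/down counting with Z-phase clearing performed by the counter 20 can be sketched as follows. This is a hypothetical Python illustration, not code from the patent, and it assumes the usual Gray-code ordering of the (A, B) signal pair for clockwise rotation:

```python
def quadrature_step(prev_ab, curr_ab):
    """Decode one transition of the (A, B) pair into +1 (CW pulse),
    -1 (CCW pulse), or 0 (no valid transition).
    Assumed CW Gray-code order: 00 -> 01 -> 11 -> 10 -> 00."""
    cw_order = [(0, 0), (0, 1), (1, 1), (1, 0)]
    i, j = cw_order.index(prev_ab), cw_order.index(curr_ab)
    if j == (i + 1) % 4:
        return +1
    if j == (i - 1) % 4:
        return -1
    return 0

class PoleCounter:
    """Model of counter 20: counts CW pulses up and CCW pulses down,
    and is cleared to 0 by the Z-phase pulse (the designated zero-cross
    point of the U-phase inductive voltage)."""
    def __init__(self):
        self.count = 0

    def on_pulse(self, direction):
        self.count += direction

    def on_z_phase(self):
        self.count = 0
```

The resulting `count` is exactly the shift amount from the zero-cross point that the text describes, which is what makes it usable as the pointer into the phase command tables.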
The counted value becomes a pointer 23 of the U-phase current phase command table 21 for outputting a phase value of the U-phase inductive voltage 12 corresponding to the present rotational position of the motor 1. In the same manner, the counted value of the counter 20 becomes a pointer 23 of the W-phase current phase command table 22 for outputting a phase value of the W-phase inductive voltage 14 corresponding to the present rotational position of the motor 1. The magnetic pole detector 4 is connected to two multipliers 24U, 24W so that the phase values of the U-phase and W-phase inductive voltages 12 and 14 can be multiplied with an output of a speed control calculator 25. The speed control calculator 25 outputs a torque command value, i.e. a current amplitude command value. The multipliers 24U, 24W, therefore, multiply the current amplitude command value with the U-phase and W-phase current phase command values. The resultant two outputs from the respective multipliers 24U, 24W are, then, fed to two D/A converters 26U, 26W so as to generate U-phase and W-phase current commands, respectively. These U-phase and W-phase current commands are, subsequently, fed to current amplifiers 27U, 27W in which drive currents to be supplied to the U-phase winding 7 and the W-phase winding 9 are generated in response to the U-phase and W-phase current commands, respectively. The U-phase winding 7, the V-phase winding 8, and the W-phase winding 9 are connected with each other so as to constitute a star connection; therefore, the sum of the currents flowing through these three-phase windings 7, 8, and 9 becomes 0. A current command for the V-phase winding 8 is, accordingly, identical with -(U-phase current command + W-phase current command). A subtracter 28 is therefore provided to obtain a V-phase current command equal to -(U-phase current command + W-phase current command).
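The commutation arithmetic above can be checked numerically. Given an electrical angle and a torque (current amplitude) command, the three sine currents sit 120 degrees apart, so in the star connection they always sum to zero, which is why the V-phase command can be derived as -(U + W) and why flipping the amplitude's sign shifts all three currents by 180 degrees. This is a hypothetical illustration, not code from the patent:

```python
import math

def phase_currents(theta_deg, amplitude):
    """Three sine-wave current commands, in phase with inductive voltages
    spaced 120 degrees apart; amplitude is the torque command."""
    i_u = amplitude * math.sin(math.radians(theta_deg))
    i_v = amplitude * math.sin(math.radians(theta_deg - 120.0))
    i_w = amplitude * math.sin(math.radians(theta_deg - 240.0))
    return i_u, i_v, i_w
```

Reversing the torque direction is just a sign flip of the amplitude, matching the text's "multiplied by -1, i.e. delayed 180 degrees in phase".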
The V-phase current command thus obtained is, thereafter, fed to another current amplifier 27V in which a drive current to be supplied to the V-phase winding 8 is generated in response to the V-phase current command. A reference numeral 29 represents a speed detector connected to the pulse converter 3. This speed detector 29 detects the speed of the motor 1 by counting the number of pulses generated during a time measured by a timer 38 when the motor 1 rotates at a high speed, and by measuring the interval between successive pulses when the motor 1 rotates at a low speed. Reference numerals 31 and 32 represent a positive-direction position command pulse and a negative-direction position command pulse, respectively, fed from an external device. Reference numerals 33 and 34 represent subtracters. A reference numeral 35 represents a positional deviation reading sampler which is open-or-close controlled at predetermined intervals in response to an output signal from a timer 37. A reference numeral 36 represents a speed deviation reading sampler which is open-or-close controlled at predetermined intervals in response to an output signal from the timer 38. If these samplers 35 and 36 are closed, the speed control calculator 25, the magnetic pole detector 4, the multipliers 24U, 24W, and the D/A converters 26U, 26W are activated to renew the current commands to be supplied to the current amplifiers 27U, 27W. The subtracter 34, constituted by an up-down counter, is counted up in response to the positive-direction position command pulse 31 and is counted down in response to the negative-direction position command pulse 32. The subtracter 34 is further counted down in response to the CW pulse 18 fed from the pulse converter 3 and is counted up in response to the CCW pulse 19. The subtracter 34 calculates a positional deviation through these count-up and count-down operations.
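The two measurement methods used by the speed detector 29 can be illustrated as follows. At high speed, counting pulses in a fixed timer window works well; at low speed too few pulses fall in the window for good resolution, so timing one pulse interval gives a finer estimate. The encoder resolution of 1000 pulses per revolution is an assumed figure for illustration, not a value from the patent:

```python
def speed_from_pulse_count(pulse_count, window_s, pulses_per_rev=1000):
    """High-speed method: pulses counted in a fixed timer window -> rev/s."""
    return pulse_count / (pulses_per_rev * window_s)

def speed_from_pulse_interval(interval_s, pulses_per_rev=1000):
    """Low-speed method: time between successive pulses -> rev/s."""
    return 1.0 / (pulses_per_rev * interval_s)
```

When pulses arrive uniformly, both methods agree; they differ only in resolution at the speed extremes.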
A reference numeral 39 represents a position control calculator which amplifies the positional deviation obtained. The speed control calculator 25 amplifies a value supplied from the speed deviation reading sampler 36 to obtain a torque command, i.e. a current amplitude command. An operation of the above-described motor drive apparatus will be explained below. First of all, the subtracter 34, constituted by an up-down counter, is counted up in response to the positive-direction position command pulse 31 and counted down in response to the negative-direction position command pulse 32, and is further counted down in response to the CW pulse 18 fed from the pulse converter 3 and counted up in response to the CCW pulse 19, in order to obtain the positional deviation. Furthermore, the position control calculator 39 inputs the positional deviation through the positional deviation reading sampler 35, which is open-or-close controlled by the timer 37. The position control calculator 39 amplifies this positional deviation and outputs a speed command so as to reduce the positional deviation. Next, the subtracter 33 subtracts the feedback speed obtained from the speed detector 29 from this speed command to generate a speed deviation. The speed control calculator 25 inputs the speed deviation through the speed deviation reading sampler 36, which is open-or-close controlled by the timer 38. The speed control calculator 25 amplifies this speed deviation and generates a torque command, i.e. a current amplitude command. On the other hand, when the motor 1 rotates in the clockwise (CW) direction, the encoder 2 generates the A-phase signal 15 and the B-phase signal 16 having a mutual phase difference of 90 degrees therebetween as shown in FIG. 13, together with the Z-phase pulse signal 17 corresponding to one of the zero-crossing points of the U-phase inductive voltage 12. The A-phase signal 15 and B-phase signal 16 are, then, inputted into the pulse converter 3.
The A-phase and B-phase signals 15 and 16 are converted into the CW pulse 18 when the motor 1 rotates in the clockwise (CW) direction, and are converted into the CCW pulse 19 when the motor 1 rotates in the counterclockwise (CCW) direction. Next, the CW pulse signal 18 and the CCW pulse signal 19 outputted from the pulse converter 3, and the Z-phase signal 17 outputted from the encoder 2, are supplied to the magnetic pole detector 4. The counter 20 shown in FIG. 15 is counted up by the CW pulse signal 18 and counted down by the CCW pulse signal 19. Furthermore, the counter 20 is cleared to 0 by the Z-phase signal 17 fed from the encoder 2. Namely, an arrival of the designated zero-cross point of the U-phase inductive voltage 12 is known by checking the Z-phase signal 17, and the displacement or shift amount of the motor 1 from the designated zero-cross point of the U-phase inductive voltage 12 is known from the count value of the counter 20. The count value of the counter 20 becomes the pointer 23 of the U-phase current phase command table 21 for outputting the phase value of the U-phase inductive voltage 12 corresponding to the present rotational position of the motor 1. Moreover, the count value of the counter 20 becomes the pointer 23 of the W-phase current phase command table 22 for outputting the phase value of the W-phase inductive voltage 14 corresponding to the present rotational position of the motor 1. In the multipliers 24U, 24W, the phase values of the U-phase and W-phase inductive voltages 12 and 14 are multiplied with the torque command outputted from the speed control calculator 25. Namely, the multipliers 24U, 24W multiply the current amplitude command value with the U-phase and W-phase current phase command values, respectively. The resultant two outputs from the respective multipliers 24U, 24W are, then, fed to two D/A converters 26U, 26W so as to generate U-phase and W-phase current commands, respectively.
These U-phase and W-phase current commands are subsequently fed to the current amplifiers 27U, 27W, in which the drive currents to be supplied to the U-phase winding 7 and the W-phase winding 9 are generated in response to the U-phase and W-phase current commands, respectively. On the other hand, the subtracter 28 obtains the current command for the V-phase winding 8 by calculating a value equal to -(U-phase current command + W-phase current command). The V-phase current command thus obtained is thereafter fed to the current amplifier 27V, in which the drive current to be supplied to the V-phase winding 8 is generated in response to the V-phase current command. If the torque command is a positive value, the motor 1 generates a torque in the clockwise (CW) direction. On the contrary, if the torque command is a negative value, the motor 1 generates a torque in the counterclockwise (CCW) direction, because the multipliers 24U and 24W generate U-phase and W-phase current commands having a 180-degree phase difference with respect to the respective U-phase and W-phase current phase commands. Thus, the speed deviation is decreased, and in accordance with the reduction of the speed deviation, the positional deviation becomes small. FIG. 9(A) shows the sampling interval of the speed deviation reading sampler 36 applied to both the moving and stationary conditions of the motor 1. FIG. 9(B) shows the sampling interval of the positional deviation reading sampler 35 applied to both the moving and stationary conditions of the motor 1. When the motor 1 is in a moving condition, in order to stabilize the motor drive operation of the above-described motor drive apparatus, the speed control must perform three or more control samplings per calculated speed command, as shown in FIG. 9. The reason why three or more samplings are required when the motor 1 is in a moving condition is as follows.
If the speed command sampling interval were identical with the control sampling interval in the speed control operation, the motor 1 would not be able to follow the speed command sufficiently because, even if the speed of the motor 1 were controlled to coincide with the speed command value, the speed command value itself might vary at the next control sampling timing. Thus, the speed of the motor 1 could not be stabilized. Especially, as the position command varies widely when the motor 1 is in a moving condition, the speed command correspondingly shows wide variation. Hence, three or more samplings are required for allowing the motor 1 to follow the speed command. For this reason, the rate of the timer 37 is set to one third or less of that of the timer 38. In accordance with the above motor drive apparatus, the sampling interval of the positional deviation reading sampler 35 is made sufficiently long to stabilize the motor speed control during the moving condition of the motor. However, when the motor 1 is in a stationary condition, the sampling interval of the positional deviation reading sampler 35 is too long to accurately detect a small positional deviation if this small positional deviation varies with a period shorter than that of the positional deviation reading sampler 35. Consequently, there is a problem in that the positioning control cannot be performed accurately and responsively when the motor is in a stationary condition.
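The cascaded position/speed loop described above (the subtracter 34 as an up-down counter, the samplers 35 and 36 gated by the timers 37 and 38, with the position loop sampled at one third the speed-loop rate) can be sketched in simplified form as follows. This is an illustrative model only, not the patented implementation; the gain values and the 3:1 ratio chosen here are assumptions for demonstration.

```python
class PositionSpeedLoop:
    """Cascaded loop: position loop sampled 3x slower than the speed loop."""

    def __init__(self, kp=0.5, kv=2.0, ratio=3):
        self.kp = kp        # position-loop gain (position control calculator 39)
        self.kv = kv        # speed-loop gain (speed control calculator 25)
        self.ratio = ratio  # speed-loop samples per position-loop sample (timer 37 vs 38)
        self.pos_err = 0    # up-down counter (subtracter 34): positional deviation
        self.speed_cmd = 0.0
        self.tick = 0

    def command_pulse(self, direction):
        # direction = +1 for positive-direction pulse 31, -1 for pulse 32
        self.pos_err += direction

    def feedback_pulse(self, direction):
        # direction = +1 for CW pulse 18 (counts down), -1 for CCW pulse 19
        self.pos_err -= direction

    def speed_sample(self, feedback_speed):
        """One speed-loop sampling; returns the torque (current amplitude) command."""
        # Positional deviation sampler 35 opens only every `ratio`-th speed sample,
        # so the speed command is held constant between position-loop updates.
        if self.tick % self.ratio == 0:
            self.speed_cmd = self.kp * self.pos_err
        self.tick += 1
        speed_dev = self.speed_cmd - feedback_speed  # subtracter 33
        return self.kv * speed_dev
```

Holding the speed command over several speed-loop samples is what lets the motor settle onto each command value before it changes, which is the stability argument the passage makes.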
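The table-based commutation stage (counter 20 cleared by the Z-phase pulse, its count used as the pointer 23 into the U- and W-phase current phase command tables 21 and 22, the multipliers 24U/24W, and the subtracter 28 forming the V-phase command) can likewise be sketched. The table length and the 120-degree phase offset are assumptions consistent with a three-phase machine, not values taken from the source.

```python
import math

PPR = 360  # counts per electrical revolution (assumed table length)

# U- and W-phase current phase command tables (tables 21 and 22);
# W is offset from U by 120 electrical degrees (assumed convention).
U_TABLE = [math.sin(2 * math.pi * i / PPR) for i in range(PPR)]
W_TABLE = [math.sin(2 * math.pi * i / PPR + 2 * math.pi / 3) for i in range(PPR)]


class Commutator:
    def __init__(self):
        self.counter = 0  # counter 20

    def cw_pulse(self):
        self.counter = (self.counter + 1) % PPR  # counted up by CW pulse 18

    def ccw_pulse(self):
        self.counter = (self.counter - 1) % PPR  # counted down by CCW pulse 19

    def z_pulse(self):
        self.counter = 0  # cleared at the U-phase zero-cross (Z-phase signal 17)

    def current_commands(self, torque_cmd):
        """Multipliers 24U/24W: amplitude command x phase command value."""
        i_u = torque_cmd * U_TABLE[self.counter]
        i_w = torque_cmd * W_TABLE[self.counter]
        i_v = -(i_u + i_w)  # subtracter 28: the three winding currents sum to zero
        return i_u, i_v, i_w
```

Note that negating the torque command flips the sign of every phase command, which is the 180-degree phase reversal the text uses to explain torque in the CCW direction.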
Allele-specific wild-type blocker quantitative PCR for highly sensitive detection of the rare JAK2 p.V617F point mutation in primary myelofibrosis as an appropriate tool for monitoring molecular remission following therapy. Screening for the JAK2 V617F point mutation is becoming increasingly important in the monitoring of JAK2-positive MPN following stem cell transplantation. In an attempt to achieve the required high sensitivity (1:10^5), specificity and robustness, we created an approach applicable to bone marrow biopsies in which we combined the principle of wild-type blocker PCR with allele-specific qPCR. The significance of the assay was demonstrated on a retrospective series of sequential bone marrow biopsies, in which the diagnosis of molecular relapse preceded the diagnosis of clinical relapse by far. This method offers the urgently needed tool for a systematic molecular analysis of sequential biopsies in the course of stem cell transplantation, to develop guidelines for the management of these patients.
///
/// Copyright (c) 2016 Dropbox, Inc. All rights reserved.
///
/// Auto-generated by Stone, do not modify.
///

#import <Foundation/Foundation.h>

#import "DBSerializableProtocol.h"

@class DBTEAMPOLICIESSharedFolderJoinPolicy;

NS_ASSUME_NONNULL_BEGIN

#pragma mark - API Object

///
/// The `SharedFolderJoinPolicy` union.
///
/// Policy governing which shared folders a team member can join.
///
/// This class implements the `DBSerializable` protocol (serialize and
/// deserialize instance methods), which is required for all Obj-C SDK API route
/// objects.
///
@interface DBTEAMPOLICIESSharedFolderJoinPolicy : NSObject <DBSerializable, NSCopying>

#pragma mark - Instance fields

/// The `DBTEAMPOLICIESSharedFolderJoinPolicyTag` enum type represents the
/// possible tag states with which the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// union can exist.
typedef NS_CLOSED_ENUM(NSInteger, DBTEAMPOLICIESSharedFolderJoinPolicyTag){
    /// Team members can only join folders shared by teammates.
    DBTEAMPOLICIESSharedFolderJoinPolicyFromTeamOnly,

    /// Team members can join any shared folder, including those shared by users
    /// outside the team.
    DBTEAMPOLICIESSharedFolderJoinPolicyFromAnyone,

    /// (no description).
    DBTEAMPOLICIESSharedFolderJoinPolicyOther,
};

/// Represents the union's current tag state.
@property (nonatomic, readonly) DBTEAMPOLICIESSharedFolderJoinPolicyTag tag;

#pragma mark - Constructors

///
/// Initializes union class with tag state of "from_team_only".
///
/// Description of the "from_team_only" tag state: Team members can only join
/// folders shared by teammates.
///
/// @return An initialized instance.
///
- (instancetype)initWithFromTeamOnly;

///
/// Initializes union class with tag state of "from_anyone".
///
/// Description of the "from_anyone" tag state: Team members can join any shared
/// folder, including those shared by users outside the team.
///
/// @return An initialized instance.
///
- (instancetype)initWithFromAnyone;

///
/// Initializes union class with tag state of "other".
///
/// @return An initialized instance.
///
- (instancetype)initWithOther;

- (instancetype)init NS_UNAVAILABLE;

#pragma mark - Tag state methods

///
/// Retrieves whether the union's current tag state has value "from_team_only".
///
/// @return Whether the union's current tag state has value "from_team_only".
///
- (BOOL)isFromTeamOnly;

///
/// Retrieves whether the union's current tag state has value "from_anyone".
///
/// @return Whether the union's current tag state has value "from_anyone".
///
- (BOOL)isFromAnyone;

///
/// Retrieves whether the union's current tag state has value "other".
///
/// @return Whether the union's current tag state has value "other".
///
- (BOOL)isOther;

///
/// Retrieves string value of union's current tag state.
///
/// @return A human-readable string representing the union's current tag state.
///
- (NSString *)tagName;

@end

#pragma mark - Serializer Object

///
/// The serialization class for the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// union.
///
@interface DBTEAMPOLICIESSharedFolderJoinPolicySerializer : NSObject

///
/// Serializes `DBTEAMPOLICIESSharedFolderJoinPolicy` instances.
///
/// @param instance An instance of the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// API object.
///
/// @return A json-compatible dictionary representation of the
/// `DBTEAMPOLICIESSharedFolderJoinPolicy` API object.
///
+ (nullable NSDictionary<NSString *, id> *)serialize:(DBTEAMPOLICIESSharedFolderJoinPolicy *)instance;

///
/// Deserializes `DBTEAMPOLICIESSharedFolderJoinPolicy` instances.
///
/// @param dict A json-compatible dictionary representation of the
/// `DBTEAMPOLICIESSharedFolderJoinPolicy` API object.
///
/// @return An instantiation of the `DBTEAMPOLICIESSharedFolderJoinPolicy`
/// object.
///
+ (DBTEAMPOLICIESSharedFolderJoinPolicy *)deserialize:(NSDictionary<NSString *, id> *)dict;

@end

NS_ASSUME_NONNULL_END
477 F.2d 598
Zukowski v. State Bar Grievance Board, State Bar of Michigan
No. 73-1072
United States Court of Appeals, Sixth Circuit
April 18, 1973
E.D. Mich.
AFFIRMED
Primary care for women: comprehensive assessment and management of common mental health problems. This article emphasizes the importance of the role of the certified nurse-midwife (CNM) in the primary care assessment of, and appropriate referral for, women with mental health problems, especially in cases of psychiatric emergencies. Essential aspects of the assessment, diagnosis, and treatment of the more common psychiatric problems are included, and the treatment modalities that are considered when referral results in psychiatric intervention are reviewed. In addition, the overall prevalence of mental health problems in women, the frequency with which primary care providers may encounter mental health problems, and issues of mental health care utilization are discussed.
When Rudy Gay left the game with a left knee injury late in the first quarter, memories of the Sacramento Kings’ (16-22) recent poor play minus a star resurfaced. The thought came to fruition as DeMarcus Cousins joined him on the sidelines in the waning seconds of regulation, and the short-handed Kings fell to the visiting Dallas Mavericks (27-12), 108-104. The Kings are currently 2-2 on their six-game home stand and return to action on Friday in a contest against the Miami Heat. Join Cowbell Kingdom’s James Ham as he recaps the action from the floor of Sleep Train Arena. Golden State Warriors Projected Starters (31-22) What to watch 1. Can the Kings win without DeMarcus Cousins? The Kings are 0-7 without their starting center and it looks like Cousins will miss another game on Wednesday with a strained left hip flexor. Andrew Bogut is questionable for the Warriors with left shoulder inflammation, as is reserve Jermaine O’Neal (sore back). This game might turn into a track meet, which doesn’t bode well for Sacramento. 2. Can the Kings defend the 3-point line? Sacramento ranks 28th in the league against the long ball. The Warriors’ starting backcourt of Curry and Thompson has already shot close to 800 3-pointers on the season. If the Kings don’t stay with Golden State’s shooters, they have very little chance of pulling off the upset. 3. How do the Kings players handle the trade rumors? The trade deadline is 12 p.m. PST on Thursday and the rumors are swirling. Do the Kings players crumble under the pressure or do they come out swinging in what might be their last game in Sacramento? According to an NBA source, Sacramento Kings point guard Isaiah Thomas underwent an MRI earlier Tuesday on his left wrist. Counter to other media reports, the results of the tests were negative and Thomas is not expected to miss any time with the injury. Since taking over the starting position 35 games ago, Thomas is averaging 21.5 points, 6.9 assists and 1.3 steals per game in 37.5 minutes.
But rumors that he was having some discomfort in his wrist began a few weeks back. Recently, his shooting numbers have taken a dramatic dip, beginning in January, when he shot just 41.2 percent from the field and 32.7 percent from long range. Thomas’s overall field goal percentage has bounced back in the month of February, but his 3-point percentage for the seven games this month is 24.1 percent. Thomas and rookie guard Ben McLemore were the subject of a trade rumor on Monday, but coach Michael Malone and general manager Pete D’Alessandro refuted the reports following practice on Tuesday afternoon. “The report that was, I think on Yahoo!, about our offer to Boston was so erroneous and I don’t know where it came from,” Malone told reporters on Tuesday. “We dispel the rumors that are out there that we know are not true, but at the same time, this is a business and you have no idea what can happen up until trade deadline. I think all of our players realize that.” With injuries and possible trade rumors swirling, it should be a wild couple of days in Sacramento. DeMarcus Cousins Injury Update Thomas wasn’t the only Kings player to undergo an MRI today. For the second straight day, center DeMarcus Cousins made a trip to the doctor’s office for testing. Results of the first MRI were inconclusive, but a second test confirmed the Kings medical staff’s earlier diagnosis of a strained left hip flexor. Cousins has been unable to participate in practice since returning from the All-Star break. He is listed as day-to-day, but considered doubtful for Wednesday’s match-up against the Golden State Warriors. Hamady Ndiaye out of Rutgers and DeQuan Jones out of Florida are the only late additions. Ndiaye was in camp last season with Sacramento and left a solid impression. After being waived by the Kings, the 26-year-old center spent last season playing for the Tianjin Ronggang Golden Lions of the Chinese Basketball Association.
Jones played in 63 games last season with the Orlando Magic, including 17 starts. He averaged 3.7 points per game in a little under 13 minutes a game. Last season it was high ropes courses in Colorado Springs, CO. This year, the Sacramento Kings open training camp away from home again, but instead of the Team USA practice facility in Colorado, it will be on the sandy beaches of Santa Barbara, CA. Camp will run from Oct. 1-6 at the Pavilion Gym on the University of California, Santa Barbara campus. The team will head back to Northern California for their preseason opener on the road against the Golden State Warriors on Oct. 7, before heading to Las Vegas to take on the Lakers on Oct. 10. After the initial week away, the Kings will continue camp in Sacramento at the team’s practice facility in Natomas. Cowbell Kingdom has grown exponentially since its founding in 2009 and we want to make sure we know our audience. The information you provide in this brief survey will be used to help us better serve you. For your participation, you will be automatically entered into a contest to win a copy of the 2013-14 Sacramento Kings Dancers calendar and a “Blackout” t-shirt commemorating last season’s first home game. But there’s probably no other player more overlooked and underrated on this season’s roster than the fourth-year guard. Just look no further than ESPN.com’s annual NBA Rank, which appraises the value of the league’s top 500 players. The 25-year-old guard moved up just five spots (no. 136 in 2011 to no. 131 in 2012) in this year’s rankings. These were the five players ranked just ahead of Thornton in the 2012 forecast: Such is life on a bad team with little to no national exposure. However, those who follow the Kings closely know just how valuable Thornton is, especially his competition. “He’s become an outstanding scorer in this league,” said Dallas Mavericks guard Darren Collison back in January of his former New Orleans Hornets teammate.
“He’s definitely made a niche in this league as far as (being) a big-time scorer. “He can shoot the ball extremely well and he can do a lot of different things off the pick and roll,” added Collison. “And he’s exceptionally quick too.” In their rookie year, Collison and Thornton formed an explosive and exciting young backcourt for the Hornets. Though they’ve since gone their separate ways, the two remain close. Thornton worked out last offseason with Collison in Los Angeles during the lockout. The fourth-year guard out of UCLA thinks Sacramento is a good fit for his old teammate. He believes Thornton will only continue to improve with the Kings’ green nucleus. “This is a young team that’s going to be good in the near future,” Collison said. “He has a starting role here, so anytime you have a starting role, it’s always a good fit. And he’s one of their best scorers, too.” Averaging 18.7 points per game, Thornton led the Kings in scoring last season and usually found himself as their go-to guy in clutch situations. The next step for Thornton, according to another former teammate, is becoming an accomplished defender. “He’s always been a capable scorer,” said Indiana Pacers big man David West. “Key for him has always been for him to play as hard defensively as he does offensively.” As explosive as he is with the ball, Thornton could stand to see some improvement on the defensive end. The Louisiana native finished in the bottom three among his 15 teammates in defensive rating. “We would challenge him to do the same thing on the defensive end,” said West of his days with Thornton in New Orleans. “Make him more of a complete ball player.” However, like Collison, West thinks Thornton will continue to find success in the league. “He’s a strong-minded, tough-minded kid,” West said. “I knew that once he got an opportunity to just get in a system that worked for him and bring out his best skills, he’d do well.” The Kings may not belong to Marcus Thornton.
But his importance to their success can’t be overstated. Twenty-five years ago today, Sacramento Kings head coach Keith Smart hit a shot that changed his life forever. No matter where I go, people talk about it. Once they recognize me or see a nametag on my bag or something like that, they start talking about “The Shot”. So it’s a great moment and I’m glad it went in, but it wasn’t just something for me. We just had our 25-year championship reunion. And we all got together and it wasn’t so much what we all did in the tournament and our careers. It was a friendship and a relationship that we have now that that moment brings us all together. Diehard Sacramento Kings fan Kevin Fippin wanted to propose to his long-time girlfriend Lydia Nicolaisen. So before he popped the question on New Year’s Eve, he recruited the services of a Sacramento Kings fan favorite.
Three-dimensional structures of H-ras p21 mutants: molecular basis for their inability to function as signal switch molecules. The X-ray structures of the guanine nucleotide binding domains (amino acids 1-166) of five mutants of the H-ras oncogene product p21 were determined. The mutations described are Gly-12 → Arg, Gly-12 → Val, Gln-61 → His, and Gln-61 → Leu, which are all oncogenic, and the effector region mutant Asp-38 → Glu. The resolutions of the crystal structures range from 2.0 to 2.6 Å. Cellular and mutant p21 proteins are almost identical; the only significant differences are seen in loop L4 and in the vicinity of the gamma-phosphate. For the Gly-12 mutants, the larger side chains interfere with GTP binding and/or hydrolysis. Gln-61 in cellular p21 adopts a conformation in which it is able to catalyze GTP hydrolysis. This conformation has not been found for the mutants of Gln-61. Furthermore, Leu-61 cannot activate the nucleophilic water because of the chemical nature of its side chain. The D38E mutant preserves its ability to bind GAP.
Autosomal dominant polycystic kidney disease (ADPKD) is a common monoallelic disorder associated with progressive cyst development, resulting in end-stage renal disease (ESRD) in 50% of patients by age 60. However, there is considerable phenotypic variability, extending from in utero onset to patients with adequate renal function into old age. Autosomal dominant polycystic liver disease (ADPLD), as traditionally defined, results in PLD with minimal renal cysts. Classically, two ADPKD genes, PKD1 and PKD2, encoding PC1 and PC2, and two ADPLD genes, PRKCSH and SEC63, have been recognized, but in the past few years greater genetic heterogeneity has been described, with nine genes now implicated overall. Recent data also indicate an overlap in etiology and pathogenesis between ADPKD and ADPLD, with the efficient biogenesis and localization of the PC complex central to both disorders. During the last funding period we identified a novel gene, GANAB, which is associated with both disorders and whose encoded protein, GIIα, is involved in the maturation and trafficking of PC1. In this proposal we will take advantage of advances in next-generation sequencing (NGS) methodologies, and of the large populations of ADPKD and ADPLD patients that have been assembled and screened for the classic genes, to hunt for novel genes for these disorders (Aim 1). The phenotype associated with these genes will be characterized (Aim 3) along with their mechanism of action (Aim 2). NGS methods will be perfected to screen the segmentally duplicated locus, PKD1, and to identify missed mutations at the known loci, including those present in just some cells due to mosaicism (Aim 1). The significance of many PKD1 nontruncating variants has been difficult to evaluate (they are classed as variants of unknown significance; VUS), but recent evidence that some are incompletely penetrant alleles partially explains phenotypic variability in PKD1 populations.
In Aim 2, improved in silico predictions, in combination with machine learning, will improve the understanding of the pathogenicity and penetrance of VUS. A cellular assay of the biogenesis and trafficking of the PC complex will also be employed to quantify the penetrance of VUS. The mechanism of pathogenesis will be explored in animal models with ultralow-penetrant (ULP) Pkd1 or Pkd2 alleles. Employing the large, clinically, imaging-, and genetically well-defined populations, phenotypic groupings of patients will be defined and then compared to the genic and PKD1 allelic groups (Aim 3). This iterative process will allow the Variant Score (VS) associated with each PKD1 VUS to be refined. In a separate population, the revised VS, alone and in combination with clinical, functional, and imaging data, will be employed to generate a comprehensive, predictive algorithm for ADPKD (Aim 3). Disease modifiers toward severe disease, via biallelic ADPKD and due to alleles at other loci, will also be identified and characterized in the cellular assay and in vivo in combination with the hypomorphic Pkd1 RC model. The final aim will exploit the newly identified information that some PKD1 and PKD2 VUS are rescuable folding mutations that, in a maturation-fostering environment, can traffic and function appropriately. A screening scheme based on the level of cell-surface PC1 will be improved, and new chaperone drugs specific for the PC complex will be sought in collaboration with Sanford Burnham Prebys. A second mutation group that will be explored therapeutically is nonsense mutations. A cellular assay for readthrough efficiency is being developed and will be used for screening. Identified chaperone or readthrough drugs will be tested in available mouse models.
Overall, this proposal will better explain the etiology and the genetic causes of phenotypic variability in ADPKD/ADPLD, develop better prognostic tools for selecting individual patients for the treatments that are now becoming available, and explore allele-based treatments for ADPKD.
Michele Orecchia

Michele Orecchia (26 December 1903 – 11 December 1981) was an Italian professional road bicycle racer, who won one stage in the 1932 Tour de France. He also competed in the individual and team road race events at the 1928 Summer Olympics.

Major results
1927 – Giro del Sestriere
1929 – Giro d'Italia: 9th place, overall classification
1932 – Tour de France: winner, stage 8

References

External links
Official Tour de France results for Michele Orecchia

Category:1903 births
Category:1981 deaths
Category:Italian male cyclists
Category:Italian Tour de France stage winners
Category:Sportspeople from Marseille
Category:Olympic cyclists of Italy
Category:Cyclists at the 1928 Summer Olympics
Category:Tour de France cyclists
Category:French male cyclists
A VISUALLY STUNNING architectural biography of Minnesota’s most influential architect of the twentieth century. Architect, artist, furniture designer, and educator, Ralph Rapson has played a leading role in the development and practice of modern architecture and design, both nationally and internationally. “Ralph Rapson is now a legend in the history of modern architecture.” —Cesar Pelli, FAIA REVIEW: Barbara Flanagan/The New York Times Ralph Rapson is best known as the designer of the Guthrie, Minneapolis’s landmark of theater design, but because he worked, taught and competed with most of the world’s first modernists (Wright, Mies, Corbusier, Saarinen), his elder son and biographer calls him “the Forrest Gump of architecture.” Ralph Rapson: Sixty Years of Modern Design, by Rip Rapson, Jane King Hession and Bruce N. Wright, documents the architect’s vast career and uncanny associations. Rapson believed design should reflect the moment in furniture, houses, and cities, but his take on modernism was never pompous. He generated endless ideas, still fresh, along with vibrant drawings and youthful pranks. (He had his students hoist famous visitors upside down, including the stocky Buckminster Fuller, and footprint the ceiling with their bare soles.) The book shows how one can be talented, influential and happy, all the while remaining internationally obscure. It also tells, discreetly, how one man can achieve all this single-handedly: with his right forearm amputated at birth, Ralph Rapson drew with his left hand.
All Studio Posts The upcoming AES 54th International Conference, focusing on audio forensics, is set to take place June 12-14, 2014, at the Holiday Inn Bloomsbury in London. Dedicated to exploring techniques, technologies and advancements in the field of audio forensics, the conference will provide a platform for sharing research related to the forensic application of speech/signal processing, acoustical analyses, audio authentication and the examination of methodologies and best practices. Chairpersons for this conference are Mark Huckvale and Jeff M. Smith. This marks… From the archives of the late, great Recording Engineer/Producer (RE/P) magazine, enjoy this in-depth discussion with engineer/producer Val Garay, conducted by Robert Carr. This article dates back to the October 1983 issue. As a natural extension to his career as a musician during the early Sixties, Val Garay’s love for music led him to pursue the art and science of audio engineering. Starting in 1969, he apprenticed at the Sound Factory, Hollywood, under rock-recording legend Dave Hassinger (Rolling Stones,… Studio Technologies recently became Audinate’s 100th Dante licensee and is embracing the audio-over-Ethernet movement by developing a line of Dante-enabled products. “Studio Technologies prides itself on developing specialized solutions for its customers,” says Studio Technologies president Gordon Kapes. “Our users rely on us to deliver products that will enhance their workflow in both fixed and mobile broadcast applications. Dante has proven its technological excellence, and we are convinced that it is the correct, progressive solution for adding networking technology to… Software company Plugin Alliance has announced the availability of bx_refinement and bx_saturator V2, two new native plug-ins from German software developer Brainworx.
bx_refinement is the brainchild of mastering engineer Gebre Waddell of Stonebridge Mastering, who designed the original prototype as a tool to remove harshness, a problem he was encountering more and more in his work due to the transition to digital and the prevalence of over-compressed mixes. “Harsh recordings are one of the most common problems mixing and mastering… Located outside Dallas, Cool Pony Media is a record label and artist development company that works with various music genres, as well as score-to-picture work. Brothers and co-founders Mark and Mike Stitts recently did an upgrade in part of their studio with help from API, and as a result, the team now uses THE BOX console on a daily basis for writing, tracking, creating stems, and mixing. “We’re amazed,” says Mark Stitts. “We have quite a bit of other API… Article provided by Home Studio Corner. If you’ve been mixing for any length of time, you know how valuable the high-pass filter (HPF) can be. It removes excess low end from your non-bass-heavy tracks, allowing you to clean up the low frequencies and make room for the kick and bass. But then there’s this thing called a low-frequency shelf. What’s that all about? In the picture below you can see both a high-pass filter and a low-frequency shelf. A… Radial Engineering has announced that it has taken on the global sales, marketing and distribution of the Jensen Iso-Max range of products. Iso-Max is a range of isolators that provide ground isolation and noise abatement for audio and video in broadcast, home theater and commercial AV integration. Radial has a long history with Jensen. According to company president Peter Janis: “When Radial was founded in 1992, we started life as a distributor.
One of our first product lines was Jensen.… DPA Microphones has announced the appointment of Direct Imports as its distributor in New Zealand, signaling the company’s continued commitment toward growth and customer service in the country. From its headquarters in Hastings, Hawkes Bay, Direct Imports will carry a full stock of DPA products for live, recording and broadcast applications. “We are delighted to have been appointed the New Zealand distributor for DPA Microphones and honored to have this outstanding brand join our portfolio and complement our current range… Record Factory Music Academy, a music production education facility in downtown Seoul, South Korea, delivers real-world recording experience to students, now aided by the addition of a Solid State Logic AWS 924 hybrid console/controller in its newly built studios. More than 1,000 students have gained an education since Record Factory Music Academy was established. Through hands-on workshops covering everything from MIDI production to in-studio engineering and music video creation, the facility is gaining a reputation for its advanced…
From the mid-1960s until the close of that decade, automobiles became lighter, more compact, and more powerful. Auto manufacturers continued to compete against one another for drag-strip supremacy. As government regulations and safety concerns increased, the muscle car era began to decline rapidly. Many of these ultimate high-performance muscle cars were built to satisfy homologation requirements. Others were built just to have the fastest machine on the road. The Plymouth Hemi 'Cuda is an example of one of the fiercest and most powerful vehicles ever constructed for the roadway. It was derived from the lesser Barracudas, which began in the mid-1960s. It was built atop the 'E' body platform and was restyled in 1970 by John Herlitz, making it longer, wider, and lower. The 426 cubic-inch Hemi V8 was capable of producing an astonishing 425 horsepower. Mated to a four-speed manual 833 transmission, this was the ultimate muscle car of its day. This 1971 Plymouth Hemi 'Cuda Convertible with black paint and orange billboards was offered for sale at the 2006 RM Auction in Monterey, CA, where it was expected to sell between $180,000-$220,000. It came equipped from the factory with power windows, power brakes, power steering, Rally instrument cluster, rim-blow steering wheel, bucket seats, AM/FM cassette radio, and driving lights. It has a Dana '60' rear end and the 426 cu in engine. It is one of just 374 'Cuda Convertibles built in 1971. On auction day bidding reached $165,000, which was not high enough to satisfy the reserve. The vehicle was left unsold. By Daniel Vaughan | Dec 2006 This 'Cuda Convertible was given a show-quality restoration to original specifications and is one of just 374 examples originally produced for the 1971 model year. It is believed to be one of just 87 383-powered convertibles produced for the last year of 'Cuda convertible production in 1971. The 383 cubic-inch V8 has a four-barrel carburetor and is capable of producing 300 horsepower.
There is a TorqueFlite three-speed automatic gearbox and four-wheel hydraulic brakes. The car is finished in Tawny Gold, with a white interior and a white power-operated convertible top. Features include dual chrome-tipped exhaust outlets, floor console, hood pins, power brakes, power steering, Rallye wheels, a 'Slap Stik' shifter and a 'Tuff' steering wheel. In 2010, this 'Cuda Convertible was offered for sale at the Vintage Motor Cars of Meadow Brook presented by RM Auctions. The car was estimated to sell for $60,000 - $70,000. As bidding came to a close, the car had been sold for the sum of $44,000, including buyer's premium. By Daniel Vaughan | Aug 2010

V8 Cuda Convertible
The third-generation Barracuda ran from 1970 through 1974; the previous generations, which began in 1964, were based on the A-body Valiant. It was designed by John E. Herlitz on the 108-inch-wheelbase, unibody E-platform, a shorter and wider version of the existing B-body. This example has the non-Hemi 340 cubic-inch V8 with an automatic transmission, and it is a stock example. 1971 was the only year for four headlamps. Somehow, this model series didn't sell to expectation and production slowed over the years, making the cars quite rare today. An unaltered car is even rarer.

V8 Cuda Hard Top Coupe
The writing was on the wall by 1971 for the muscle car enthusiast. With rising gas prices and skyrocketing insurance rates, the days of the overpowered and often low-priced performance automobile were numbered. For the big three, it seems that the decision was made to go out with a bang, and some of the rarest and most desirable muscle cars ever to come out of the Motor City were produced. Among the hottest is the Hemi 'Cuda, produced for a mere two model years. In 1970, it is believed that Plymouth produced just 696 Hemi 'Cuda hardtops, and for 1971, a mere 118 would leave the line. Wild colors would survive for the 1971 model year, and Chrysler would lead the pack with their Hi-Impact color palette.
Several eye-popping colors were offered, including Sassy Grass Green, as seen on this example, which is one of the rarest offerings. When it comes to American muscle, the Plymouth Hemi 'Cuda is always at the top of the list. And when it comes to rarity and desirability, nothing compares to a 1971 Hemi 'Cuda. No matter what make or model you may prefer, there is no disputing the visual impact of the 426 Street Hemi engine. With the massive valve covers and the huge dual-quad carbs, it certainly takes top honors when it comes to intimidation. To add the outrageous FC7 In-Violet (aka Plum Crazy) paint to the mix is to take things a step beyond. This 1971 Hemi 'Cuda exemplifies what Mopar Performance was all about in the final years of the original muscle car era. With a mere 107 leaving the Hamtramck, Michigan assembly plant with the Hemi engine under the shaker hood, these cars were rare even when new. This car is one of just 48 equipped with the TorqueFlite automatic transmission, and it also features the rare leather interior, elastomeric color-keyed bumpers, power steering and power front disc brakes, a center console, the AM radio with the Dictaphone cassette recorder, tinted glass, dual color-keyed mirrors and more, making it one of the highest-optioned 1971 Hemi 'Cudas in existence. Of course, when new, these cars were flogged not only on the street but at tracks throughout the country, making this example among the most sought after and valuable American muscle cars ever built.

The first series of the Barracuda was produced from 1964 through 1969, distinguished by its A-body construction. From 1970 through 1974 the second series was produced using an E-body construction. In 1964, Plymouth offered the Barracuda as an option of the Valiant model line, meaning it wore both the Valiant and Barracuda emblems. The base offering was a 225 cubic-inch six-cylinder engine that produced 180 horsepower.
An optional Commando 273 cubic-inch eight-cylinder engine was available with a four-barrel carburetor, high-compression heads and revised cams. The vehicle was outfitted with a live rear axle and semi-elliptic springs. Unfortunately, the Barracuda was introduced within only two weeks of the Ford Mustang. The Mustang proved to be the more popular car, outselling the Valiant Barracuda by a ratio of 8 to 1. The interior was given a floor shifter, vinyl semi-bucket seats, and rear seating. The rear seats folded down, allowing ample space for cargo. By 1967, Plymouth had redesigned the Barracuda and added a coupe and convertible to the model line-up. To accommodate larger engines, the engine bay was enlarged. There were multiple engine offerings that ranged in configuration and horsepower ratings. The 225 cubic-inch six-cylinder was the base engine, while the 383 cubic-inch eight-cylinder was the top of the line, producing 280 horsepower. That was impressive, especially considering the horsepower-to-weight ratio. Many chose the 340 cubic-inch eight-cylinder because the 383 and Hemi were reported to make the Barracuda nose-heavy, while the 340 offered optimal handling. In 1968, Plymouth offered a Super Stock 426 Hemi package. The lightweight body and race-tuned Hemi were perfect for the drag racing circuit. Glass was replaced with Lexan, non-essential items were removed, and lightweight seats with aluminum brackets replaced the factory bench; the cars were given a sticker indicating that they were not to be driven on public highways but used for supervised acceleration trials. The result was a car that could run the quarter mile in the ten-second range. For 1969, a limited number of 440 Barracudas were produced, giving the vehicle a zero-to-sixty time of around 5.6 seconds. In 1970 the Barracuda was restyled but shared similarities with the 1967 through 1969 models. The Barracuda was available in convertible and hardtop configurations; the fastback was no longer offered.
Sales were strong in 1970 but declined in the years that followed. The muscle car era was coming to a close due to rising government safety and emission regulations and insurance premiums. Manufacturers were forced to detune their engines. The market segment was slowly shifting from muscle cars to luxury automobiles. 1974 was the final year Plymouth offered the Barracuda. By Daniel Vaughan | Aug 2010
/***********************************************************************
!!!!!! DO NOT MODIFY !!!!!!

GacGen.exe Resource.xml

This file is generated by Workflow compiler
https://github.com/vczh-libraries
***********************************************************************/

#ifndef VCZH_WORKFLOW_COMPILER_GENERATED_DEMOREFLECTION
#define VCZH_WORKFLOW_COMPILER_GENERATED_DEMOREFLECTION

#include "Demo.h"

#ifndef VCZH_DEBUG_NO_REFLECTION
#include "GacUIReflection.h"
#endif

#if defined(_MSC_VER)
#pragma warning(push)
#pragma warning(disable:4250)
#elif defined(__GNUC__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wparentheses-equality"
#elif defined(__clang__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wparentheses-equality"
#endif

/***********************************************************************
Reflection
***********************************************************************/

namespace vl
{
	namespace reflection
	{
		namespace description
		{
#ifndef VCZH_DEBUG_NO_REFLECTION
			DECL_TYPE_INFO(::demo::MainWindow)
			DECL_TYPE_INFO(::demo::MainWindowConstructor)
#endif
			extern bool LoadDemoTypes();
		}
	}
}

#if defined(_MSC_VER)
#pragma warning(pop)
#elif defined(__GNUC__)
#pragma GCC diagnostic pop
#elif defined(__clang__)
#pragma clang diagnostic pop
#endif

#endif
Purdy was chatting to her bezzie mate who works at Colchester Hospital last night, and was impressed to hear that the Hospital wants more people to car share! Her mate, inspired by all the money she knows Purdy is saving, [...]

Loveurcar
The Loveurcar campaign is brought to you by the Colchester Travel Plan Club, Colchester Borough Council Air Quality Team and V102 as part of a Defra-funded project to encourage more sustainable driving for those journeys that have to be made by car.
INTRODUCTION {#s1}
============

Hepatitis B virus (HBV) remains a major global health problem, with an estimated 257 million people worldwide who are chronically infected with HBV ([@B1]). HBV, together with duck hepatitis B virus (DHBV) and several other related animal viruses, belongs to the *Hepadnaviridae* family ([@B2]). The HBV virion is composed of an outer envelope and an inner icosahedral nucleocapsid (NC), which is assembled from 240 copies of core protein (HBc) and packaged with a 3.2-kb partially double-stranded circular DNA genome ([@B3][@B4][@B8]). In addition to DNA-containing virions, a large number of incomplete viral particles, such as hepatitis B surface antigen (HBsAg) particles, empty virions, and naked capsids, can also be released from cells in the process of virus replication ([@B9]). Subviral HBsAg particles are spherical or rodlike and are present in vast excess over virions in sera of chronic hepatitis B (CHB) patients ([@B2]). Empty virions share the same structure as DNA-containing virions but are devoid of nucleic acids ([@B10][@B11][@B14]). Naked capsids, which exit cells via a route different from that of virions ([@B15][@B16][@B17]), have the same structure as NCs but are either empty or filled with viral RNA and immature viral DNA ([@B7], [@B11], [@B18][@B19][@B20]). In the NC, pregenomic RNA (pgRNA) undergoes reverse transcription into minus-strand DNA, followed by plus-strand DNA synthesis ([@B2], [@B21][@B22][@B24]). Intracellular NCs can be packaged with viral nucleic acids at all levels of maturation, including pgRNA, nascent minus-strand DNA, minus-strand DNA-RNA hybrids, and relaxed circular DNA (RC DNA) or double-stranded linear DNA (DSL DNA) ([@B5], [@B7]). Only the NCs with relatively mature viral DNA (RC or DSL DNA) are enveloped and secreted as virions. HBV replicating cells can release into the culture supernatant both empty core particles assembled from HBc proteins and NCs that contain various species of replicative intermediate nucleic acids.
However, while free naked capsids could be readily detected *in vitro* ([@B7], [@B11], [@B18][@B19][@B20]), they are hardly found in the blood of HBV-infected patients ([@B17], [@B25], [@B26]). Although extracellular HBV RNA has been detected both in *in vitro* cell culture systems and in clinical serum samples, its origin and composition remain controversial. It was proposed that extracellular HBV RNA represents pgRNA localized in virions ([@B27]). However, HBV spliced RNA and HBx RNA were also detected in culture supernatant of HBV stably replicating cells as well as in sera of CHB patients ([@B28], [@B29]). In addition, extracellular HBV RNA was also suggested to originate from damaged liver cells ([@B30]), naked capsids, or exosomes ([@B11], [@B29]). Hence, these extracellular RNA molecules have never been conclusively characterized. Here, we demonstrate that extracellular HBV RNAs are heterogeneous in length, ranging from full-length pgRNA (3.5 kilonucleotides \[knt\]) to RNA fragments of merely several hundred nucleotides. These RNA molecules represent 3′ receding pgRNA fragments that have not been completely reverse transcribed into DNA and pgRNA fragments hydrolyzed by the RNase H domain of the polymerase in the process of viral replication. More importantly, extracellular HBV RNAs are localized in naked capsids and in virions in culture supernatants of HBV replicating cells and also circulate as capsid-antibody complexes (CACs) and virions in the blood of hepatitis B patients.

RESULTS {#s2}
=======

Extracellular HBV RNAs are heterogeneous in length and predominantly integral to naked capsids instead of virions in HepAD38 cell culture supernatant. {#s2.1}
------------------------------------------------------------------------------------------------------------------------------------------------------

To ascertain the origin of extracellular HBV RNA, we first examined viral particles prepared from culture medium of an *in vitro* HBV stably transduced cell line.
A human hepatoma HepAD38 cell line was used in this study, as it sustains vigorous HBV replication under the control of a tetracycline-repressible cytomegalovirus (CMV) promoter ([@B31]). Total viral particles were concentrated and centrifuged over a 10% to 60% (wt/wt) sucrose gradient. Most of the subviral HBsAg particles, virions, and empty virions were detected in fractions 9 to 14 ([Fig. 1A](#F1){ref-type="fig"}, upper and middle). Naked capsids, detected only by anti-HBcAg and not by anti-HBsAg antibodies, settled in fractions 5 to 8 ([Fig. 1A](#F1){ref-type="fig"}, middle and lower). The majority of viral nucleic acids were detected in fractions 4 to 11 ([Fig. 1B](#F1){ref-type="fig"}, upper), which coincided with the fractions containing virions (fractions 9 to 11), naked capsids (fractions 4 to 7), and the mixture of these particles (fraction 8). Consistent with previous observations, HBV virions are packed with mature viral DNA (RC or DSL DNA), while naked capsids contain both immature single-stranded DNA (SS DNA) and mature viral DNA ([Fig. 1B](#F1){ref-type="fig"}, upper). Moreover, Northern blot results showed that most of the HBV RNA was detected in the naked capsids ([Fig. 1B](#F1){ref-type="fig"}, lower, fractions 4 to 7), whereas only a very small amount was associated with virions ([Fig. 1B](#F1){ref-type="fig"}, lower, fractions 9 to 11). HBV RNA detected in naked capsids ranged from the full length of pgRNA down to a few hundred nucleotides (shorter than the HBx mRNA \[0.7 knt\]). Moreover, RNA molecules within virions were much shorter than those within naked capsids. We excluded the possibility of artifacts generated by the SDS-proteinase K extraction method, as a similar RNA blot pattern was obtained using TRIzol reagent to extract both intracellular nucleocapsid-associated and extracellular HBV RNA (not shown).
Furthermore, quantification of viral RNA extracted by either the SDS-proteinase K method or TRIzol reagent produced very similar copy numbers, although the TRIzol reagent is known to preferentially extract RNA over DNA (not shown). Moreover, the RNA signal detected by Northern blotting could not be attributed to DNA fragments generated by DNase I treatment, which would reduce DNA to below the detection limit of the hybridization method (not shown). Furthermore, the RNA signal could be completely removed by an additional RNase A treatment (not shown).

![Sucrose gradient separation and analysis of viral particles from HepAD38 cell culture supernatant. (A) Distribution of hepatitis B viral particle-associated antigens and DNA/RNA in the sucrose gradient. Viral particles prepared from HepAD38 cell culture supernatant (via PEG 8000 precipitation) were layered over a 10% to 60% (wt/wt) sucrose gradient for ultracentrifugation separation. Fractions were collected from top to bottom, and the HBsAg level was analyzed by enzyme-linked immunosorbent assay (ELISA). HBsAg, viral DNA and RNA (quantified from the gray density of bands in panel B) signals, and sucrose density were plotted together. Viral particles were first resolved by native agarose gel electrophoresis, followed by immunoblotting (IB) of HBV envelope and core proteins with anti-HBsAg and anti-HBcAg antibodies. (B) Detection of viral DNA/RNA by Southern or Northern blotting. Total viral nucleic acids were extracted by the SDS-proteinase K method, and viral DNA (extracted from one-tenth of the samples used for Northern blotting) and RNA (treated with DNase I) were detected by Southern and Northern blot analyses with minus- or plus-strand-specific riboprobes, respectively. Symbols of HBsAg particles, empty virions (without nucleic acid), virions (with RC DNA), and naked capsids (empty or with nucleic acids) are depicted on the lower right side of panel A.
Blank, no nucleic acids; two centered and gapped circles, RC DNA; straight line, SS DNA; wavy lines, pgRNA; M, markers (50 pg of 1-kb, 2-kb, and 3.2-kb DNA fragments released from plasmids as the DNA ladder or total RNA extracted from HepAD38 cells as the RNA ladder).](zjv0241840640001){#F1}

To confirm the above-described results and to better separate naked capsids from HBV virions, isopycnic CsCl gradient ultracentrifugation was employed. Naked capsids were observed mainly in fractions 5 to 7, with densities ranging from 1.33 to 1.34 g/cm^3^ ([Fig. 2A](#F2){ref-type="fig"}). The smearing bands of naked capsids were likely caused by the high concentration of CsCl salt, as fractionation of naked capsids in a 1.18-g/cm^3^ CsCl solution produced single bands. Virions, detected by both anti-HBcAg and anti-HBsAg antibodies ([Fig. 2A](#F2){ref-type="fig"}, upper and middle), were packaged with viral DNA ([Fig. 2A](#F2){ref-type="fig"}, lower) and settled in fractions 13 to 15, with densities ranging from 1.23 to 1.25 g/cm^3^. In agreement with the results shown in [Fig. 1](#F1){ref-type="fig"}, HBV virions contained only mature viral DNA (RC or DSL DNA), while naked capsids contained viral DNA replicative intermediates ranging from nascent minus-strand DNA to mature viral DNA ([Fig. 2B](#F2){ref-type="fig"} and [C](#F2){ref-type="fig"}). The lengths of viral minus- and plus-strand DNA in naked capsids and virions were determined by alkaline agarose gel electrophoresis, a condition in which denatured single-stranded DNA molecules migrate according to their lengths. In contrast to the complete minus-strand and mostly complete plus-strand DNA (close to 3.2 knt) in virions, in naked capsids both the minus- and plus-strand DNA can be either complete or incomplete (shorter than 3.2 knt) ([Fig. 2D](#F2){ref-type="fig"} and [E](#F2){ref-type="fig"}).
Moreover, the length of HBV RNAs within naked capsids still ranged from the 3.5 knt of pgRNA to shorter than the 0.7 knt of HBx mRNA. Full-length pgRNA accounted for only 10% of the total RNA signal detected by Northern blotting (quantified from the gray density of bands shown in [Fig. 2F](#F2){ref-type="fig"}). In contrast, HBV RNA species in virions are relatively shorter and barely detectable. In addition, we also determined viral DNA and RNA copy numbers in pooled naked capsids (fractions 3 to 7) and virions (fractions 10 to 21) by quantitative PCR. Quantification results showed that viral DNA in naked capsids and in virions accounted for about 60% and 40%, respectively, of the total viral DNA signal in the HepAD38 cell culture supernatant ([Fig. 2G](#F2){ref-type="fig"}). More importantly, 84% of the HBV RNA was associated with naked capsids, while merely 16% was detected within virions ([Fig. 2G](#F2){ref-type="fig"}). Additionally, the DNA/RNA ratio was 11 in virions and 3 in naked capsids ([Fig. 2H](#F2){ref-type="fig"}), suggesting that naked capsids are relatively enriched in HBV RNA.

![CsCl density gradient separation and analysis of viral particles from HepAD38 cell culture supernatant. (A) Native agarose gel analysis of viral particles. Culture supernatant of HepAD38 cells was concentrated (via ultrafiltration) and fractionated by CsCl density gradient centrifugation (3 ml of 1.18 g/cm^3^ CsCl solution in the upper layer and 1.9 ml of 1.33 g/cm^3^ CsCl solution in the lower layer). Viral particles in each fraction were resolved by native agarose gel electrophoresis, followed by detection of viral antigens with anti-HBsAg and anti-HBcAg antibodies and of viral DNA by hybridization with a minus-strand-specific riboprobe. (B to F) Southern and Northern blot detection of viral nucleic acids. Viral DNAs were separated by electrophoresis through Tris-acetate-EDTA (TAE) or alkaline (ALK) agarose gels for Southern blotting with minus- or plus-strand-specific riboprobes.
Viral RNA was obtained by treating total nucleic acids with DNase I and was separated by formaldehyde-MOPS agarose gel electrophoresis, followed by Northern blotting. (G) Quantification of viral DNA and RNA in naked capsids or virions. Fractions containing naked capsids (fractions 3 to 7) or virions (fractions 10 to 21) were pooled, and viral DNA and RNA were quantified by quantitative PCR. (H) DNA and RNA ratios in naked capsids and virions, calculated from the quantitative results. Asterisks indicate unknown high-density viral particles detected by anti-HBcAg or anti-HBsAg antibodies but devoid of any HBV-specific nucleic acids. M, markers (E. coli-derived HBV capsids or DNA and RNA ladders as described in the legend to [Fig. 1](#F1){ref-type="fig"}).](zjv0241840640002){#F2}

Extracellular HBV RNAs and immature viral DNA are detected in sera from CHB patients. {#s2.2}
-------------------------------------------------------------------------------------

Employing the HepAD38 cell culture system, we demonstrated the presence of extracellular HBV RNAs and of immature and mature viral DNA packaged in both naked capsids and virions. Interestingly, Southern blot analyses showed that SS DNA could also be observed in serum samples from some CHB patients. We speculated that SS DNA in circulation would be carried by capsid particles released by HBV-infected hepatocytes into patients' bloodstreams. However, we reasoned that, owing to the strong immunogenicity of naked capsids ([@B32], [@B33]), it would be difficult to detect them as free particles; rather, they would form complexes with specific anti-HBcAg antibodies and therefore circulate as antigen-antibody complexes ([@B25], [@B32][@B33][@B34]). To test this possibility, we used protein A/G agarose beads to pull down the immune complexes.
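The shares and ratios reported for Fig. 2G and H are straightforward arithmetic on qPCR copy numbers. A minimal sketch, using invented copy numbers chosen only to land near the reported 60%/40% DNA and 84%/16% RNA splits (the measured values are those in the figure, not these):

```python
def nucleic_acid_summary(capsid_dna, capsid_rna, virion_dna, virion_rna):
    """Compute each particle type's share of the total DNA/RNA signal and
    the DNA/RNA ratio within each particle type (inputs in copies/ml)."""
    total_dna = capsid_dna + virion_dna
    total_rna = capsid_rna + virion_rna
    return {
        "capsid_dna_share_pct": 100 * capsid_dna / total_dna,
        "capsid_rna_share_pct": 100 * capsid_rna / total_rna,
        "capsid_dna_rna_ratio": capsid_dna / capsid_rna,
        "virion_dna_rna_ratio": virion_dna / virion_rna,
    }

# Hypothetical copy numbers (illustrative only, not measured values):
summary = nucleic_acid_summary(
    capsid_dna=6.0e8, capsid_rna=2.0e8,
    virion_dna=4.0e8, virion_rna=0.38e8,
)
```

The lower DNA/RNA ratio in naked capsids than in virions is what indicates that capsids are the RNA-richer particle.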
Forty-five serum samples obtained from CHB patients, with HBV DNA titers higher than 10^7^ IU per ml, were examined for the presence of particles containing SS DNA by a combination of protein A/G agarose bead pulldown assay and Southern blot analysis ([Fig. 3A](#F3){ref-type="fig"} and [B](#F3){ref-type="fig"}). SS DNA was detected, albeit to different extents, in 34 serum samples ([Fig. 3A](#F3){ref-type="fig"} and [B](#F3){ref-type="fig"}, upper). The particles containing SS DNA were pulled down by protein A/G agarose beads from 11 of the 34 samples ([Fig. 3A](#F3){ref-type="fig"} and [B](#F3){ref-type="fig"}, lower). Patient sera negative for SS DNA (patients 37, 38, 14, and 35) or positive for SS DNA (patients 17, 21, 42, and 44), as determined by the protein A/G agarose bead pulldown experiments, were selected for further study ([Fig. 3C](#F3){ref-type="fig"}).

![Characterization of HBV DNA and RNA in sera of CHB patients. (A and B) Analyses of serum viral DNA from CHB patients by Southern blotting. Viral DNA was extracted from serum samples obtained from forty-five chronic hepatitis B patients (20% of the input sample used for protein A/G agarose bead pulldown) and subjected to Southern blot analysis. Alternatively, these samples were first incubated with protein A/G agarose beads, and then viral DNA in the pulldown mixtures was analyzed by Southern blotting. Serum samples selected for further examination are marked with arrows, and samples with SS DNA detected are labeled with asterisks. (C) Protein A/G agarose bead pulldown of viral particles. Sera (25 μl each) from CHB patients 37, 38, 14, and 35 (M1, mixture one) or from patients 17, 21, 42, and 44 (M2, mixture two) were pooled and incubated with protein A/G agarose beads. Viral DNA in the input sera, the protein A/G bead pulldown mixtures (beads), and the remaining supernatants (sup.) was extracted and subjected to Southern blot analysis.
(D) Northern blot detection of serum viral RNA from patients 37, 38, 14, 35, 17, 21, 42, and 44. Total RNA was extracted from serum samples with TRIzol reagent and treated with DNase I before Northern blot analysis. (E to G) Southern blot analyses of viral DNA from selected samples. Viral DNA was separated by electrophoresis through TAE or alkaline agarose gels, followed by Southern blot detection with the indicated riboprobes.](zjv0241840640003){#F3}

Northern blot analyses showed that HBV RNA was detected only in serum samples from patients 17, 21, and 42 ([Fig. 3D](#F3){ref-type="fig"}). Moreover, total viral DNA was analyzed by Southern blotting, and SS DNA was readily observed in serum samples from patients 17, 21, and 42 ([Fig. 3E](#F3){ref-type="fig"}). We also analyzed the lengths of the DNA minus and plus strands in patients' sera. Although most minus-strand DNA was complete, a small amount of viral DNA (that of patients 38, 35, 17, 21, and 42) was shorter than 3.2 knt ([Fig. 3F](#F3){ref-type="fig"}). Compared with viral minus-strand DNA, the length of plus-strand DNA, particularly in sera from patients 17, 21, and 42, was more variable, ranging from shorter than 2 knt to ∼3.2 knt ([Fig. 3G](#F3){ref-type="fig"}).

Naked capsids form CACs with anti-HBcAg antibody in blood of CHB patients. {#s2.3}
--------------------------------------------------------------------------

We showed that particles containing SS DNA were present in CHB patients' sera. To further examine these particles, we used CsCl density gradient centrifugation to fractionate a serum mixture from patients 37, 38, 14, and 35. In agreement with our earlier results ([Fig. 2A](#F2){ref-type="fig"}, lower, fractions 13 to 15, and B) and previous reports, HBV virions, with the characteristic mature viral DNA (RC or DSL DNA), were detected in fractions 12 to 14, with densities between 1.26 and 1.29 g/cm^3^ ([Fig. 4A](#F4){ref-type="fig"}) ([@B2]).
Careful inspection of the blots revealed that SS DNA could be detected, albeit at a very low level, in fractions 8 and 9, with densities from 1.33 to 1.34 g/cm^3^, and in fractions 18 to 21, with densities from 1.20 to 1.23 g/cm^3^ ([Fig. 4A](#F4){ref-type="fig"}). In contrast, CsCl density gradient separation of viral particles from the serum of patient 17 showed a mixture of mature and immature viral DNA species. SS DNA was detected at densities ranging from 1.37 to 1.20 g/cm^3^ ([Fig. 4B](#F4){ref-type="fig"}), and no distinct viral DNA (mature RC or DSL DNA) specific to virions could be identified at densities between 1.27 and 1.29 g/cm^3^. Similar results were obtained using CsCl density gradient fractionation of sera from patient 21 (not shown) and patient 46 ([Fig. 4E](#F4){ref-type="fig"}).

![CsCl density gradient analysis of hepatitis B viral particles. (A and B) CsCl density gradient analysis of viral particles in patient sera. One hundred-microliter volumes of a serum mixture from patients 37, 38, 14, and 35 (25 μl each) and 100 μl of serum from patient 17 were separated by CsCl density gradient centrifugation (2 ml of 1.18 g/cm^3^ CsCl solution in the upper layer and 2.9 ml of 1.33 g/cm^3^ CsCl solution in the lower layer). Viral DNA in each fraction was extracted and detected by Southern blotting. (C to G) CsCl density gradient analysis of viral particles treated with detergent or anti-HBcAg antibody (Ab). Concentrated HepAD38 cell culture supernatant (250 μl each) (via ultrafiltration) was either mixed with anti-HBcAg antibody (10 μl) and incubated without (C) or with NP-40 (final concentration, 1%) (D) for 1 h at room temperature and 4 h on ice, or treated with NP-40 only (G), and then fractionated by CsCl density gradient ultracentrifugation. Sera from CHB patient 46, either left untreated (E) or treated with NP-40 (final concentration, 1%) (F), were fractionated by CsCl density gradient ultracentrifugation.
Viral DNA in each fraction was extracted and subjected to Southern blot analyses.](zjv0241840640004){#F4}

We hypothesized that naked capsids could be released into the blood circulation of CHB patients but would be bound by specific antibodies. As SS DNA was detected in both high- and lower-density regions of the CsCl gradient ([Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"}), we envisaged that binding by specific antibodies changed the capsids' buoyant density. To test this, anti-HBcAg antibody was mixed with HepAD38 cell culture supernatant to mimic the postulated CACs in serum samples. The results demonstrated that, in contrast to SS DNA from naked capsids, which was distributed over three fractions at densities between 1.33 and 1.34 g/cm^3^ ([Fig. 2A](#F2){ref-type="fig"}, lower, and B), the mixture of naked capsids and CACs (SS DNA) was distributed more widely and could be detected in the lower-density region (1.25 to 1.32 g/cm^3^) ([Fig. 4C](#F4){ref-type="fig"}, fractions 11 to 16). Similarly, intracellular capsids from HepAD38 cells were incubated with anti-HBcAg antibody, and a density shift of CACs to a lower-density region was also observed (not shown). To further confirm the lower density of CACs, NCs in virions secreted into HepAD38 cell culture supernatant were treated with NP-40 and mixed with anti-HBcAg antibody. CsCl fractionation showed that naked capsids and virion-derived NCs had become a homogeneous mixture banding at densities from 1.37 to 1.27 g/cm^3^ ([Fig. 4D](#F4){ref-type="fig"}). Likewise, virion-derived NCs, obtained by treating the serum sample from patient 46 with NP-40, bound the antibody and formed new homogeneous CACs that settled at densities between 1.23 and 1.27 g/cm^3^ ([Fig. 4E](#F4){ref-type="fig"} versus [F](#F4){ref-type="fig"}).
However, NP-40 treatment alone did not produce a homogeneous mixture of naked capsids and virion-derived NCs, as these two particles still settled in distinct density regions with their characteristic viral DNA content ([Fig. 4G](#F4){ref-type="fig"}). On the other hand, DNA molecules in the two types of capsids still banded at densities between 1.38 and 1.31 g/cm^3^, further confirming that CACs have a relatively lighter density ([Fig. 4G](#F4){ref-type="fig"}). Alternatively, the appearance of a homogeneous mixture of virion-derived NCs and naked capsids ([Fig. 4D](#F4){ref-type="fig"} and [F](#F4){ref-type="fig"}) suggests the formation of higher-order antibody-mediated complexes of capsids. For instance, the complexes might not represent individual antibody-coated capsid particles but rather large CACs consisting of several capsid particles interconnected by antibodies. To verify whether intercapsid immune complexes exist, anti-HBcAg antibody was added to purified HBV capsids expressed by *Escherichia coli*, and this mixture was examined with an electron microscope. E. coli-derived capsids were scattered as separate, distinct particles ([Fig. 5A](#F5){ref-type="fig"}). However, addition of antibody caused the capsids to aggregate into clusters, making them too thick to be properly stained ([Fig. 5B](#F5){ref-type="fig"}). Despite this, a few capsids, which might not have been bound by antibodies or might have been associated with antibodies without forming intercapsid antibody complexes, could be observed by electron microscopy (EM) ([Fig. 5B](#F5){ref-type="fig"}).

![EM analysis of hepatitis B viral particles. (A and B) EM of E. coli-derived HBV capsids incubated without or with anti-HBcAg antibody. (C) EM of viral particles prepared from sera of CHB patients. Serum mixtures (obtained from patients 11, 22, 23, 27, 28, 30, and 41) depleted of HBsAg particles were negatively stained and examined with an electron microscope.
The 42-nm HBV virions (arrowhead) and 27-nm naked capsids (arrow) are indicated, while the smaller 22-nm rods and spheres of HBsAg particles could also be observed but are not pointed out. Scale bars indicate 200 nm or 500 nm.](zjv0241840640005){#F5} We then examined CACs in serum samples from CHB patients by EM. Sera from patients 11, 17, 21, 22, 23, 27, 28, 30, and 41, positive for SS DNA, were combined. Serum mixtures, with HBsAg particles diminished by centrifugation through 20% and 45% (wt/wt) sucrose cushions, were examined by EM. The 27-nm capsid particles or CACs were visible ([Fig. 5C](#F5){ref-type="fig"}, arrow) along with the 42-nm HBV virions ([Fig. 5C](#F5){ref-type="fig"}, arrowheads) and the 22-nm spheres and rods of residual HBsAg particles (not indicated). However, the picture was not clear enough for us to conclusively determine whether capsids were connected by or bound with antibodies, as described for an unrelated virus in *in vitro* experiments ([@B35]). In addition, it is possible that some of the CACs are not visible by EM, as the complexes may be too thick to gain clear contrast between lightly and heavily stained areas ([Fig. 5B](#F5){ref-type="fig"}). Lastly, CACs might be heterogeneous, having different molecular sizes and isoelectric points (pI) in hepatitis B patients' blood circulation. *In vitro* binding of naked capsids derived from HepAD38 cell culture supernatant with anti-HBcAg antibody changed their electrophoretic behavior and made them unable to enter the TAE-agarose gel ([Fig. 6A](#F6){ref-type="fig"}). Moreover, viral particles from sera of patients 0, 37, 38, 14, 35, 17, 21, 42, and 44 could not enter agarose gels prepared in TAE buffer. However, in buffer with a higher pH value (10 mM NaHCO~3~, 3 mM Na~2~CO~3~, pH 9.4), they appeared as smearing bands on blots ([Fig. 6B](#F6){ref-type="fig"} and [C](#F6){ref-type="fig"}).
Hence, the irregular electrophoretic behavior of these viral particles may result from changes in molecular size and/or pI value of capsid particles (pI 4.4) following their association with specific immunoglobulin G (or other types of antibodies) having different pI values (the pI of human IgG may range from 6.5 to 9.5) ([@B36][@B37][@B39]). ![Native agarose gel analysis of viral particles in sera from hepatitis B patients. (A) Native agarose gel analysis of viral particles from HepAD38 cell culture supernatant. Ten microliters of HepAD38 cell culture supernatant (concentrated by ultrafiltration) incubated with or without anti-HBcAg antibody was resolved by native (TAE) agarose gel (0.8%) electrophoresis, followed by hybridization with minus-strand-specific riboprobe. (B and C) Native agarose gel analysis of viral particles from serum samples of hepatitis B patients in buffers with different pH values. Ten microliters of concentrated HepAD38 cell culture supernatant, plasma sample of patient 0 (not concentrated), and serum of a chronic hepatitis B carrier without liver inflammation (ctrl serum) were loaded into agarose gels prepared in TAE buffer (pH 8.3) (B, left) or Dunn carbonate buffer (10 mM NaHCO~3~, 3 mM Na~2~CO~3~, pH 9.4) (B, right) and separated overnight. Viral particle-associated DNA was detected by hybridization with specific riboprobe. Sera from patients 37, 38, 14, 35, 17, 21, 42, and 44 (10 μl each) were resolved by electrophoresis through 0.7% high-strength agarose (type IV agarose used for pulsed-field gel electrophoresis) gels prepared in TAE (C, left) or carbonate buffer (C, right), followed by probe hybridization.](zjv0241840640006){#F6} Circulating HBV RNAs are of heterogeneous lengths and associated with CACs and virions in hepatitis B patient's plasma.
{#s2.4} ----------------------------------------------------------------------------------------------------------------------- To characterize HBV RNAs circulating in CHB patients' sera, a plasma sample from patient 0 was studied. Similar to results obtained for patients 17, 21, and 46 ([Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"} and not shown), viral DNA in the plasma sample of patient 0 was detected in a broad density range in CsCl gradient, and no distinct bands specific to HBV virions or naked capsids could be identified, indicating the presence of a mixture of virions and CACs ([Fig. 7A](#F7){ref-type="fig"}). ![Characterization of nucleic acid content within viral particles in plasma sample from patient 0. (A) CsCl density gradient analysis of plasma sample. CsCl salt was added directly to plasma from patient 0 to a concentration of 21% (wt/wt) or 34% (wt/wt). Two milliliters of the 21% CsCl-plasma mixture was underlayered with 2.9 ml of the 34% CsCl-plasma mixture, followed by ultracentrifugation. Viral DNA from each fraction was extracted and subjected to Southern blot analysis. (B) Sucrose gradient analysis of concentrated plasma sample. Five hundred microliters of concentrated plasma sample (via ultracentrifugation through a 20% sucrose cushion) was fractionated in a 10% to 60% (wt/wt) sucrose gradient. PreS1 and HBsAg levels were determined by ELISA. Viral DNA and RNA were detected by Southern and Northern blotting with minus- or plus-strand-specific riboprobes. HBsAg, PreS1, and viral DNA and RNA (quantified from the gray density of viral DNA/RNA bands, middle and lower) signals and sucrose density were plotted together. (C) Analysis of concentrated plasma sample with lower CsCl density gradient centrifugation.
Two hundred fifty microliters of concentrated plasma sample was mixed with 2.2 ml TNE buffer and 2.45 ml of 37% (wt/wt) CsCl-TNE buffer (resulting in a homogeneous CsCl solution with a density of about 1.18 g/cm^3^), followed by ultracentrifugation. DNA in viral particle pellets (lane P) that stuck to the sidewall of centrifugation tubes was recovered by digestion with SDS-proteinase K solution. Viral DNA and RNA were subjected to Southern and Northern blot analyses. (D) Analysis of concentrated plasma sample with a higher-density CsCl gradient centrifugation. Two hundred fifty microliters of concentrated plasma sample was mixed with 1 ml of TNE buffer and 1.25 ml of 37% (wt/wt) CsCl-TNE buffer and underlayered with 2.4 ml of 27% (wt/wt) (1.25 g/cm^3^) CsCl-TNE solution, followed by ultracentrifugation. HBV DNA and RNA were detected by Southern and Northern blotting.](zjv0241840640007){#F7} Furthermore, viral particles were pelleted through a 20% sucrose cushion and separated in a sucrose gradient. HBsAg was detected in fractions 5 to 14, peaking at fraction 11. The PreS1 antigen was found in fractions 5 to 12 with peaks at fractions 7 and 10, indicating its presence in both HBsAg particles and HBV virions ([Fig. 7B](#F7){ref-type="fig"}, upper). Viral DNA, representing a combination of both mature and immature viral DNA, was detected in fractions 4 to 9 ([Fig. 7B](#F7){ref-type="fig"}, middle), suggesting the localization of CACs and virions in these fractions. HBV RNA was detected between fractions 5 and 7 and appeared in the same peak as viral DNA ([Fig. 7B](#F7){ref-type="fig"}, lower), suggesting that HBV RNA is incorporated into the same viral particles as viral DNA. Therefore, circulating HBV RNA may be localized within CACs and/or virions.
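The CsCl percentages and densities quoted in the legend above can be cross-checked with a small sketch. It uses the standard empirical relation for CsCl solutions at 25°C, % (wt/wt) = 137.48 - 138.11/ρ, and a naive volume-weighted estimate for the diluted spin in panel C; the plasma and TNE densities and the volume-additive mixing rule are simplifying assumptions, not values from this study.

```python
# Back-of-the-envelope check of the CsCl densities quoted in the legend.
# Empirical relation for CsCl at 25 C: % (wt/wt) = 137.48 - 138.11 / rho.
# The mixing estimate ignores non-ideal volume changes (assumption).

def cscl_density(pct_wt):
    """Density (g/cm^3) of a CsCl solution of the given % (wt/wt)."""
    return 138.11 / (137.48 - pct_wt)

def mixed_density(parts):
    """Volume-weighted density of mixed solutions: [(ml, g/cm^3), ...]."""
    total_ml = sum(v for v, _ in parts)
    return sum(v * d for v, d in parts) / total_ml

# 27% (wt/wt) CsCl-TNE comes out near the 1.25 g/cm^3 quoted for panel D
rho_27 = cscl_density(27)

# panel C: 0.25 ml plasma (~1.02, assumed) + 2.2 ml TNE (~1.00, assumed)
# + 2.45 ml of 37% (wt/wt) CsCl-TNE -> close to the stated ~1.18 g/cm^3
rho_mix = mixed_density([(0.25, 1.02), (2.2, 1.00), (2.45, cscl_density(37))])
```

With these assumptions the 27% solution evaluates to about 1.25 g/cm^3 and the panel C mixture to roughly 1.19 g/cm^3, consistent with the legend's stated ~1.18 g/cm^3 once non-ideal mixing is allowed for.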
To better characterize HBV RNA in CACs and virions, the plasma sample from patient 0 was centrifuged through a 20% sucrose cushion and pellets were fractionated in a homogeneous CsCl solution (1.18 g/cm^3^) as previously described ([@B8]). However, possibly due to a tendency of capsid particles to aggregate and stick to the wall of the centrifugation tube and the low density of the initial CsCl solution ([@B8], [@B40]), only mature DNA species from virions were detected at densities ranging from 1.22 to 1.24 g/cm^3^ ([Fig. 7C](#F7){ref-type="fig"}, upper). Northern blot analyses demonstrated that the lengths of virion-associated HBV RNAs were approximately several hundred nucleotides ([Fig. 7C](#F7){ref-type="fig"}, lower). Virion-associated RNAs were unlikely to be contaminated by CAC-associated HBV RNAs, since immature SS DNA could not be observed even after a long exposure of the X-ray film. Moreover, RNA molecules would have been longer if there had been CAC contamination ([Fig. 7D](#F7){ref-type="fig"}, lower). Viral nucleic acids in pellets recovered from the centrifugation tube sidewalls could be readily detected on Northern ([Fig. 7C](#F7){ref-type="fig"}, lower, lane P) or Southern ([Fig. 7C](#F7){ref-type="fig"}, upper, lane P) blots using plus-strand-specific rather than minus-strand-specific riboprobe. To analyze viral nucleic acids in CACs, concentrated plasma sample was separated in a higher CsCl density gradient (1.18 g/cm^3^ and 1.25 g/cm^3^). Both mature and immature viral DNA species were detected only in fractions with densities from 1.21 to 1.26 g/cm^3^ ([Fig. 7D](#F7){ref-type="fig"}, upper), indicating the presence of a mixture of HBV virions and CACs. Viral RNAs were detected and ranged in length from a little shorter than the full-length pgRNA to a few hundred nucleotides ([Fig. 7D](#F7){ref-type="fig"}, lower). Compared to virion-associated RNAs ([Fig.
7C](#F7){ref-type="fig"}, lower), HBV RNA species detected in the mixture of CACs and virions were longer, with the longer RNA molecules possibly being associated with CACs. Extracellular HBV RNAs could serve as templates for synthesis of viral DNA. {#s2.5} --------------------------------------------------------------------------- Intracellular NCs are known to contain viral nucleic acids in all steps of DHBV DNA synthesis, including pgRNA, nascent minus-strand DNA, SS DNA, and RC DNA or DSL DNA ([@B5]). Our results showed that naked capsids contained almost the same DNA replicative intermediates as intracellular NCs ([Fig. 1B](#F1){ref-type="fig"} and [2B](#F2){ref-type="fig"}) ([@B7], [@B11]). We also demonstrated that extracellular HBV RNAs within the naked capsids, CACs, and virions were heterogeneous in length ([Fig. 1B](#F1){ref-type="fig"}, lower, [2F](#F2){ref-type="fig"}, and [7C](#F7){ref-type="fig"} and [D](#F7){ref-type="fig"}). In the presence of deoxynucleoside triphosphates (dNTPs), viral RNA could be degraded and reverse transcribed into minus-strand DNA by the endogenous polymerase *in vitro* ([@B5], [@B41], [@B42]). Also, incomplete plus-strand DNA with a gap of about 600 to 2,100 bases could be extended by endogenous polymerase ([@B43], [@B44]). Based on these results, we wished to examine whether extracellular HBV RNAs could serve as RNA templates for viral DNA synthesis and be degraded by polymerase in the process. As shown in [Fig. 8](#F8){ref-type="fig"}, endogenous polymerase assay (EPA) treatment of extracellular viral particles from either culture supernatant of HepAD38 cells or plasma sample from patients led to DNA minus ([Fig. 8A](#F8){ref-type="fig"} and [C](#F8){ref-type="fig"})- and plus ([Fig. 8B](#F8){ref-type="fig"} and [D](#F8){ref-type="fig"})-strand extension and, more importantly, HBV RNA signal reduction ([Fig. 8E](#F8){ref-type="fig"}, lane 4 versus 6 and lane 8 versus 10). 
The apparent low efficiency of the EPA reaction might have been due to our hybridization method, which detected both extended and unextended DNA strands rather than only newly extended DNA. ![Analysis of extracellular HBV DNA and RNA by EPA. (A to D) Southern blot analysis of viral DNA strand elongation after EPA treatment. EPA was carried out employing HepAD38 cell culture supernatant and plasma sample from patient 0. Total nucleic acids were extracted via the SDS-proteinase K method. Viral DNA was separated by electrophoresis in TAE or alkaline agarose gels, followed by Southern blot analysis with minus- or plus-strand-specific riboprobes. (E) Northern blot analysis of viral RNA change upon EPA treatment. Total viral nucleic acids (lanes 3, 5, 7, and 9) or RNA (treated with DNase I) (lanes 4, 6, 8, and 10) were separated by formaldehyde-MOPS agarose gel electrophoresis and subjected to Northern blotting.](zjv0241840640008){#F8} In the process of HBV DNA replication, prior to minus-strand DNA synthesis, capsid-associated RNA is the full-length pgRNA. Upon transfer of the viral polymerase-DNA primer to the 3′ DR1 region of pgRNA and cleavage of the 3′ epsilon loop RNA (leaving a 3.2-knt pgRNA fragment), minus-strand DNA synthesis initiates and the pgRNA template is continuously cleaved from 3′ to 5′ by the RNase H activity of the viral polymerase. Consequently, from the initiation to the completion of minus-strand DNA synthesis, there will be a series of pgRNA fragments with receding 3′ ends ranging from 3.2 knt down to the 18 nt of the 5′ cap RNA primer ([@B2], [@B21][@B22][@B24]), representing the RNA templates that have not yet been reverse transcribed into minus-strand DNA. In addition to pgRNA with receding 3′ ends, there are also short RNA fragments arising from intermittent nicks by the RNase H domain of the polymerase. Therefore, we used RNA probes spanning the HBV genome to map whether these RNA molecules are present in extracellular naked capsids and virions.
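The expected outcome of such probe mapping can be sketched as a toy model. All coordinates, probe windows, and fragment lengths below are illustrative assumptions (a simplified 3,200-nt pgRNA and five evenly spaced hypothetical probes), not the actual probes or termini used in this study.

```python
# Toy model of the probe-mapping logic: pgRNA is degraded from its 3' end as
# minus-strand DNA synthesis proceeds, so 3'-receding fragments all share the
# 5' cap; short internal fragments arise from intermittent RNase H nicks.

PGRNA_LEN = 3200  # simplified pgRNA length in nt (assumption)

# five hypothetical, non-overlapping probe windows laid 5' -> 3' along pgRNA
PROBES = {k: ((k - 1) * 640, k * 640) for k in range(1, 6)}

def probe_hits_receding(probe, end):
    """A 3'-receding fragment spans [0, end); a probe detects it only if the
    fragment extends past the 5' edge of the probe window."""
    start, _ = PROBES[probe]
    return end > start

def min_detectable_length(probe):
    """Shortest 3'-receding fragment (nt) a probe can detect, on a 100-nt grid."""
    ends = range(100, PGRNA_LEN + 1, 100)
    return min(e for e in ends if probe_hits_receding(probe, e))

# 5'-proximal probes see short and long fragments alike; 3'-proximal probes
# see only near-full-length species -- the band shift toward full-length
# pgRNA described for the blots.
shift = [min_detectable_length(p) for p in sorted(PROBES)]

def probe_hits_internal(probe, frag_start, frag_len=150):
    """Internal nick products (~100-200 nt) can fall anywhere, so every probe
    window overlaps some of them: fast-migrating species seen by all probes."""
    start, stop = PROBES[probe]
    return frag_start < stop and frag_start + frag_len > start
```

Under these assumptions the minimum detectable receding-fragment length grows monotonically from probe 1 to probe 5, reproducing the qualitative shift pattern, while short internal fragments remain detectable by every probe.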
Five probes that spanned the HBV genome, except for the overlapping region between the 5′ end of pgRNA and the RNA cleavage site (nt 1818 to 1930), were prepared to map the extracellular HBV RNAs from HepAD38 cell culture supernatant ([Fig. 9A](#F9){ref-type="fig"}). Intracellular nucleocapsid-associated HBV RNA from HepAD38 cells was used as a reference. As the probes moved from the 5′ end to the 3′ end of pgRNA, especially for probes 1 to 4, RNA bands shifted from a wider range, including both short and long RNA species, to a narrower range, close to full-length pgRNA, with fewer RNA species detected ([Fig. 9A](#F9){ref-type="fig"}, upper, lanes 2, 5, 8, 11, 14, and 17). Similarly, with the probes moving from the 5′ end to the 3′ end of pgRNA, the strongest band of extracellular HBV RNAs detected by each probe, especially probes 1 to 4, also shifted toward the longer RNA migration region ([Fig. 9A](#F9){ref-type="fig"}, upper, lanes 3, 6, 9, 12, 15, and 18). It should be noted that the shifting pattern was more apparent when RNAs were detected with probes 1 to 4 but not with probe 5. It is possible that the reverse transcription speed is relatively fast in the initial step (from the 3′ end of pgRNA, which overlaps the probe 5 sequence), and as a result, fewer pgRNA fragments will harbor the RNA sequence targeted by probe 5. Also, a short RNA species from either intracellular nucleocapsids or naked capsids and virions migrated faster than 0.7 knt and could be detected by all probes ([Fig. 9A](#F9){ref-type="fig"}, upper, lanes 2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, and 18). These RNA molecules likely represent the pgRNA fragments that have been hydrolyzed by the RNase H domain of the viral polymerase (including the 3′ epsilon loop RNA cleaved by polymerase in the reverse transcription step) ([@B24]).
Collectively, as predicted, the longer extracellular HBV RNA species that migrated more slowly, closer to the position of pgRNA, had longer 3′ ends; the shorter viral RNA molecules that migrated faster had relatively shorter 3′ ends; and the RNA species detected by all probes may represent products of pgRNA hydrolysis. ![Mapping and identifying 3′ ends of extracellular HBV RNAs. (A) Northern blot detection of extracellular HBV RNAs with various riboprobes. Viral RNA from cytoplasmic (C) nucleocapsids (lanes 2, 5, 8, 11, 14, and 17) or culture supernatant (S) (lanes 3, 6, 9, 12, 15, and 18) of HepAD38 cells was extracted with TRIzol reagent and treated with DNase I before Northern blot analysis with plus-strand-specific riboprobes spanning the HBV genome as indicated. pgRNA was used as a reference, and map coordinates were numbered according to the sequence of the HBV genome (genotype D, accession number [AJ344117.1](https://www.ncbi.nlm.nih.gov/nuccore/AJ344117.1)). (B) Identification of 3′ ends of extracellular HBV RNAs. 3′ Ends of extracellular HBV RNAs were identified by the 3′ RACE method using different HBV-specific anchor primers (the same 5′ primers used for generating templates for producing the riboprobes used in panel A, lower). Identified 3′ ends were numbered as described above, and numbers in parentheses indicate the number of clones with the same 3′ ends. The asterisk indicates unknown nucleic acid copurified with intracellular capsid-associated viral RNA by TRIzol reagent. FL, full-length; Cap, 5′ cap of pregenomic RNA; pA, the polyadenylation site; An, poly(A) tail.](zjv0241840640009){#F9} These results were further confirmed by employing a 3′ rapid amplification of cDNA ends (RACE) method. Various 3′ ends spanning the HBV genome were identified ([Fig. 9B](#F9){ref-type="fig"}), validating the presence of 3′ receding RNA and the heterogeneous nature of extracellular HBV RNAs.
EPA treatment clearly demonstrated that extracellular HBV RNAs could be used as templates for DNA synthesis, and the presence of 3′ receding-end pgRNA fragments further confirmed not only the existence but also the use of such molecules as templates for viral DNA synthesis. Therefore, just like their viral RNA counterparts within intracellular NCs, extracellular HBV RNA molecules represent RNA molecules generated in the process of viral DNA replication. ETV reduces viral DNA level but increases extracellular HBV RNA level in naked capsids and virions *in vitro*. {#s2.6} -------------------------------------------------------------------------------------------------------------- Entecavir (ETV), widely used in anti-HBV therapy, is a deoxyguanosine analog that blocks the reverse transcription and plus-strand DNA synthesis steps of the HBV DNA replication process ([@B45][@B46][@B47]). Treatment of CHB patients with nucleos(t)ide analogs (NAs), including entecavir, efficiently reduces the level of serum viral DNA but at the same time increases circulating HBV RNA levels ([@B28], [@B48][@B49][@B52]). We examined the effect of entecavir on the levels of both intracellular and extracellular viral nucleic acids in HepAD38 cell culture. Total viral RNA level remained unchanged or marginally increased upon entecavir treatment ([Fig. 10A](#F10){ref-type="fig"}), and the intracellular capsid-associated viral RNA level was increased ([Fig. 10B](#F10){ref-type="fig"}, upper). In contrast, and as expected, the intracellular capsid-associated viral DNA level was decreased ([Fig. 10B](#F10){ref-type="fig"}, lower). Similarly, extracellular viral DNA synthesis was significantly inhibited, while viral RNA was increased ([Fig. 10C](#F10){ref-type="fig"} and [D](#F10){ref-type="fig"}). Quantitative results showed that entecavir suppressed extracellular viral DNA to about one-tenth of the level of the untreated group but at the same time increased viral RNA about twofold ([Fig.
10E](#F10){ref-type="fig"}). ![Analysis of HBV DNA and RNA change upon entecavir treatment of HepAD38 cells. (A) Change of total cellular HBV RNA level upon entecavir (ETV) treatment. HepAD38 cells were treated with ETV (0.1 μM) for 4 days, and total cellular RNA was analyzed by Northern blotting with ribosomal RNAs serving as loading controls. (B) Change of intracellular nucleocapsid-associated viral RNA (core RNA) and DNA (core DNA) level after ETV treatment. Cytoplasmic core RNA was extracted by the SDS-proteinase K method and analyzed by Northern blotting. Intracellular nucleocapsids were first separated by native agarose gel electrophoresis, and capsid-associated viral DNA (core DNA) was then probed with minus-strand-specific riboprobe. (C to E) Change of extracellular HBV DNA and RNA level upon ETV treatment. Total nucleic acids in HepAD38 cell culture supernatant were extracted and subjected to Southern and Northern blot analyses with specific riboprobes or quantification by PCR. (F to H) CsCl density gradient analysis of viral DNA/RNA level in naked capsids and virions after ETV treatment. HepAD38 cells were left untreated or were treated with ETV, and culture media were concentrated by ultrafiltration, followed by fractionation in CsCl density gradients as described in the legend to [Fig. 4](#F4){ref-type="fig"}. Viral particles in each fraction were separated by native agarose gel electrophoresis, followed by immunoblotting with anti-HBcAg antibody. Viral DNA and RNA were extracted and subjected to Southern or Northern blot analyses.](zjv0241840640010){#F10} Since viral DNA and RNA were enclosed in both naked capsids and virions, CsCl density gradient was used to separate these particles and to further study the antiviral effect of entecavir. As shown in [Fig. 10](#F10){ref-type="fig"}, DNA-containing naked capsids were detected in fractions 6 to 11 and virions in fractions 15 to 24 ([Fig. 10F](#F10){ref-type="fig"}). 
Entecavir effectively reduced viral DNA ([Fig. 10G](#F10){ref-type="fig"}, fractions 6 to 10 and 15 to 17; this was also seen in a longer exposure of [Fig. 10G](#F10){ref-type="fig"} \[not shown\]) but increased viral RNA content mainly in naked capsids ([Fig. 10H](#F10){ref-type="fig"}, fractions 6 to 9). Moreover, the increase in RNA content within naked capsids led to an increased density of naked capsids ([Fig. 10F](#F10){ref-type="fig"}, fractions 6 and 11, lower, versus fractions 6 and 11, upper). Interestingly, entecavir seemed to reduce HBcAg signal within virions (i.e., empty virions) ([Fig. 10F](#F10){ref-type="fig"}, fractions 15 to 21, upper, versus fractions 15 to 21, lower) while increasing the egress of naked capsids from HepAD38 cells (data not shown). DISCUSSION {#s3} ========== The RNA molecules in either intracellular NCs or extracellular virions were reported more than three decades ago ([@B5], [@B41], [@B42]), and naked capsids were shown to carry pgRNA *in vitro* ([@B9], [@B11]). Recently, it was suggested that the extracellular or circulating HBV RNA could serve as a surrogate marker to evaluate the endpoint of hepatitis B treatment ([@B27], [@B30], [@B48][@B49][@B53]). With this in mind and to facilitate its application as a novel biomarker for viral persistence, we studied the origin and characteristics of extracellular HBV RNA. In the present study, we extensively characterized extracellular HBV RNAs and demonstrated that extracellular HBV RNAs were mainly enclosed in naked capsids rather than complete virions in supernatant of HepAD38 cells ([Fig. 1B](#F1){ref-type="fig"} and [2F](#F2){ref-type="fig"}). These RNAs were of heterogeneous lengths, ranging from full-length pgRNA (3.5 knt) to a few hundred nucleotides. Furthermore, circulating HBV RNAs, also heterogeneous in length, were detected in blood of hepatitis B patients ([Fig. 3D](#F3){ref-type="fig"} and [7C](#F7){ref-type="fig"} and [D](#F7){ref-type="fig"}). 
Interestingly, the detection of HBV RNAs coincided with the presence of immature HBV DNA ([Fig. 3D](#F3){ref-type="fig"} and [E](#F3){ref-type="fig"}). Isopycnic CsCl gradient ultracentrifugation of RNA-positive serum samples revealed a broad distribution of immature HBV DNA, which contrasted with the results obtained in HepAD38 cells ([Fig. 2B](#F2){ref-type="fig"} versus [Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"} and [7A](#F7){ref-type="fig"}). For the first time, we provided convincing evidence that unenveloped capsids containing the full spectrum of HBV replication intermediates and RNA species that are heterogeneous in length could be detected in the circulation of chronic hepatitis B patients. In view of our results and literature reports ([@B2], [@B21][@B22][@B24]), the presence of extracellular HBV RNAs can readily be interpreted in the context of the HBV DNA replication model ([Fig. 11A](#F11){ref-type="fig"}). Since naked capsids contain viral DNA at all maturation levels, they will also carry HBV RNA molecules originating from pgRNA, including full-length pgRNA prior to minus-strand DNA synthesis, pgRNA with 3′ receding ends, and the pgRNA hydrolysis fragments. On the other hand, virions, which contain only mature forms of viral DNA species, would likely bear only the hydrolyzed short RNA fragments remaining in the nucleocapsid ([@B43]). Likewise, the HBV RNA species found in CACs are longer than those in virions in sera of hepatitis B patients ([Fig. 7D](#F7){ref-type="fig"}, lower, versus C, lower). In line with this reasoning, treatment of HepAD38 cells with entecavir reduced viral DNA in naked capsids and virions ([Fig. 10C](#F10){ref-type="fig"}, [E](#F10){ref-type="fig"}, and [G](#F10){ref-type="fig"}) but at the same time increased the HBV RNA content within naked capsids ([Fig. 10H](#F10){ref-type="fig"}). This may be a result of the stalled activity of viral RT with concomitant shutdown of RNA hydrolysis ([@B46], [@B54]).
![Models for the content of extracellular HBV RNAs and the formation of circulating CACs. (A) HBV RNA molecules present in the process of DNA synthesis. HBV RNAs are included in the following DNA synthesis steps: 1, encapsidation of full-length pgRNA into NCs; 2, transfer of polymerase-DNA primer to the 3′ DR1 region and initiation of minus-strand DNA synthesis (3′ epsilon loop of pgRNA will be cleaved by RNase H domain of polymerase); 3, elongation of minus-strand DNA. With the extension of minus-strand DNA, pgRNA will be continuously cleaved from the 3′ end, generating pgRNA fragments with receding 3′ ends and pgRNA hydrolysis fragments. (B) Possible forms of circulating CACs. Intracellular NCs with pgRNA or pgRNA fragment and DNA replicative intermediates released into blood circulation of CHB patients are bound with specific antibodies (IgG), forming various forms of CACs.](zjv0241840640011){#F11} Contrary to a recent report claiming that the pgRNA-containing NCs can be enveloped and secreted as virions ([@B27]), we clearly demonstrated that secreted naked capsids carry the majority of HBV RNAs ([Fig. 1B](#F1){ref-type="fig"} and [2F](#F2){ref-type="fig"}) and that virion-associated RNAs are approximately several hundred nucleotides long ([Fig. 1B](#F1){ref-type="fig"} and [7C](#F7){ref-type="fig"}). Our results are consistent with earlier reports demonstrating that only mature nucleocapsids with RC/DSL DNA are enveloped and secreted as virions ([@B6][@B7][@B8], [@B11]), and under this condition, virions carry only short RNase H-cleaved pgRNA ([Fig. 11A](#F11){ref-type="fig"}, step 3). In this research, we were unable to separate hydrolyzed pgRNA fragments from the pgRNA and pgRNA with 3′ receding ends. Thus, the length of these RNA molecules could not be determined. The existence of hydrolyzed RNA products during reverse transcription is not without precedent. 
In some retroviruses, the DNA polymerization speed of RT is greater than the RNA hydrolysis speed of RNase H, and hydrolysis of the RNA template is therefore often incomplete ([@B55], [@B56]). For example, the RT of avian myeloblastosis virus (AMV) hydrolyzed the RNA template once for every 100 to 200 nt, while the cleavage frequency of the RTs of human immunodeficiency virus type 1 (HIV-1) and Moloney murine leukemia virus (MoMLV) appeared to be around once every 100 to 120 nt ([@B57]). Moreover, RNA secondary structures, such as hairpins, may stall RT activity and promote RNase H cleavage, producing shorter RNA fragments ([@B55], [@B56]). Furthermore, the cleaved RNA fragments may not dissociate but instead anneal to the nascent minus-strand DNA, forming DNA-RNA hybrids until they are displaced by plus-strand DNA synthesis ([@B55], [@B56]). Although similar studies on HBV replication were hampered by the lack of a fully functional viral polymerase *in vitro* ([@B58][@B59][@B61]), the reported presence of DNA-RNA hybrid molecules clearly indicates the existence of degraded pgRNA fragments that still anneal to the minus-strand DNA ([@B5], [@B41], [@B42], [@B62]). Consistent with a previous study, our results also showed that at least part of the SS DNA is associated with RNA molecules as DNA-RNA hybrids, as detected by either RNase H digestion or cesium sulfate density gradient separation ([@B5] and data not shown). Given the fact that HBV RNA and immature HBV DNA are packaged in naked capsids ([Fig. 1B](#F1){ref-type="fig"} and [2B](#F2){ref-type="fig"} and [F](#F2){ref-type="fig"}) ([@B11]), we postulated that, in CHB patients, unenveloped capsids are released into the circulation, where they rapidly form CACs with anti-HBcAg antibodies ([Fig. 11B](#F11){ref-type="fig"}) ([@B25], [@B33], [@B34]).
In support of this notion, we showed that protein A/G agarose beads could specifically pull down particles with mature and immature HBV DNA from sera of CHB patients, implying the involvement of antibody. Addition of anti-HBcAg antibody to HepAD38 cell culture supernatant led to a shift of the naked capsids' buoyant density to lower-density regions ([Fig. 4C](#F4){ref-type="fig"} and [D](#F4){ref-type="fig"}), a pattern similar to that obtained in HBV RNA-positive serum samples ([Fig. 4B](#F4){ref-type="fig"} and [E](#F4){ref-type="fig"}, and [7A](#F7){ref-type="fig"}). These particles exhibited heterogeneous electrophoretic behavior that differed from that of particles in HepAD38 culture supernatant, suggesting that they are not individual naked capsid particles but are associated with antibodies and have nonuniform compositions ([Fig. 6](#F6){ref-type="fig"} and [11B](#F11){ref-type="fig"}) ([@B36][@B37][@B38]). In CHB patients, the high titers of anti-HBcAg antibodies, which can exceed 10,000 IU/ml, preclude the circulation of antibody-unbound naked capsids ([@B63]). Indeed, the excess anti-HBcAg antibodies present in the plasma sample of patient 0 were able to pull down naked capsids from the culture supernatant of HepAD38 cells (not shown). We have thus demonstrated the presence of circulating CACs as a new form of naked capsids in CHB patients. It is known that naked capsid particles can be secreted either by the natural endosomal sorting complex required for transport (ESCRT) pathway ([@B15][@B16][@B17]) or possibly by cell lysis consequent to liver inflammation. Our preliminary clinical data (not shown) are in agreement with a recent study showing an association of circulating HBV RNA with serum ALT level ([@B64]). However, this connection can be interpreted in a different manner, as the capsid-antibody complexes might constitute a danger signal triggering inflammation.
Interestingly, the release of naked capsids seems to be an intrinsic property of hepadnaviruses preserved through evolution. Recent studies by Lauber et al. provided evidence for the ancient origin of HBV, descending from nonenveloped progenitors in fish, with the envelope protein gene emerging *de novo* much later ([@B65]). Thus, it is reasonable to propose that the active release of HBV capsid particles should be deemed a natural course of viral egress. Apart from HBV particles, it was also reported that exosomes could serve as HBV DNA or RNA carriers ([@B29], [@B66], [@B67]). However, HBV DNA and RNA were detected in naked capsid or CAC and virion fractions rather than in the lower-density regions where membrane vesicles like HBsAg particles (density of 1.18 g/cm^3^) and exosomes (density of 1.10 to 1.18 g/cm^3^) would likely settle ([@B2], [@B27], [@B48], [@B68], [@B69]) ([Fig. 1](#F1){ref-type="fig"} and [7B](#F7){ref-type="fig"}). As a result, it is not likely that exosomes serve as the main vehicles carrying HBV DNA or RNA molecules. Multiple lines of evidence showed that HBV spliced RNAs also represent a species of extracellular HBV RNAs ([@B28], [@B70], [@B71]). However, in HepAD38 cells, as most of the RNAs are transcribed from the integrated HBV sequence rather than from the cccDNA template, pgRNA packaged into nucleocapsids is the predominant RNA molecule ([Fig. 9A](#F9){ref-type="fig"} and [10D](#F10){ref-type="fig"}), and viral DNA derived from pgRNA is the dominant DNA form ([Fig. 2D](#F2){ref-type="fig"} and [E](#F2){ref-type="fig"} and data not shown). For the same reason, it would be difficult for us to estimate the amount of spliced HBV RNAs in clinical samples. Although we cannot completely rule out the possibility that HBV RNAs are released into the blood circulation in association with other vehicles or via other pathways, it is possible that the spliced HBV RNAs also egress from cells in naked capsids and virions, as pgRNA does.
In summary, we demonstrated that extracellular HBV RNA molecules are pgRNA and degraded pgRNA fragments generated during the HBV replication process *in vitro*. Moreover, we provided evidence that HBV RNAs exist in the form of CACs in the blood circulation of hepatitis B patients. More importantly, the association of circulating HBV RNAs with CACs or virions in hepatitis B patients suggests their pgRNA origin. Hence, our results suggest that the circulating HBV RNAs within CACs or virions in hepatitis B patients could serve as novel biomarkers to assess the efficacy of treatment.

MATERIALS AND METHODS {#s4}
=====================

Cell culture. {#s4.1}
-------------

HepAD38 cells, which replicate HBV in a tetracycline-repressible manner, were maintained in Dulbecco's modified Eagle's medium (DMEM)-F12 medium supplemented with 10% fetal bovine serum, and doxycycline was withdrawn to allow virus replication ([@B31]).

Patients and samples. {#s4.2}
---------------------

Serum samples from 45 chronic hepatitis B patients with HBV DNA titers higher than 10^7^ IU per ml were randomly selected. Detailed medical records of these patients are included in [Table 1](#T1){ref-type="table"}.

###### Medical records of hepatitis B patients used in this research[^*a*^](#T1F1){ref-type="table-fn"}

  Patient no.   Sex   Age (yr)   HBV DNA titer (IU/ml)   HBeAg (IU/ml)   HBsAg (IU/ml)   ALT (IU/liter)   SS DNA result
  ------------- ----- ---------- ----------------------- --------------- --------------- ---------------- ---------------
  0   NA   NA   2.67E+06   4,932   396   \+
  1   M   54   1.24E+07   25   \>250   69   \+
  2   F   32   1.20E+07   1,067   69,384   38   \+
  3   F   21   1.36E+07   1,712   200   149   \+
  4   M   33   \>5.00E+07   4,812   113,933   133   \+
  5   NA   NA   1.25E+07   3,423   33   −
  6   M   26   1.17E+07   545   2,759   22   −
  7   M   36   1.77E+07   4,332   19,541   136   \+
  8   M   35   \>5.00E+07   1,199   \>250   104   \+
  9   M   26   2.20E+07   \>250   143   −
  10   M   30   \>5.00E+07   2   4,265   123   −
  11   F   23   \>5.00E+07   20   5,757   120   \+
  12   M   37   2.07E+07   2,315   16,128   177   \+
  13   M   28   \>5.00E+07   3,495   60,676   58   NA
  14   F   28   \>5.00E+07   16,515   89,575   78   \+
  15   M   37   1.62E+07   574   +, ND   112   \+
  16   M   NA   \>5.00E+07   1,601   \>250   22   NA
  17   M   15   2.28E+07   2,038   32,739   180   \+
  18   M   41   2.71E+07   694   \>250   313   \+
  19   M   34   2.35E+07   80   32,514   148   \+
  20   F   44   \>5.00E+07   1,596   4,306   172   −
  21   M   NA   3.48E+07   107   \>250   103   \+
  22   NA   NA   \>5.00E+07   2024   45,873   147   \+
  23   M   20   1.32E+07   13,411   12,387   344   \+
  24   M   48   \>5.00E+07   5,511   76,914   33   −
  25   M   NA   3.15E+07   15,984   366   −
  26   M   31   4.16E+07   10,251   50,469   442   \+
  27   M   60   1.35E+07   749   \>250   105   \+
  28   F   41   \>5.00E+07   4,173   \>52,000   194   \+
  29   NA   NA   \>5.00E+07   4,233   49,125   39   \+
  30   M   29   1.42E+07   25   5,800   940   \+
  31   M   27   2.34E+07   1,117   22,412   129   \+
  32   M   37   2.65E+07   70   109   NA
  33   NA   NA   2.03E+07   4,902   111   \+
  34   M   32   \>5.00E+07   993   43,582   249   \+
  35   NA   NA   2.94E+07   4,641   93,336   12   \+
  36   NA   NA   \>5.00E+07   10,956   2,496   108   \+
  37   F   43   \>5.00E+07   1,021   \>250   74   \+
  38   F   28   \>5.00E+07   215   446   26   \+
  39   M   31   \>5.00E+07   +, ND   38,165   194   \+
  40   NA   NA   \>5.00E+07   25   \>250   69   \+
  41   M   26   1.52E+07   +, ND   +, ND   95   \+
  42   M   25   \>5.00E+07   6,300   43,151   373   \+
  43   M   22   \>5.00E+07   3,844   23,620   329   \+
  44   M   27   1.36E+07   1,185   11,106   149   \+
  45   M   44   1.28E+07   663   23,330   425   −
  46   F   29   \>5.00E+07   +, ND   +, ND   667   \+

NA, not available;
ND, not determined; M, male; F, female; sera from patients 0 and 46 were not included with sera from other patients for SS DNA screening. The plasma sample was the plasma exchange product obtained from an HBeAg-negative hepatitis B patient (patient 0) (HBV genotype B with A1762T, G1764A, and G1869A mutations) who died of fulminant hepatitis as a consequence of reactivation of hepatitis B ([Table 1](#T1){ref-type="table"}).

Ethics statement. {#s4.3}
-----------------

All samples from HBV-infected patients used in this study were from an already-existing collection supported by the National Science and Technology Major Project of China (grant no. 2012ZX10002007-001). Written informed consent was received from participants prior to collection of clinical samples ([@B72]). Samples used in this study were anonymized before analysis. This study was conducted in compliance with the ethical guidelines of the 1975 Declaration of Helsinki and was approved by the ethics committee of the Shanghai Public Health Clinical Center.

Preparation of viral particles. {#s4.4}
-------------------------------

HepAD38 cell culture supernatant was mixed with polyethylene glycol 8000 (PEG 8000) to a final concentration of 10% (wt/vol) and incubated on ice for at least 1 h, followed by centrifugation at 925 × *g* for 20 min. Pellets were suspended in TNE buffer (10 mM Tris-Cl \[pH 7.5\], 100 mM NaCl, and 1 mM EDTA) containing 0.05% β-mercaptoethanol at 1/150 of the original volume, followed by brief sonication ([@B73], [@B74]). Alternatively, viral particles in HepAD38 cell culture supernatant were concentrated 50- to 100-fold by ultrafiltration using a filter unit (Amicon Ultra-15, 100 kDa). Plasma samples from patient 0 were centrifuged through a 20% (wt/vol) sucrose cushion at 26,000 rpm for 16 h in an SW 32 Ti rotor (Beckman), and pellets were resuspended in 1/200 of the original volume of TNE buffer and sonicated briefly ([@B75]).
Samples prepared using the methods described above were either used immediately or aliquoted and stored at −80°C for later use.

Sucrose density gradient centrifugation. {#s4.5}
----------------------------------------

HepAD38 cell culture supernatant concentrated with PEG 8000 was centrifuged at 500 × *g* for 5 min to remove aggregates. Ten percent, 20%, 30%, 40%, 50%, and 60% (wt/wt) sucrose gradients were prepared by underlayering and incubated for 4 h in a water bath at room temperature to allow the gradient to become continuous. Five hundred microliters of concentrated sample was layered over the gradient and centrifuged at 34,100 rpm for 14 h at 4°C in a Beckman SW 41 Ti rotor. Fractions were collected from top to bottom, and the density of each fraction was determined by refractometry ([@B10]). Fractions containing viral particles were subjected to native agarose gel analysis, and the HBsAg level was determined by enzyme-linked immunosorbent assay (ELISA) (Shanghai Kehua).

Cesium chloride density gradient centrifugation. {#s4.6}
------------------------------------------------

HepAD38 cell culture supernatant (1.5 ml) concentrated by ultrafiltration, or serum samples from chronic hepatitis patients diluted with TNE buffer to 1.5 ml, were mixed with an equal volume of 37% (wt/wt) CsCl-TNE buffer (1.377 g/cm^3^) and underlayered with 1.9 ml of 34% (wt/wt) CsCl-TNE buffer (1.336 g/cm^3^), followed by centrifugation at 90,000 rpm at 4°C for 12 h (Beckman VTi 90 rotor) ([@B8]). The tube was punctured from the bottom, and every six to seven drops were collected as one fraction. The density of each fraction was determined by weighing. Each fraction was then desalted against TNE buffer by ultrafiltration, followed by native agarose gel separation or nucleic acid extraction. All CsCl density gradient centrifugation experiments were carried out at 90,000 rpm at 4°C for 12 h in a Beckman VTi 90 rotor.
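Fraction densities in the gradients above are determined either by refractometry (sucrose) or by weighing a known volume (CsCl). As a minimal sketch of both conversions in Python — note that the refractive-index coefficients are the commonly used 25°C empirical fit for CsCl from the literature, an assumption on our part rather than values given in this paper:

```python
def density_by_weighing(mass_g: float, volume_ml: float) -> float:
    """Density (g/cm^3) of a gradient fraction from the mass of a known volume."""
    return mass_g / volume_ml

def cscl_density_from_nd(n_d: float) -> float:
    """Approximate density (g/cm^3) of a CsCl solution at 25 degrees C from its
    refractive index n_D, using the standard empirical linear fit (assumed here;
    valid only over the usual gradient range of roughly 1.0 to 1.9 g/cm^3)."""
    return 10.8601 * n_d - 13.4974

# A 1-ml aliquot of the 34% (wt/wt) CsCl underlayer weighing 1.336 g:
print(density_by_weighing(1.336, 1.0))  # 1.336 g/cm^3, matching the stated density
```

In practice each collected fraction would be weighed (or read on the refractometer) and the resulting density plotted against fraction number to locate the virion and naked-capsid peaks.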
Native agarose gel analysis of viral particles and capsid-associated DNA. {#s4.7}
-------------------------------------------------------------------------

Viral particles were resolved by native agarose gel (0.8% agarose gel prepared in Tris-acetate-EDTA \[TAE\] buffer) electrophoresis and transferred in TNE buffer to either a nitrocellulose membrane (0.45 μm) for detection of viral antigens with specific antibodies or a nylon membrane for Southern blot analysis of viral DNA. For viral antigen detection, the membrane was first fixed as previously described ([@B74]), and HBV core antigen was detected with anti-HBcAg antibody (Dako) (1:5,000). The same membrane was then soaked in stripping buffer (200 mM glycine, 0.1% SDS, 1% Tween 20, pH 2.2) and reprobed with anti-HBsAg antibody (Shanghai Kehua) (1:5,000). For Southern blot analysis of viral DNA, the membrane was dipped in denaturing buffer (0.5 N NaOH, 1.5 M NaCl) for 10 s and immediately neutralized in 1 M Tris-Cl (pH 7.0)--1.5 M NaCl for 1 min, followed by hybridization with a minus-strand-specific riboprobe ([@B76]).

Viral nucleic acid extraction, separation, and detection. {#s4.8}
---------------------------------------------------------

**(I) Nucleic acid extraction.** To extract total viral nucleic acids (DNA and RNA), the SDS-proteinase K method was used ([@B77]). Samples were digested in a solution containing 1% SDS, 15 mM EDTA, and 0.5 mg/ml proteinase K at 37°C for 15 min. The digestion mixture was extracted twice with phenol and once with chloroform. The aqueous supernatant was mixed with 1/9 volume of 3 M sodium acetate (pH 5.2) and 40 μg of glycogen and precipitated with 2.5 volumes of ethanol. In addition to the SDS-proteinase K method, viral RNA was also extracted with TRIzol LS reagent according to the manufacturer's instructions (Thermo Fisher Scientific).
To isolate intracellular capsid-associated viral RNA, HepAD38 cells were lysed in NP-40 lysis buffer (50 mM Tris-Cl \[pH 7.8\], 1 mM EDTA, 1% NP-40), and cytoplasmic lysates were incubated with CaCl~2~ (final concentration, 5 mM) and micrococcal nuclease (MNase) (Roche) (final concentration, 15 U/ml) at 37°C for 1 h to remove nucleic acids outside nucleocapsids. The reaction was terminated by addition of EDTA (final concentration, 15 mM), and then proteinase K (0.5 mg/ml, without SDS) was added to the mixture, followed by incubation at 37°C for 30 min to inactivate MNase. Viral nucleic acids were released by addition of SDS to a final concentration of 1% and extracted as described above.

**(II) Separation. (i) TAE agarose gel.** Viral DNA was resolved by electrophoresis through a 1.5% agarose gel in 1× TAE buffer, followed by denaturation in 0.5 M NaOH--1.5 M NaCl for 30 min and neutralization with 1 M Tris-Cl (pH 7.0)--1.5 M NaCl for 30 min.

**(ii) Alkaline agarose gel.** Viral DNA was denatured with 0.1 volume of a solution containing 0.5 M NaOH and 10 mM EDTA and resolved overnight at 1.5 V/cm in a 1.5% agarose gel containing 50 mM NaOH and 1 mM EDTA. After electrophoresis, the gel was neutralized with 1 M Tris-Cl (pH 7.0)--1.5 M NaCl for 45 min ([@B78]).

**(iii) Formaldehyde-MOPS agarose gel.** Viral RNA was obtained by treating total nucleic acids extracted by the SDS-proteinase K method described above with RNase-free DNase I (Roche) for 15 min at 37°C. The reaction was stopped by addition of an equal volume of 2× RNA loading buffer (95% formamide, 0.025% SDS, 0.025% bromophenol blue, 0.025% xylene cyanol FF, and 1 mM EDTA) supplemented with extra EDTA (20 mM), followed by denaturation at 65°C for 10 min. Viral RNA extracted with TRIzol LS reagent was mixed with 2× RNA loading buffer and denatured.
Denatured mixtures were separated by electrophoresis through a 1.5% agarose gel containing 2% (vol/vol) formaldehyde solution (37%) and 1× MOPS (3-\[N-morpholino\]propanesulfonic acid) buffer. The gels described above were equilibrated in 20× SSC solution (1× SSC is 0.15 M NaCl and 0.015 M sodium citrate, pH 7.0) for 20 min, and viral nucleic acids were transferred onto nylon membranes overnight with 20× SSC buffer.

III. Detection. {#s4.10}
---------------

Digoxigenin-labeled riboprobes used for detection of HBV DNA and RNA were prepared by *in vitro* transcription of a pcDNA3 plasmid that harbors 3,215 bp of HBV DNA (nt 1814 to 1813), following the vendor's suggestions (12039672910; Roche). Riboprobes used for HBV RNA mapping were transcribed from DNA templates generated by PCR by incorporating a T7 promoter into the 5′ ends of the reverse primers ([Fig. 9A](#F9){ref-type="fig"}). Hybridization was carried out at 50°C overnight, followed by two 5-min washes in 2× SSC--0.1% SDS at room temperature and two additional 15-min washes in 0.1× SSC--0.1% SDS at 50°C. The membrane was sequentially incubated with blocking buffer and anti-digoxigenin-AP Fab fragment (Roche) at 20°C for 30 min. Subsequently, the membrane was washed twice with washing buffer (100 mM maleic acid, 150 mM NaCl, and 0.3% Tween 20, pH 7.5) for 15 min, followed by detection with diluted CDP-Star substrate (ABI) and exposure to X-ray film.

Protein A/G agarose bead pulldown of antibody-antigen complexes. {#s4.11}
----------------------------------------------------------------

Two hundred microliters of serum sample was first mixed with 300 μl of TNE buffer, and then 15 μl of protein A/G agarose bead slurry (Santa Cruz) was added to the mixture, followed by incubation overnight at 4°C in a sample mixer.
Subsequently, protein A/G agarose beads were washed three times with TNE buffer, and viral DNA in the input serum samples (40 μl) and in the agarose bead pulldown mixtures was extracted and subjected to Southern blot analysis.

EM. {#s4.12}
---

Serum samples from patients 11, 17, 21, 22, 23, 27, 28, 30, and 41 were pooled (200 μl each) and mixed with 200 μl of 20% (wt/wt) sucrose. Serum mixtures were centrifuged through 2 ml of 20% (wt/wt) and 2 ml of 45% (wt/wt) (1.203 g/cm^3^) sucrose cushions at 34,100 rpm for 8 h at 4°C in an SW 41 Ti rotor (Beckman) to remove HBsAg particles. Supernatants were decanted, the centrifugation tube was placed upside down for 20 s, and residual sucrose was wiped away. One milliliter of phosphate buffer (10 mM Na~2~HPO~4~, 1.8 mM KH~2~PO~4~, and no NaCl) (pH 7.4) was added, and the bottom of the tube was gently washed without disturbing the pellet. A volume of 11.5 ml of phosphate buffer was then added to the tube, which was centrifuged again at 34,100 rpm for 3 h at 4°C. The pellet was resuspended in a drop of distilled water, dropped onto a carbon-coated copper grid, stained with 2% phosphotungstic acid (pH 6.1), and examined in an electron microscope (Philips CM120) ([@B13], [@B79]).

Viral DNA and RNA quantification. {#s4.13}
---------------------------------

Viral DNA used for quantification was extracted using the SDS-proteinase K method as described above. Viral RNAs were extracted with TRIzol LS reagent, and DNase I was used to remove remaining DNA, followed by phenol and chloroform extraction and ethanol precipitation. Reverse transcription was carried out using Maxima H minus reverse transcriptase (Thermo Fisher Scientific) with a specific primer (AGATCTTCKGCGACGCGG \[nt 2428 to 2411\]) according to the manufacturer's guidelines, except that the 65°C incubation step was skipped to avoid RNA degradation.
To ensure removal of the viral DNA signal (below 1,000 copies per reaction), a mock reverse transcription without addition of reverse transcriptase was carried out. Quantitative real-time PCR (qPCR) was carried out using Thunderbird SYBR qPCR mix (Toyobo) in a StepOnePlus real-time PCR system (ABI). A primer pair (F, GGRGTGTGGATTCGCAC \[nt 2267 to 2283\]; R, AGATCTTCKGCGACGCGG \[nt 2428 to 2411\]) conserved among all HBV genotypes and close to the 5′ end but not in the overlap region between the start codon and the poly(A) cleavage site of pgRNA was chosen. The cycling conditions were 95°C for 5 min, followed by 40 cycles of 95°C for 5 s, 57°C for 20 s, and 72°C for 30 s. A DNA fragment containing the 3,215-bp full-length HBV DNA was released from the plasmid by restriction digestion, and DNA standards were prepared according to a formula in which 1 pg of DNA equals 3 × 10^5^ copies of viral DNA.

EPA. {#s4.14}
----

HepAD38 cell culture supernatant or plasma from patient 0 was concentrated as described above and mixed with an equal volume of 2× EPA buffer (100 mM Tris-Cl, pH 7.5, 80 mM NH~4~Cl, 40 mM MgCl~2~, 2% NP-40, and 0.6% β-mercaptoethanol) with or without dNTPs (dATP, dCTP, dGTP, and dTTP, each at a final concentration of 100 μM) ([@B80]). The reaction mixtures were incubated at 37°C for 2 h, and the reaction was stopped by addition of EDTA to a final concentration of 15 mM.

3′ RACE. {#s4.15}
--------

Concentrated HepAD38 cell culture supernatant (by ultrafiltration) was digested with MNase in the presence of NP-40 (final concentration, 1%) for 30 min at 37°C. EDTA (final concentration, 15 mM) and proteinase K (final concentration, 0.5 mg/ml) were then added and incubated for another 30 min at 37°C. Viral nucleic acids were extracted with TRIzol LS reagent, followed by DNase I treatment to remove residual viral DNA. Poly(A) tails were added to the 3′ ends of HBV RNAs by *E. coli* poly(A) polymerase (NEB).
The preincubation step at 65°C for 5 min was omitted to reduce potential RNA degradation, and reverse transcription was carried out with Maxima H minus reverse transcriptase (Thermo Scientific) using an oligo-dT(29)-SfiI(A)-adaptor primer (5′-AAGCAGTGGTATCAACGCAGAGTGGCCATTACGGCCTTTTTTTTTTTTTTTTTTTTTTTTTTTTT-3′) in reverse transcription buffer \[1× RT buffer, RNase inhibitor, 1 M betaine, 0.5 mM each dNTP, and 5 μM oligo-dT(29)-SfiI(A)-adaptor primer\] at 50°C for 90 min, followed by heating at 85°C for 5 min and treatment with RNase H at 37°C for 15 min. PCR amplification of cDNA fragments was then performed with 5′ HBV-specific primers \[the same sequences as the forward primers used for riboprobe preparation ([Fig. 9A](#F9){ref-type="fig"}), except that each primer contained a flanking sequence plus a SfiI(B) site (5′-AGTGATGGCCGAGGCGGCC-3′)\] and a 3′ adaptor primer (5′-AAGCAGTGGTATCAACGCAGAGTG-3′). The reaction was carried out with PrimeSTAR HS DNA polymerase (TaKaRa) at 95°C for 5 min, followed by 5 cycles of 98°C for 5 s, 50°C for 10 s, and 72°C for 210 s, 35 cycles of 98°C for 5 s, 55°C for 10 s, and 72°C for 210 s, and a final extension step at 72°C for 10 min. PCR amplicons were digested with SfiI enzyme and cloned into the pV1-Blasticidin vector (kind gift from Zhigang Yi, Shanghai Medical College, Fudan University). Positive clones were identified by sequencing, and only clones with a 3′ poly(dA) sequence were considered authentic viral RNA 3′ ends.

We thank Zhuying Chen and Xiurong Peng for handling serum samples and compiling the clinical data used in this research. This research was supported by the National Natural Science Foundation of China (NSFC) (81671998, 91542207), National Key Research and Development Program (2016YFC0100604), National Science and Technology Major Project of China (2017ZX10302201001005), Shanghai Science and Technology Commission (16411960100), and Innovation Program of Shanghai Municipal Education Commission (2017-01-07-00-07-E00057).
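The qPCR copy-number standard described above (1 pg of the 3,215-bp full-length HBV DNA ≈ 3 × 10^5^ copies) follows directly from the average molar mass of a DNA base pair. A quick Python check — the 650 g/mol-per-bp figure is the usual textbook average for double-stranded DNA, an assumption on our part rather than a value stated in this paper:

```python
AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # average molar mass of one dsDNA base pair, g/mol (assumed)

def copies_per_pg(length_bp: int) -> float:
    """Number of copies of a double-stranded DNA fragment contained in 1 pg."""
    grams_per_mole = length_bp * BP_MASS      # molar mass of the whole fragment
    moles_in_1_pg = 1e-12 / grams_per_mole    # 1 pg expressed in moles
    return moles_in_1_pg * AVOGADRO

# 3,215-bp full-length HBV DNA works out to roughly 2.9e5 copies per pg,
# consistent with the 3 x 10^5 figure used for the standards.
print(f"{copies_per_pg(3215):.2e}")
```

A dilution series of the restriction-released fragment at known picogram amounts can then be converted to copy numbers with this function to build the qPCR standard curve.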
[^1]: **Citation** Bai L, Zhang X, Kozlowski M, Li W, Wu M, Liu J, Chen L, Zhang J, Huang Y, Yuan Z. 2018. Extracellular hepatitis B virus RNAs are heterogeneous in length and circulate as capsid-antibody complexes in addition to virions in chronic hepatitis B patients. J Virol 92:e00798-18. <https://doi.org/10.1128/JVI.00798-18>.
Dietary sodium chloride intake independently predicts the degree of hyperchloremic metabolic acidosis in healthy humans consuming a net acid-producing diet. We previously demonstrated that typical American net acid-producing diets predict a low-grade metabolic acidosis of severity proportional to the diet net acid load as indexed by the steady-state renal net acid excretion rate (NAE). We now investigate whether a sodium (Na) chloride (Cl)-containing diet likewise associates with a low-grade metabolic acidosis of severity proportional to the sodium chloride content of the diet as indexed by the steady-state Na and Cl excretion rates. In the steady-state preintervention periods of our previously reported studies comprising 77 healthy subjects, we averaged in each subject three to six values of blood hydrogen ion concentration ([H]b), plasma bicarbonate concentration ([HCO(3)(-)]p), the partial pressure of carbon dioxide (Pco(2)), the urinary excretion rates of Na, Cl, and NAE, and renal function as measured by creatinine clearance (CrCl), and performed multivariate analyses. Dietary Cl correlated strongly and positively with dietary Na (P < 0.001) and was an independent negative predictor of [HCO(3)(-)]p after adjustment for diet net acid load, Pco(2), and CrCl, and a positive and a negative predictor, respectively, of [H]b and [HCO(3)(-)]p after adjustment for diet acid load and Pco(2). These data provide the first evidence that, in healthy humans, the diet loads of NaCl and net acid independently predict systemic acid-base status, with increasing degrees of low-grade hyperchloremic metabolic acidosis as the loads increase. Assuming a causal relationship, over their respective ranges of variation, NaCl has approximately 50-100% of the acidosis-producing effect of the diet net acid load.
Stefan Priebe

Stefan Priebe is a psychologist and psychiatrist of German and British nationality. He grew up in West Berlin, studied in Hamburg, and was Head of the Department of Social Psychiatry at the Free University Berlin until 1997. He is Professor of Social and Community Psychiatry at Queen Mary, University of London, and Director of a World Health Organization collaborating centre, the only one specifically for Mental Health Services Development. He heads a research group in social psychiatry and has published more than 600 peer-reviewed scientific papers.

References

External links

Category:1953 births Category:Living people Category:Place of birth missing (living people) Category:German psychologists Category:German psychiatrists Category:British psychologists Category:British psychiatrists Category:Free University of Berlin faculty Category:Academics of Queen Mary University of London Category:People from Berlin
1. Field of the Invention

The present invention relates particularly to an optical coherence tomography apparatus including an interference optical system used in the medical field, an optical coherence tomography method, an ophthalmic apparatus, a method of controlling the ophthalmic apparatus, and a storage medium.

2. Description of the Related Art

Currently, various types of ophthalmic apparatuses using optical devices are in use. Such apparatuses include, for example, an anterior ocular segment imaging apparatus, a fundus camera, and a scanning laser ophthalmoscope (SLO). Among them, an optical coherence tomography (OCT) apparatus (to be referred to as an "OCT apparatus" hereinafter) is an apparatus capable of obtaining a high-resolution tomogram of an object to be examined, and it has become indispensable in dedicated retinal outpatient clinics. For example, the OCT apparatus disclosed in Japanese Patent Laid-Open No. 11-325849 uses low-coherent light as a light source. Light from the light source is split into measurement light and reference light through a splitting optical path such as a beam splitter. Measurement light irradiates an object to be examined, such as the eye, through a measurement light path, and its return light is guided to a detection position through a detection light path. Note that return light is reflected or scattered light containing information associated with an interface relative to the irradiation direction of light on the object. Reference light, on the other hand, is guided to the detection position through a reference light path by being reflected by a reference mirror or the like. It is possible to obtain a tomogram of the object to be examined by causing interference between this return light and the reference light, collectively acquiring wavelength spectra by using a spectrometer or the like, and performing a Fourier transform of the acquired spectra.
An OCT apparatus which collectively measures wavelength spectra is generally called a spectral domain OCT apparatus (SD-OCT apparatus). In an SD-OCT apparatus, a measurement depth Lmax is represented, as an optical distance, by the pixel count N of the image sensor of a spectrometer and the spectral width ΔK of the frequency range detected by the spectrometer according to equation (1). Note that the spectral width ΔK is represented by a maximum wavelength λmax and a minimum wavelength λmin. The pixel count N is often an even number, and is generally a power of 2, that is, 1024 or 2048.

Lmax = ±N / (4ΔK), where ΔK = 1/λmin − 1/λmax   (1)

If, for example, a central wavelength of 840 nm, a band of 50 nm, and a pixel count of 1024 are set, λmax = 840 + 50/2 = 865 nm, λmin = 840 − 50/2 = 815 nm, and N = 1024. In this case, the optical distance Lmax = 3.6 mm. That is, it is possible to perform measurement up to about 3.6 mm on the plus side relative to the coherence gate. The coherence gate is the point at which the optical distance of the reference light path coincides with that of the measurement light path. When a desired region (a distance in the depth direction) is sufficiently smaller than 3.6 mm (for example, 1 mm or less), the measurement depth can be reduced by decreasing the pixel count of the spectrometer. Decreasing the pixel count is important for speeding up processing and reducing the data amount, because measuring a three-dimensional image of the retina takes much measurement time and produces a large amount of data. When the object to be examined is a moving object like the eye, in particular, it is required to further shorten the measurement time. On the other hand, changing the pixel count of a spectrometer is equivalent to changing the resolution of the spectrometer. A problem in this case will be described with reference to FIG. 1. FIG.
1 is a graph obtained by plotting, for each spectrometer resolution, the light intensity measurement results obtained when the position of the coherence gate is moved while a mirror is located at the position of the object to be examined. The ordinate corresponds to the light intensity, and the abscissa to the distance. With an increase in distance from the coherence gate, light intensity attenuation called roll-off occurs. The degree of attenuation of a light intensity Int mainly depends on the resolution of the spectrometer and the pixel count of the image sensor. Letting x be a distance variable and α be a coefficient proportional to the resolution of the spectrometer, the degree of attenuation is proportional to a sinc function given by

Int ∝ sin(2πxα) / (πx)   (2)

As is obvious from FIG. 1, as the value indicating the resolution increases (from 0.1 nm to 0.2 nm, 0.5 nm, and 1.0 nm), the cycle in which the plotted points approach zero is shortened. As described above, images formed from spectrum data from spectrometers having different resolutions differ in light intensity in the depth direction. Differences in light intensity are differences in image contrast, which makes images of the same region look different. That is, spectrometers having different resolutions yield images that look different. In consideration of the above problems, the present invention provides a technique of correcting the contrast differences between images which are caused when wavelength resolutions differ (spectrometers differ in resolution in the case of an SD-OCT) in an FD-OCT apparatus such as an SD-OCT apparatus.
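The worked example for equation (1) can be reproduced numerically. A small Python sketch (variable names are ours, not from the patent):

```python
def delta_k(center_nm: float, band_nm: float) -> float:
    """Spectral width Delta-K (in nm^-1) from center wavelength and bandwidth."""
    lam_max = center_nm + band_nm / 2.0
    lam_min = center_nm - band_nm / 2.0
    return 1.0 / lam_min - 1.0 / lam_max

def l_max_mm(center_nm: float, band_nm: float, pixels: int) -> float:
    """One-sided SD-OCT measurement depth per equation (1), converted nm -> mm."""
    return pixels / (4.0 * delta_k(center_nm, band_nm)) * 1e-6

# Central wavelength 840 nm, band 50 nm, 1024 pixels -> about 3.6 mm,
# matching the value computed in the text.
print(round(l_max_mm(840.0, 50.0, 1024), 1))  # 3.6
```

Halving the pixel count to 512 halves Lmax, which is exactly the trade-off the passage describes between measurement depth and processing speed.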
cask "font-cormorant-sc" do
  version :latest
  sha256 :no_check

  # github.com/google/fonts/ was verified as official when first introduced to the cask
  url "https://github.com/google/fonts/trunk/ofl/cormorantsc",
      using:      :svn,
      trust_cert: true
  name "Cormorant SC"
  homepage "https://fonts.google.com/specimen/Cormorant+SC"

  font "CormorantSC-Bold.ttf"
  font "CormorantSC-Light.ttf"
  font "CormorantSC-Medium.ttf"
  font "CormorantSC-Regular.ttf"
  font "CormorantSC-SemiBold.ttf"
end
Molly Henderson

Molly Henderson (born September 14, 1953) is a former Commissioner of Lancaster County, Pennsylvania. The Commissioners are the chief executive and legislative officials of the County, which has 500,000 residents and an annual County budget of $300 million. Henderson was elected in 2003 to a four-year term and was the lone Democrat on the Board of Commissioners in a County where Republicans outnumber Democrats two to one. Henderson was previously Head of Public Health for the City of Lancaster, Pennsylvania, the County seat. Henderson was not re-elected as Lancaster County Commissioner on November 7, 2007, and was succeeded by Craig Lehman as the minority Commissioner.

Other careers

She is a former high school and college teacher, holding a doctorate from Temple University, a master's degree from West Chester University, and a B.S. from James Madison University. Henderson is also a Respiratory Therapist and worked at Lancaster General Hospital prior to her teaching and government careers. Henderson's book Pressed: Public Money, Private Profit - A Cautionary Tale tells the story of the development, building, and financing of the Lancaster County Convention Center and Marriott Hotel in downtown Lancaster. The highly controversial "convention center project," as it was known to those in Lancaster County (pop. 510,000), was originally proposed in 1999 as a $75 million "public-private" partnership. The project included a publicly owned convention center ($30 million) and a privately owned hotel ($45 million). By the time the convention center and hotel opened in 2009, the project's cost had ballooned to more than $170 million, with more than 90% of the total cost of both the convention center and hotel borne by Pennsylvania taxpayers.
Political views

Henderson is a notable opponent of the Lancaster County Convention Center Authority's controversial $170 million hotel/convention center in downtown Lancaster, on the site of the former Watt & Shand building. The project's supporters believe it will promote the revitalization of the city's center; its opponents feel it poses an unacceptable risk to taxpayers. The hotel portion of the project is owned 50% by Lancaster Newspapers, Inc., which has been accused of using its monopoly print position in the County to promote the project and stifle opposition. Henderson has been referenced in more than 2,200 newspaper articles, over 700 of which concern the Lancaster County Convention Center project, many of them attacking her position.

Personal life

Henderson is married to Alex Henderson and has two children, Alexander "Ander" Henderson and Leslie Henderson.

See also

Lancaster County Lancaster City Lancaster Newspapers

References

External links

Official Lancaster County Site Campaign Site Category:1953 births Category:Living people Category:County commissioners in Pennsylvania Category:Temple University alumni Category:Politicians from Lancaster, Pennsylvania Category:People from Cumberland, Maryland Category:West Chester University alumni Category:James Madison University alumni Category:Women in Pennsylvania politics Category:Pennsylvania Democrats
I got a wake up call, I got to make this work
Cause if we don't we're left with nothing and that's what hurts
We're so close to giving up but something keeps us here

I can't see what's yet to come
But I have imagined life without you and it feels wrong
I want to know where love begins, not where it ends

Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind

We want it all and deserve no less
But all we seem to give each other is second best
We're still reaching out for something that we can't touch

Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind

You know there's nothing like this love
So we don't want to let it go

Cause we don't know what we're doing
We're just built this way
We're careless but we're trying
Cause we both make mistakes
And I don't want to keep on running
If we're only gonna fall behind
We've almost got it right
But almost wasn't what I had in mind
The EU is a political system with a unique structure and functioning, incomparable to anything that has existed before and far removed from any classical model, whether national or international. In such a supranational union, which is neither a pure intergovernmental organization nor a true federal state, political institutions appear vague, somewhat obscure, and indistinguishable. Are Iran and Saudi Arabia going to war? They are already fighting, by proxy, all over the region. Relations between Saudi Arabia and Iran deteriorated quickly in January 2016 following Riyadh's execution of Shiite cleric Nimr al-Nimr, but their struggle for power dates back to Iran's Islamic Revolution in 1979. Tehran's influence today extends across a broad area of the Middle East, from Iran in the east to Lebanon in the west. UNESCO's Director-General, Irina Bokova, and the Italian Minister for Foreign Affairs, Paolo Gentiloni, signed an agreement in Rome in February 2016 on the establishment of a Task Force of cultural heritage experts in the framework of UNESCO's global coalition "Unite for Heritage". Under the agreement, UNESCO will be able to ask the Italian Government to make experts of the Task Force available for deployment for the conservation of cultural heritage in areas affected by crises. In October 2016 John Sawers, a former MI6 chief, told the BBC that the world was entering an era possibly "more dangerous" than the Cold War, as "we do not have that focus on a strategic relationship between Moscow and Washington". Lt. Gen. Eugeny Buzhinsky, head of the PIR Centre, a Moscow think tank, maintained: "If we talk about the last Cold War, we are currently somewhere between the erection of the Berlin Wall and the Cuban Missile Crisis but without the mechanisms to manage the confrontation".
Chronic energy deficiency and its association with dietary factors in adults of drought-affected desert areas of Western Rajasthan, India. To assess the impact of drought on the nutritional status of adults of a rural population in a desert area. Three-stage sampling technique. 24 villages belonging to 6 tehsils (sub-units of district) of Jodhpur district, a drought-affected desert district of Western Rajasthan, in 2003. 1540 adults were examined for their anthropometry, dietary intake and nutritional deficiency signs. Overall chronic energy deficiency (CED) was found to be high (42.7%). Severe CED was 10.7 percent, significantly higher in males than females. Regarding vitamin A deficiency, the overall prevalence of Bitot spots and night blindness was 1.8 and 0.2 percent respectively, higher in females than males. Regarding vitamin B complex deficiency, the prevalence of angular stomatitis, cheilosis and glossitis was 1.0, 2.6 and 5.4 percent. Anemia was 35.6 percent. The overall mean calorie and protein intake deficit was very high (38 and 16.4%). Comparison of the present drought results with earlier studies in desert normal and desert drought conditions showed higher deficiencies of calories and proteins in the diet. The severity of malnutrition is critical, as CED was above the cut-off point of 40 percent stated by the World Health Organization. Vitamin A and B complex deficiencies, anemia and protein-calorie malnutrition, along with deficits of calories and proteins in the diet, were higher in comparison to non-desert areas, which may be due to the harsh environmental conditions in desert areas. Efforts should be made to incorporate intervention measures to ensure the supply of adequate calories and proteins to all age groups.
CIBC Poll: Nearly half of all Canadians with debt not making progress in paying it down

Many say they simply don't have the money, but may be missing opportunities to get advice about how to reduce their debt

TORONTO, June 5, 2013 /CNW/ - A new CIBC (TSX: CM) (NYSE: CM) poll conducted by Harris/Decima reveals that half of Canadians with debt say their debt level is the same or higher than it was a year ago, despite prior CIBC polls showing debt repayment as the top priority for Canadians in 2013.

Highlights of the poll include:

- 71 per cent of Canadians said they currently carry some form of debt, in line with the national average in a similar poll conducted last year (72 per cent)
- Among Canadians with debt, 21 per cent say their level of debt has increased in the last 12 months, while another 28 per cent say their debt level has stayed the same - which indicates nearly half (49 per cent) of Canadians with debt did not make progress towards paying it down in the past year
- The top reason cited for not making progress on debt reduction was not having the money to do so
- 50 per cent said they have reduced their debt in the last year

"Though Canadians have identified paying down debt as their top financial priority for the past three years, our poll shows almost an even split between those who are making strides and those who aren't," said Christina Kramer, Executive Vice President, Retail Distribution and Channel Strategy, CIBC. "Today's historically low interest rates represent a real opportunity to reduce your total debt level, however to take advantage of these low rates it is critical that Canadians have a plan to make that happen."

CIBC's annual Financial Priorities Poll, released in January 2013, found that paying down debt was the top financial priority of Canadians for the third consecutive year.
"Not Having the Money" Cited as Top Reason for not Making Progress

Among those Canadians who said they aren't making progress on debt repayment, the top reason provided was they don't have the money to put against what they owe (29 per cent), followed by unplanned expenses which affected their ability to pay more towards their debt (12 per cent).

A CIBC study from earlier this year shows that despite being a financial priority, debt is not top of mind when it comes to getting advice. When Canadians were asked what topics come to mind about a conversation they may have with an advisor, only 6 per cent cited debt.

"It can be challenging to find the money each month to put towards reducing your debt, but our poll clearly shows that many Canadians are doing just that despite having the same everyday financial pressures as those who say they are not making progress," said Ms. Kramer. She noted that with many Canadians avoiding conversations about debt management, they are missing an opportunity to get personalized advice and put a plan in place.

"You should talk with an advisor about your debt management goals the same way you would talk to them about your goals for retirement, because your finances are all connected," added Ms. Kramer. "A conversation with an advisor can lead to a plan that puts you on track to achieve your broader financial goals."

Advice on Managing Debt: CIBC offers these tips to help Canadians take charge of their finances and reduce debt as part of their long term financial plan.

- Make lump sum payments to higher interest debt first to reduce interest costs
- If you have debt, work with an advisor to structure it to minimize your overall interest costs by utilizing debt products that offer a lower interest rate and having a strategy to pay these balances down in a specific time frame
- While interest rates remain near historic lows, don't ignore the long term benefits of making small adjustments to your payment today.
Setting your debt payment even slightly higher than your required payment can reduce your overall interest costs and help you become debt free faster.

Use free budgeting tools to help you stay on budget - CIBC CreditSmart, available to CIBC credit card holders, allows you to set customized budgets and receive spend alerts if you exceed your planned budget for the month, helping you stay on top of your everyday budgeting and saving.

KEY POLL FINDINGS

Percentage of Canadians currently managing some form of debt, by region:

                           2013   2012
National                    71%    72%
Atlantic Canada             79%    78%
Quebec                      71%    72%
Ontario                     71%    69%
Manitoba and Saskatchewan   73%    77%
Alberta                     69%    75%
B.C.                        64%    71%

Percentage of Canadians currently managing some form of debt, by age:

             2013   2012
National      71%    72%
18-24         59%    51%
25-34         82%    84%
35-44         79%    83%
45-54         78%    78%
55-64         66%    67%
65 + over     56%    56%

Among Canadians with debt, percentage of those that say they have increased their debt over the past 12 months, by region:

National                    21%
Atlantic Canada              8%
Quebec                      24%
Ontario                     23%
Manitoba and Saskatchewan   24%
Alberta                     18%
British Columbia            21%

Among Canadians with debt, percentage of those that say their level of debt has stayed the same over the past 12 months, by region:

National                    28%
Atlantic Canada             32%
Quebec                      33%
Ontario                     26%
Manitoba and Saskatchewan   23%
Alberta                     24%
British Columbia            31%

*Each week, Harris/Decima interviews just over 1000 Canadians through teleVox, the company's national telephone omnibus survey. These data were gathered in samples of 2002 Canadians between March 28 and April 7, 2013 and 1002 Canadians between April 25 - 28, 2013. Samples of this size have a margin of error of +/-2.2%, 19 times out of 20 and +/-3.1%, 19 times out of 20 respectively.

CIBC is a leading North American financial institution with over 11 million personal banking and business clients.
CIBC offers a full range of products and services through its comprehensive electronic banking network, branches and offices across Canada, and has offices in the United States and around the world. You can find other news releases and information about CIBC in our Media Centre on our corporate website at www.cibc.com.
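The margins of error quoted in the survey footnote can be checked directly: for a proportion at 95% confidence ("19 times out of 20"), the worst-case (p = 0.5) margin is 1.96 * sqrt(0.25 / n). A quick sketch (Python used here purely for illustration):

```python
import math

def margin_of_error(n: int, z: float = 1.96) -> float:
    """Worst-case (p = 0.5) margin of error for a simple random
    sample of size n at 95% confidence (z = 1.96)."""
    return z * math.sqrt(0.25 / n)

print(round(100 * margin_of_error(2002), 1))  # 2.2, matching the quoted +/-2.2%
print(round(100 * margin_of_error(1002), 1))  # 3.1, matching the quoted +/-3.1%
```

Both stated figures are consistent with the sample sizes given.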
@comment $NetBSD: PLIST,v 1.5 2017/06/21 08:28:43 markd Exp $
share/texmf-dist/scripts/luaotfload/luaotfload-tool.lua
share/texmf-dist/scripts/luaotfload/mkcharacters
share/texmf-dist/scripts/luaotfload/mkglyphlist
share/texmf-dist/scripts/luaotfload/mkimport
share/texmf-dist/scripts/luaotfload/mkstatus
share/texmf-dist/scripts/luaotfload/mktests
share/texmf-dist/tex/luatex/luaotfload/fontloader-2017-02-11.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics-gen.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics-nod.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-basics.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-data-con.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-afk.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-cff.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-cid.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-con.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-def.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-dsp.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-gbn.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ini.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-lua.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-map.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ocl.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-one.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-onr.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-osd.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ota.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otc.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oti.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otj.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otl.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oto.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-otr.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ots.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-oup.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-tfm.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-font-ttf.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-demo-vf-1.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-enc.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-ext.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts-syn.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-fonts.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-boolean.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-file.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-function.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-io.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-lpeg.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-lua.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-math.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-string.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-l-table.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-languages.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-languages.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-math.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-math.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-mplib.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-mplib.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-plain.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-preprocessor.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-reference.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib-test.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-swiglib.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-test.tex
share/texmf-dist/tex/luatex/luaotfload/fontloader-util-fil.lua
share/texmf-dist/tex/luatex/luaotfload/fontloader-util-str.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-auxiliary.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-blacklist.cnf
share/texmf-dist/tex/luatex/luaotfload/luaotfload-characters.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-colors.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-configuration.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-database.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-diagnostics.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-features.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-glyphlist.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-init.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-letterspace.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-loaders.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-log.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-main.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-parsers.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-resolvers.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload-status.lua
share/texmf-dist/tex/luatex/luaotfload/luaotfload.sty
Ask HN: How to approach two competing job offers - is bidding war an option? - mbord

I studied Computer Science, and I recently graduated as a bachelor. I went on to apply to two major Silicon Valley companies, let's call them A and B, and aced the interviews.

I got an offer from A, which I would have happily accepted had I not had another company still contemplating their offer. Now B contacted me, not yet ready to give an offer, but they mentioned that their offer would likely be significantly larger if they would be able to see the offer from A in writing.

I got my offer from A both verbally and in informal writing to my e-mail. I find it clear that if I asked them for the offer in writing now, they would certainly know what's happening (given that I've kept them waiting for some time now). I told this to B already previously, they understood, but it would certainly benefit me if I had it in writing now.

How should this game be played in your opinion? I actually prefer A, and if B's offer were roughly the same size, I would be very happy to take A. However, I am wondering whether I am a wussy if I play it safe now, and take no action, and should I instead try to get some competition between these two. There's also a small chance that A is trying to lowball me with their offer, since I might be too humble analyzing my own value. All this leads me to think that I might just want to get the offer in writing, not caring what they think about it, but I am very very open to other ideas.

Also, I know that I should probably never try to bluff, and that's my intention, too - I'll never try to inflate my offer if I am not really willing to take the competing one. These both are great companies, and B can become better in my mind if their offer triumphs on the financial side.
====== gvb

_Now B contacted me, not yet ready to give an offer, but they mentioned that their offer would likely be significantly larger if they would be able to see the offer from A in writing._

I see nothing but red flags here. It also sounds like you are already dabbling with a bidding war... you are holding back on A, B knows about A, B is "offering" to out-bid A. Now you are wondering if you can leverage a questionable offer from B to up A's offer.

If you escalate this further into a full-out bidding war, the probability is high that it won't turn out well. If B wins, you work for a sketchy company just for the money... or they don't come through with a _real_ offer, A drops out (note that you do not have a _formal_ offer from A yet), and you are screwed. If A wins, the person you work for knows what you did to them and resents it.

Sorry to be harsh, but from the outside looking in, B sounds pretty sketchy and your line of questioning doesn't reflect well on you.

------ antidoh

"I recently graduated" "aced the interviews" "I got an offer from A" "I actually prefer A" "B can become better in my mind if their offer triumphs on the financial side."

I believe that last is the only untrue thing you've said. You're young, capable and have a lot of years in front of you. Work where you want and enjoy it.

------ helen842000

I think B only want to see the letter in writing so that they can go slightly above what A has offered. It makes no sense to go largely over.

Why not ask B to make a blind offer based on the value you can bring and what you're worth? Tell them you're not interested in them upping A's offer, just formulating their own based on value, not competition. You want to hear what they would have offered without company A in the picture.

Not only do you come across as less money-motivated, but I think you're more likely to get a higher offer from B this way. Plus if you do get company B's offer in writing - maybe you can take that back to A.
After all, if you prefer company A, you should be going with them regardless.

------ ggk

IMO, there is no harm in asking for a formal offer letter (probably a soft copy). But I would suggest choosing the job which interests you. Salary should be the second factor. If you choose a job of your interest, you will perform well there and your career growth will be much faster.

~~~ pmtarantino

That's my opinion too. I worked in two different jobs in the last years. One of them was in company A, which I always wanted to be part of. The salary was not amazing (in fact, after some talk with friends, it was low), but I was happy. Then, I worked in company B. The salary was superb, it was higher than average, but I was not happy. That was not what I wanted. I quit.

------ lsiebert

Ask for the offer in writing, explain why, and that you'd prefer A, and see if they are open to matching B's offer. If so, you might want to take their initial offer to B. Get B's offer in writing and go to A. Tell A that if they match it, you'll work for them. Do so, that is, if they match B's offer, work for A. Explain to B, but invite them to contact you sometime in the future to see if you are happy at A. Use B's contact to either move to B if A isn't great, or to negotiate from a position of strength at A. But work at A to start with.
Vasa, Minnesota

Vasa is an unincorporated community in Vasa Township, Goodhue County, Minnesota, United States. The community is nine miles east of Cannon Falls at the junction of State Highway 19 (MN 19) and County 7 Boulevard. It is within ZIP code 55089 based in Welch. Nearby places include Cannon Falls, Red Wing, Welch, and White Rock. Vasa is 12 miles west-southwest of Red Wing.

References

Category:Unincorporated communities in Minnesota
Category:Unincorporated communities in Goodhue County, Minnesota
--- sandbox/linux/BUILD.gn.orig	2019-04-08 08:18:26 UTC
+++ sandbox/linux/BUILD.gn
@@ -12,12 +12,12 @@ if (is_android) {
 }
 
 declare_args() {
-  compile_suid_client = is_linux
+  compile_suid_client = is_linux && !is_bsd
 
-  compile_credentials = is_linux
+  compile_credentials = is_linux && !is_bsd
 
   # On Android, use plain GTest.
-  use_base_test_suite = is_linux
+  use_base_test_suite = is_linux && !is_bsd
 }
 
 if (is_nacl_nonsfi) {
@@ -379,7 +379,7 @@ component("sandbox_services") {
     public_deps += [ ":sandbox_services_headers" ]
   }
 
-  if (is_nacl_nonsfi) {
+  if (is_nacl_nonsfi || is_bsd) {
     cflags = [ "-fgnu-inline-asm" ]
 
     sources -= [
@@ -387,6 +387,8 @@ component("sandbox_services") {
       "services/init_process_reaper.h",
       "services/scoped_process.cc",
       "services/scoped_process.h",
+      "services/syscall_wrappers.cc",
+      "services/syscall_wrappers.h",
       "services/yama.cc",
       "services/yama.h",
       "syscall_broker/broker_channel.cc",
@@ -405,6 +407,10 @@ component("sandbox_services") {
       "syscall_broker/broker_process.h",
       "syscall_broker/broker_simple_message.cc",
       "syscall_broker/broker_simple_message.h",
+    ]
+    sources += [
+      "services/libc_interceptor.cc",
+      "services/libc_interceptor.h",
     ]
   } else if (!is_android) {
     sources += [
OS 10.2 - Permanently deleting emails and files

This is my first time posting so I hope I don't screw this up... Does anyone have any advice on how to permanently delete emails and files? I am running OS X 10.3.9 and have deleted files in my trash using the secure empty trash function; however, I have a large number of emails I have deleted in Mail. Are these permanently deleted as well? Secondly, is any of the shareware or freeware out there, such as Shredit, any good? I am concerned that someone is going to try to retrieve deleted data off my computer sometime soon, and I really don't want any emails/files showing up that I have deleted.

If you are using Mail as your email client, your account is set up as a POP3 account not leaving a copy on the server, your Mac is not remotely backed up, and your home folder is local, then your mail lives in /Users/username/Library/Mail. Using "erase deleted messages" from the mailbox menu will get rid of your mail. Will it be recoverable by a drive recovery company? Possibly. By your company, on the other hand, probably not, unless the above criteria are false.
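The idea behind secure-delete tools like the ones mentioned above can be sketched in a few lines: overwrite a file's bytes before unlinking it, so the old contents are not left intact on disk for undelete tools. This is a minimal illustration only, not a substitute for a real utility; it ignores filesystem journaling, backups and spare copies, and the function name is my own:

```python
import os

def shred(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with zeros, then delete it.

    A single zero pass defeats casual undelete tools; journaled
    filesystems may still hold older copies of the data elsewhere.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # force the overwrite out to disk
    os.remove(path)
```

The overwrite-in-place step is why dedicated tools exist: simply emptying the trash only unlinks the directory entry and leaves the data blocks untouched.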
The bantamweight champion of DEEP, Takafumi Otsuka, will take on Koichi Ishizuka on May 13th at Differ Ariake in Tokyo.

Otsuka was supposed to fight Fernando Vieira for the WSOF-GC bantamweight title in December. However, the Brazilian was over the weight at the first weigh-in and never showed up at the second weigh-in. Vieira was nowhere to be found after this. The Brazilian basically fled from the entire show. Otsuka became the inaugural WSOF-GC champ, but this means the last time he fought was back in August of last year. That was, however, against a Mongolian fighter named Baataryn Azjavkhlan, who was 1-0 at the time. In terms of competitive fights, his bout against Daisuke Engo in February 2016 is perhaps the last real test Otsuka went through, and that was more than a year ago.

Ishizuka is basically born and raised in DEEP, and he is undefeated in his last ten fights. For Ishizuka, this must be the opportunity he has been looking for all of his pro MMA career, so he has to be more motivated than ever. The only concern is his recent changes in training environment. Last year, Ishizuka moved to Aichi because of his job, which forced him to leave team Brightness Monma, and he joined team ALIVE, which is based in Aichi prefecture. But Ishizuka has now left ALIVE, and his status is "independent."

Besides this title fight between Otsuka and Ishizuka, a men's strawweight bout between Haruo Ochi and "Rambo" Kosuke is also confirmed. These two met all the way back in May of 2011, in Shooto. "Rambo" almost caught Ochi with an armbar in the first round, but Ochi came back and KO'd Kosuke in the second round. That was "Rambo"'s first pro defeat in seven fights.
--recursive --require @babel/register
Accounting

Surf Works offer a range of accounting services suitable for all types of business. Below, we have listed packages suitable for sole traders, partnerships and limited companies. The packages can be fully tailored to your requirements by adding extra services to create the exact service that you and your business require. All services are carried out on time with the minimum of fuss by our in-house, fully qualified accountant.

The list of services offered is not exhaustive, so please let us know if you require a service not listed. If you have specific needs we can build a bespoke accountancy package tailored to your exact requirements.

Standard Packages

From Sole Trader to Limited Company, we can organise your accounting with a simple, no-nonsense standard package.

Sole Trader: from £25pm

Partnership: from £45pm
Personal Tax Return for each partner (includes partnership income and bank interest received)

Limited Co.: from £65pm
Year End Accounts
Accounts Filed at Companies House
Company Tax Return
Payroll for Directors Salary
Dividend Paperwork
Directors Personal Tax Return
Return Filed at Companies House

Bolt-on Services

Year End Accounts
Bookkeeping
VAT Returns
Payroll
CIS Returns
Management Accounts
Company Formations
Company Annual Returns
Personal Tax Returns
Partnership Tax Returns
Company Tax Returns
Rental Property Accounts
Capital Gains Tax
Inheritance Tax

We also offer a fully outsourced finance function that includes:

Raise and issue sales invoices to your customers
Collect, allocate and bank money from your customers
Maintain your purchases ledger
Issue payments to your suppliers when invoices are due

For more information about our accountancy services, give us a call or email :-)
Molecular-dynamics simulations of electron-ion temperature relaxation in a classical Coulomb plasma. Molecular-dynamics simulations are used to investigate temperature relaxation between electrons and ions in a fully ionized, classical Coulomb plasma with minimal assumptions. Recombination is avoided by using like charges. The relaxation rate agrees with theory in the weak coupling limit (g ≡ potential/kinetic energy << 1), whereas it saturates at g > 1 due to correlation effects. The "Coulomb log" is found to be independent of the ion charge (at constant g) and of mass ratio > 25.
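For context on the coupling parameter used in the abstract: g (often written Γ) compares the Coulomb potential energy at the mean inter-particle spacing, the Wigner-Seitz radius a = (3 / 4πn)^(1/3), to the thermal kinetic energy k_B T. A minimal sketch; the density and temperature values below are illustrative and not taken from the paper:

```python
import math

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K

def coupling(n: float, t: float) -> float:
    """g = e^2 / (4*pi*eps0 * a * kB*T), with a the Wigner-Seitz radius
    for number density n (m^-3) and temperature t (K)."""
    a = (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)
    return E ** 2 / (4.0 * math.pi * EPS0 * a * KB * t)

# A hot, dilute plasma sits in the weak-coupling regime (g << 1), where the
# relaxation rate agrees with theory; dense, cold plasmas push g above 1.
print(coupling(1e20, 1e6))   # well below 1
print(coupling(1e28, 1e4))   # above 1
```

The saturation reported in the abstract corresponds to the second regime, where correlation effects become important.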
How Idris Elba's 'Luther' Puts Us in the Mindset of a Renegade Detective "Luther" is a series about righteous indignation. Yes, it's a police drama, a dark (sometimes ludicrously so) crime saga set in a moody version of London with a greater and grimmer murder rate to equal that of other bleak procedurals. But the satisfaction of seeing those cases solved, those murderers and kidnappers caught, is muted, secondary to the suffering and sacrifice and validation of protagonist John Luther, the detective played by Idris Elba with a staggering display of movie star charisma that seems like it ought to produce static shocks with everything with which he comes into contact. Luther's devoted to his job with an obsessiveness that's destroying him, that, as the series began in 2010, had ended his marriage and eaten him up inside, changing him. He's good at what he does, if prone to extremes, and yet he seems to be perpetually doubted, maligned and hurt because of it. In season one, Luther was framed for the murder of his beloved wife and forced to run from his fellow officers, and it's not the only time in the series he's a suspect. In season two he's treated like a certain career contaminant by a new, ambitious, by-the-books officer assigned to report to him. And in the four-episode third season airing on BBC America from September 3 through 6, that former colleague, DS Erin Gray (Nikki Amuka-Bird), is targeting him as part of an investigation of police corruption with DSU George Stark (David O'Hara), who may be a little obsessive himself. Aside from his sidekick DS Justin Ripley (Warren Brown), few seem to appreciate Luther and his incredible abilities -- instead, he's infamous, the rest of the police force apparently all too ready to believe he's capable of dark things.
We, as viewers, don't, because of Idris Elba. John Luther is Elba's best role since that of the fascinatingly savvy Stringer Bell in "The Wire," because it showcases the actor's utterly assured presence, his air of rakishly rumpled confidence in his tweed coat. Luther does not have swagger, he has conviction, conviction that informs his every -- frequently correct -- move. It's why it's so easy to trust him in a way that the characters working with him don't, and not without reason. When the series began in 2010, it was with Luther letting a pedophile fall to what could have been his death after extracting from him information about the location of the girl he'd kidnapped. It didn't doom his career -- he got lucky -- but he hasn't really changed. He even threatens a suspect with a similar fate toward the start of the new season -- but the move doesn't come across as harsh. We're more worried, when it happens, that it'll get him in trouble again. "Luther" is mesmerizing because of Elba, and because the show is so consumed by his performance that it becomes not one about a maverick cop but instead one of a man outpacing the justice system he's allegedly a part of, one that hampers him with its pesky rules, its politics and its skeptics. It encourages us to buy into his worldview, in which he should just be allowed to do his job and get justice done, though that may mean covering up crimes or allowing culprits he's judged deserving to go free -- like Alice Morgan (Ruth Wilson), his psychopathic superhero of a friend, and a wonderful, preposterous character who's essentially too enjoyable to be locked up. Luther's tactics make him so dangerous to the people around him that the case Stark tries to build against him is based on the peripheral body count rather than evidence, and when, in the new season, he starts a tentative romance with Mary Day (Sienna Guillory), a woman another character dismissively sums up as a "pixie," it's accompanied by a sense of dread. 
The series comes close to confronting the nature of its protagonist in the new season, introducing a grieving man who turns to vigilantism and gathers public support for his actions as he starts targeting rapists and killers who've gotten off lightly. Confronting Luther on opposite sides of a canal, the man says "One out of five murders are committed by men on bail," and demands to know why nothing is being done about it. "It's complicated," Luther replies. "No, it's not," says the man. "No... it's not. You've got me there," Luther admits. The difference is that, while Luther may bend the rules to fit his ideas about crime and punishment, he doesn't do so looking for outside approval the way the antagonist he's facing down does -- the opposite, really. Instead, it's the viewers who seethe on his behalf and yearn for his efforts to continue, and it's that conflicting emotion far more than the procedural aspects that lifts "Luther" above the plethora of similarly lurid recent dark crime dramas it resembles.
ARMED SERVICES BOARD OF CONTRACT APPEALS Appeal of -- ) ) _ ) ASBCA N°' 60315 ) ) Under Contract No. HTC71 l-l4-D-R033 APPEARANCE FOR THE APPELLANT: _ President APPEARANCES FOR THE GOVERNMENT: Jeffrey P. Hildebrant, Esq. Air Force Deputy Chief Trial Attomey Lt Col Mark E. Allen, USAF Jason R. Smith, Esq. Trial Attomeys OPINlON BY ADMINISTRATIVE JUDGE D’ALESSANDRIS ON APPELLANT’S MOTION FOR RECONSIDERAT]ON Appellant _ (-) has timely filed a motion for reconsideration of our 21 November 2016 decision granting the govemment’s motion for summary judgment and denying this appeal. -, ASBCA No. 60315, 1(»1 BCA 11 36,569. Familiariiy with our decision is presumed In deciding a motion for reconsideration, we examine whether the motion is based upon newly discovered evidence, mistakes in our findings of fact, or errors of law. Zulco International, lnc., ASBCA No. 55441, 08-1 BCA 1| 33,799 at 167,319. A motion for reconsideration does not provide the moving party the opportunity to reargue its position or to advance arguments that properly should have been presented in an earlier proceeding See Dixon v. Shz`nseki, 741 F.3d 1367, 1378 (Fed. Cir. 2014). We do not grant motions for reconsideration absent a compelling reason. J.F. Taylor, Inc., ASBCA Nos. 56105, 56322, 12-2 BCA 11 35,125 at 172,453. - argues in its motion for reconsideration that the government breached the contract by violating PAR 52.233-3, PROTEST AFTER AWARD (AUG 1996) for failing to cancel the stop-work order or terminating the contract for convenience after the post-award protest period (app. mot. at l, 8). In our decision, we addressed this same argument and stated that “the suspension of work and termination for convenience clauses provide no relief when no work was ordered under an [indefinite-delivery, indefinite-quantity] contract and the contractor has been paid the minimum contract value.” _, 16-1 BCA 11 36,569 ar 178,109. 
-, in its reply, acknowledges that part of our decision cited above, but argues that the government should still pay costs which it incurred after the suspension of work was allegedly lifted (app. reply br. at 7). However, all of the costs incurred were considered in our decision and found to be generated by tasks which - was already expected to do under the terms of the contract. 16-1 BCA ¶ 36,569 at 178,110-11. We conclude - has not shown any compelling reason to modify our original decision, as - merely reargues its original position relying on the same facts. CONCLUSION For the reasons stated above, -’s motion for reconsideration is denied. Dated: 15 March 2017 DAVID D’ALESSANDRIS Administrative Judge Armed Services Board of Contract Appeals I concur I concur MARK N. STEMPLER / RICHARD SHACKLEFORD Administrative Judge Administrative Judge Acting Chairman Vice Chairman Armed Services Board Armed Services Board of Contract Appeals of Contract Appeals I certify that the foregoing is a true copy of the Opinion and Decision of the Armed Services Board of Contract Appeals in ASBCA No. 60315, Appeal of - _, rendered in conformance with the Board’s Charter. Dated: JEFFREY D. GARDIN Recorder, Armed Services Board of Contract Appeals
The date is fast approaching for our spring rally. I have posted the reservation information in the Calendar section, and I will post more details there as they become available. If you have any questions, please e-mail me at [email protected].
Koolstra K, Beenakker J‐WM, Koken P, Webb A, Börnert P. Cartesian MR fingerprinting in the eye at 7T using compressed sensing and matrix completion‐based reconstructions. Magn Reson Med. 2019;81:2551--2565. 10.1002/mrm.27594 30421448 **Funding information** This project was partially funded by the European Research Council Advanced Grant 670629 NOMA MRI. 1. INTRODUCTION {#mrm27594-sec-0005} =============== Ophthalmologic disease diagnosis conventionally relies mainly on ultrasound and optical imaging techniques such as fundus photography and fluorescein angiography (FAG), but MRI is increasingly being used in the radiological community.[1](#mrm27594-bib-0001){ref-type="ref"}, [2](#mrm27594-bib-0002){ref-type="ref"}, [3](#mrm27594-bib-0003){ref-type="ref"} One of the main advantages of MRI is its capability to assess nontransparent tissues such as ocular tumors or structures behind the globe such as the eye muscles. Currently, however, these applications are mainly based on qualitative MRI methods using the large number of tissue contrasts addressable by MR. 
As an example, in Graves' ophthalmopathy fat‐suppressed T~2~‐weighted MRI is the standard to detect inflammation in the eye muscles,[4](#mrm27594-bib-0004){ref-type="ref"}, [5](#mrm27594-bib-0005){ref-type="ref"} whereas in the diagnosis of retinoblastoma, a rare intraocular cancer in children, standard T~1~‐ and T~2~‐weighted MRI is often performed to confirm the presence of the tumor and to screen for potential optic nerve involvement.[2](#mrm27594-bib-0002){ref-type="ref"} In more recent ophthalmologic applications of MRI, such as uveal melanoma (the most common primary intraocular tumor), quantitative MRI techniques including DWI[6](#mrm27594-bib-0006){ref-type="ref"} and DCE imaging[7](#mrm27594-bib-0007){ref-type="ref"} have been shown, but currently diagnosis is still based on qualitative methods.[3](#mrm27594-bib-0003){ref-type="ref"} To personalize treatment plans quantitative parameters of the tissues involved, as can be acquired invasively for example by performing biopsies,[8](#mrm27594-bib-0008){ref-type="ref"} are highly desirable. However, quantitative parameter mapping by means of MRI requires long examination times, which would result in significant eye‐motion artifacts, as well as patient discomfort.[9](#mrm27594-bib-0009){ref-type="ref"} MR fingerprinting (MRF) is a recently introduced method for rapid quantitation of tissue relaxation times and other MR‐related parameters.[10](#mrm27594-bib-0010){ref-type="ref"} It uses a flip angle sweep to induce a unique signal evolution for each tissue type. Incoherent undersampling can be applied during sampling of the MRF train, enabling acceleration of the MRF scans.[10](#mrm27594-bib-0010){ref-type="ref"} Together with its ability to measure simultaneously T~1~ and T~2~, MRF offers a solution to the problem of obtaining quantitative measures in an efficient manner and in relatively short scanning times. 
One of the main challenges in ocular imaging is in‐plane and through‐plane eye motion, often associated with eye blinking.[11](#mrm27594-bib-0011){ref-type="ref"}, [12](#mrm27594-bib-0012){ref-type="ref"}, [13](#mrm27594-bib-0013){ref-type="ref"} The motion results in corrupted k‐space data that introduces artifacts and blurring throughout the entire image. Shortening the scans would reduce motion‐related artifacts, but standard acceleration techniques are not optimal for the current eye application for the following 3 reasons. First, a cued‐blinking protocol is typically used to control and reduce the eye motion.[3](#mrm27594-bib-0003){ref-type="ref"}, [11](#mrm27594-bib-0011){ref-type="ref"} This requires an instruction screen placed at the end of the MR tunnel to be visible to the patient, which precludes placing small phased‐array receive coils in front of the eye because they would block the view. Instead, a custom‐built single‐element eye loop coil is used, which provides a high local SNR[3](#mrm27594-bib-0003){ref-type="ref"} and screen visibility, but which clearly excludes the possibility of scan acceleration by means of parallel imaging.[14](#mrm27594-bib-0014){ref-type="ref"} Second, the gel‐like vitreous body has an extremely long T~1~, particularly at high field.[15](#mrm27594-bib-0015){ref-type="ref"} Its value of 3 to 5 s requires a long duration of the MRF sequence to encode the MR parameters (T~1~,T~2~) sufficiently. Thus, using a flip angle train with a small number of RF pulses is not feasible, hindering scan time reduction. 
Finally, a time‐efficient spiral sampling scheme, usually applied in MRF,[10](#mrm27594-bib-0010){ref-type="ref"}, [16](#mrm27594-bib-0016){ref-type="ref"}, [17](#mrm27594-bib-0017){ref-type="ref"}, [18](#mrm27594-bib-0018){ref-type="ref"}, [19](#mrm27594-bib-0019){ref-type="ref"} introduces off‐resonance effects in each of the individual MRF images.[20](#mrm27594-bib-0020){ref-type="ref"} This occurs even when combined with unbalanced sequences such as fast imaging with steady state precession,[16](#mrm27594-bib-0016){ref-type="ref"} which are in themselves robust to off‐resonance effects.[21](#mrm27594-bib-0021){ref-type="ref"} The off‐resonance effects present in spiral sampling schemes are much stronger at high field, where they result in blurring,[22](#mrm27594-bib-0022){ref-type="ref"} caused by strong main field inhomogeneities (particularly in the eye region due to many air‐tissue‐bone interfaces), as well as the presence of significant amounts of off‐resonant orbital fat around the eye. In this work, a Cartesian sampling scheme is used, which is more robust than spiral sampling to off‐resonance effects, but which is significantly less time‐efficient.[23](#mrm27594-bib-0023){ref-type="ref"} With such a Cartesian sampling scheme, undersampling artifacts have a more structured nature compared with spiral sampling, which increases the temporal coherence of the artifacts in the MRF image series.[10](#mrm27594-bib-0010){ref-type="ref"}, [20](#mrm27594-bib-0020){ref-type="ref"} In this case, direct matching of the measured MRF signal reconstructed by plain Fourier transformations, to the simulated dictionary elements is not sufficiently accurate for high undersampling factors.[24](#mrm27594-bib-0024){ref-type="ref"}, [25](#mrm27594-bib-0025){ref-type="ref"} Therefore, the quality of the reconstructed MRF data has to be improved before the matching process. 
Compressed sensing (CS) has been introduced as a technique to reconstruct images from randomly undersampled data by enforcing signal sparsity (in the spatial dimension only or both in spatial and temporal dimensions),[26](#mrm27594-bib-0026){ref-type="ref"}, [27](#mrm27594-bib-0027){ref-type="ref"} allowing a scan time reduction in many applications.[28](#mrm27594-bib-0028){ref-type="ref"}, [29](#mrm27594-bib-0029){ref-type="ref"}, [30](#mrm27594-bib-0030){ref-type="ref"} The flexibility of MRF toward different sampling schemes and undersampling factors makes it possible to reconstruct the source images by means of CS.[27](#mrm27594-bib-0027){ref-type="ref"}, [31](#mrm27594-bib-0031){ref-type="ref"}, [32](#mrm27594-bib-0032){ref-type="ref"} Higher acceleration factors might be feasible if the correlation in the temporal dimension is better used.[33](#mrm27594-bib-0033){ref-type="ref"} Examples of such reconstructions specifically tailored to MRF are given in Davies et al, Pierre et al, and Zhao et al[34](#mrm27594-bib-0034){ref-type="ref"}, [35](#mrm27594-bib-0035){ref-type="ref"}, [36](#mrm27594-bib-0036){ref-type="ref"} which take into account the simulated dictionary atoms in the image reconstruction process. 
Recent work has shown that the temporal correlation in the MRF data can be exploited even further by incorporating the low rank structure of the data into the cost function,[37](#mrm27594-bib-0037){ref-type="ref"} a technique which was introduced into MR in Liang[38](#mrm27594-bib-0038){ref-type="ref"} and in MRF in Zhao[39](#mrm27594-bib-0039){ref-type="ref"} and used by many others[40](#mrm27594-bib-0040){ref-type="ref"}, [41](#mrm27594-bib-0041){ref-type="ref"}, [42](#mrm27594-bib-0042){ref-type="ref"}: these techniques can also be combined with sparsity constraints.[43](#mrm27594-bib-0043){ref-type="ref"}, [44](#mrm27594-bib-0044){ref-type="ref"} Most of the aforementioned techniques involve Fourier transformations in each iteration, making the reconstruction process time‐consuming. In this application, the single‐element receive coil allows us to perform the reconstruction process entirely in k‐space when exploiting the low rank structure of the MRF data as is performed in matrix completion (MC)‐based reconstructions.[42](#mrm27594-bib-0042){ref-type="ref"}, [45](#mrm27594-bib-0045){ref-type="ref"} In this work, undersampled Cartesian ocular MRF is investigated using CS and MC‐based reconstructions. Simulations and experiments performed in 6 healthy volunteers for confirmation are compared with fully sampled MRF in terms of the quality of the parameter maps, and mean relaxation times were derived for different ocular structures at 7T. Finally, parameter maps after an MC‐based reconstruction are included for a uveal melanoma patient, showing the feasibility of ocular MRF in eye tumor patients. 2. METHODS {#mrm27594-sec-0006} ========== 2.1. Fingerprinting definition {#mrm27594-sec-0007} ------------------------------ The MRF encoding principle is based on a variable flip angle train with relatively short TRs, so that the magnetization after each RF pulse is influenced by the spin history. 
Following closely the implementation of the sinusoidal MRF pattern described in Jiang et al,[16](#mrm27594-bib-0016){ref-type="ref"} a flip angle pattern of 240 RF excitation pulses ranging from 0° to 60° (see Figure [1](#mrm27594-fig-0001){ref-type="fig"}A) was defined by the function$$FA\left( x \right) = \left\{ \begin{matrix} {20\,\text{sin}(\frac{\pi}{110}x)\,\text{for}\, 1 \leq x \leq 110} \\ {60\,\text{sin}(\frac{\pi}{130}\left( {x - 110} \right))\,\text{for}\, 110 < x \leq 240} \\ \end{matrix} \right.$$ ![The MRF sequence, instructed blinking set‐up, sampling pattern, and temporal correlation used in all experiments. A, Each flip angle train is preceded by an adiabatic 180° inversion pulse. The flip angle pattern consists of 240 RF pulses ranging from 0° to 60°. The total number of repetitions K of the MRF train is determined by the undersampling factor. The 2.5 s repetition delay between trains allows for instructed eye blinking when the scanner is not acquiring data. B, During data acquisition, a cross is shown on a screen placed at the end of the MR tunnel, which can be seen through 1 eye by means of a small mirror attached to the eye coil. During the repetition delay, the cross changes into a red circle, indicating that blinking is allowed before data acquisition starts again. The single loop eye coil setup is illustrated as well. C, Each time point (shot number) in the flip angle train is sampled differently. A simple variable density scheme is used. The outer region of k‐space is randomly sampled, whereas the central part of k‐space is fully sampled for each time point. The incoherent variable density sampling allows a CS reconstruction, while the fully sampled center can be used as calibration data for the MC‐based reconstruction. D, The singular values of the central k‐space/calibration matrix decay very quickly, which shows the low rank property of the eye MRF data, and forms the basis of the MC‐based reconstruction. 
Plots were generated for an undersampling factor of R = 12.3 in the outer region of k‐space, which results in a total undersampling factor of 6.7. E, Anatomical T~1~‐weighted 3D MR image of the eye, showing different ocular structures. L, lens nucleus; V, vitreous body; F, orbital fat; M, extraocular muscle; N, optic nerve](MRM-81-2551-g001){#mrm27594-fig-0001} preceded by an inversion pulse (16). A fast imaging with steady state precession sequence was used,[16](#mrm27594-bib-0016){ref-type="ref"}, [19](#mrm27594-bib-0019){ref-type="ref"} in which the TE was chosen as 3.5 ms and 4.0 ms for low resolution scans and high resolution scans, respectively. The selected excitation RF pulse had a time‐bandwidth product of 10, resulting in a reasonably sharp slice profile. The RF pulse phase was fixed to 0°. To simplify dictionary calculations, because of the simplification of the magnetization coherence pathways,[46](#mrm27594-bib-0046){ref-type="ref"} the TR was set to a constant value of 11 ms. A 3D dictionary was calculated following the extended phase graph formalism,[21](#mrm27594-bib-0021){ref-type="ref"}, [46](#mrm27594-bib-0046){ref-type="ref"} based on the Bloch equations,[47](#mrm27594-bib-0047){ref-type="ref"}, [48](#mrm27594-bib-0048){ref-type="ref"} incorporating 27,885 signal evolutions.[46](#mrm27594-bib-0046){ref-type="ref"} T~1~ values ranged from 10 to 1000 ms in steps of 10 ms, and from 1000 to 5000 ms in steps of 100 ms. T~2~ values ranged from 10 to 100 ms in steps of 10 ms and from 100 to 300 ms in steps of 20 ms. A B~1~ ^+^ fraction ranging from 0.5 to 1.0 in steps of 0.05 was incorporated into the dictionary calculation. To shorten the scan time, we used a short waiting time between repetitions of the MRF train (called the repetition delay) of 2.5 s. Therefore, each MRF scan was preceded by 3 dummy trains to establish steady state magnetization,[19](#mrm27594-bib-0019){ref-type="ref"} which was considered in the dictionary calculation. 
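The two‐lobe sinusoidal flip angle pattern defined above is easy to reproduce. Below is a minimal sketch in Python rather than the authors' MATLAB (the function name is ours), with the pulse index x running from 1 to 240 as in the piecewise definition:

```python
import numpy as np

def mrf_flip_angles(n_pulses=240):
    """Two-lobe sinusoidal MRF flip angle train, in degrees.

    First lobe:  20*sin(pi*x/110)        for pulses 1..110
    Second lobe: 60*sin(pi*(x-110)/130)  for pulses 111..240
    as given in the piecewise definition of FA(x) in the text.
    """
    x = np.arange(1, n_pulses + 1, dtype=float)
    fa = np.where(
        x <= 110,
        20.0 * np.sin(np.pi * x / 110.0),
        60.0 * np.sin(np.pi * (x - 110.0) / 130.0),
    )
    return fa

fa = mrf_flip_angles()
```

The first lobe peaks at 20° at pulse 55 and the second at 60° at pulse 175, matching the quoted 0° to 60° range of the 240‐pulse train.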
The longitudinal magnetization after the 3 dummy trains, required for correction of the M~0~ maps, was calculated for each T~1~/T~2~ combination. The repetition delay of 2.5 s was efficiently used as the blink time.[3](#mrm27594-bib-0003){ref-type="ref"}, [11](#mrm27594-bib-0011){ref-type="ref"} 2.2. Experimental setup {#mrm27594-sec-0008} ----------------------- All experiments were approved by the local medical ethics committee, and all volunteers and patients signed an appropriate informed consent form. The experiments in this study were performed on 6 healthy volunteers and 1 uveal melanoma patient using a 7T MR system (Philips Healthcare) equipped with a quadrature head volume coil (Nova Medical) for transmission and a custom‐built single‐element eye coil for reception, with a diameter of approximately 4 cm.[3](#mrm27594-bib-0003){ref-type="ref"}, [49](#mrm27594-bib-0049){ref-type="ref"} A cued‐blinking protocol was followed, which means that all subjects were instructed to focus on a fixation target shown on a screen during data acquisition and to blink in the 2.5 s repetition delay. This was performed using a small mirror integrated into the eye coil, allowing visualization of a screen placed outside the magnet through 1 eye, while the eye to be imaged was closed and covered by a wet gauze to reduce susceptibility artifacts in the eye lid.[50](#mrm27594-bib-0050){ref-type="ref"} This setup is shown schematically in Figure [1](#mrm27594-fig-0001){ref-type="fig"}B. 2.3. MR data acquisition {#mrm27594-sec-0009} ------------------------ Because of the presence of significant orbital fat around the eye, and the sensitivity of the spiral to off‐resonance resulting in blurring,[22](#mrm27594-bib-0022){ref-type="ref"} a Cartesian sampling scheme was used to acquire all data. The fingerprinting scans were acquired as a single slice at 2 different spatial resolutions: 1.0 × 1.0 × 5.0 mm^3^ and 0.5 × 0.5 × 5.0 mm^3^. 
The lower resolution scan was performed twice, the first fully sampled to serve as a reference, and the second one undersampled. The scan time of the fully sampled scan was 7:02 min, while the scan time of the undersampled scan, in which 15% of the data was acquired, was 1:16 min. The high resolution scan was only acquired as an undersampled data set, in which 12.5% of the data was acquired, resulting in a scan time of 1:57 min. In the undersampled scans a simple variable density k‐space sampling was applied, schematically shown in Figure [1](#mrm27594-fig-0001){ref-type="fig"}C, supporting both CS and MC‐based reconstructions. A fully sampled center of k‐space was acquired for each time point consisting of 6/8 k‐space lines for the low resolution/high resolution scans, respectively. For all scans, the FOV was set to 80 × 80 mm^2^, resulting in an acquisition matrix of 80 × 80 and 160 × 160 for the low and the high resolution scans, respectively. The phase encoding direction was set from left‐to‐right to minimize contamination by any residual motion artifacts in the eye lens, and the read out direction was set to the anterior‐posterior direction. B~1~ ^+^ maps were acquired using the dual refocusing echo acquisition mode method[51](#mrm27594-bib-0051){ref-type="ref"} with the following scan parameters: FOV = 80 × 80 mm^2^, in‐plane resolution 1 mm^2^, slice thickness 5 mm, 1 slice, TE~1~/TE~2~ = 2.38/1.54 ms, TR = 3.7 ms, FA = α:60°/β:10°; the scan time for a single slice was less than 1 s. 2.4. Reconstruction {#mrm27594-sec-0010} ------------------- For each time point, the corresponding images were reconstructed from the available data, using custom software written in MATLAB (Mathworks, Inc) and run on a Windows 64‐bit machine with an Intel i3‐4160 CPU @ 3.6 GHz and 16 GB internal memory. 
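The variable density k‐t sampling described above can be sketched as a fully sampled central band shared by all time points plus randomly chosen outer lines per time point. The function and parameter names below are ours; the paper specifies only the fully sampled 6‐line (low resolution) or 8‐line (high resolution) center and random outer sampling, so the outer‐line budget here is an assumption chosen to reproduce the quoted 15% sampled fraction:

```python
import numpy as np

def variable_density_masks(n_ky=80, n_t=240, n_center=6, n_outer=6, seed=0):
    """Per-time-point Cartesian phase-encode masks (hypothetical generator).

    Each of the n_t MRF time points gets its own mask: a fully sampled
    central band of n_center ky lines (the shared calibration region) plus
    n_outer randomly chosen outer ky lines (incoherent undersampling).
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_t, n_ky), dtype=bool)
    c0 = n_ky // 2 - n_center // 2
    masks[:, c0:c0 + n_center] = True  # calibration band, sampled every shot
    outer = np.setdiff1d(np.arange(n_ky), np.arange(c0, c0 + n_center))
    for t in range(n_t):
        # distinct random outer lines, different for every time point
        masks[t, rng.choice(outer, size=n_outer, replace=False)] = True
    return masks

masks = variable_density_masks()
fraction = masks.mean()  # overall sampled fraction of the k-t space
```

With 6 central and 6 random outer lines out of 80 phase encodes, each time point samples 12/80 = 15% of its k‐space lines, consistent with the low resolution undersampled scan.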
Different reconstructions were performed: (i) a fast Fourier transform (FFT) of the fully sampled data and of the zero‐filled undersampled data; (ii) a CS reconstruction with total variation regularization in the spatial dimension (2D CS), and with total variation in both spatial and temporal dimensions (3D CS) of the undersampled data; (iii) an MC‐based reconstruction of the undersampled data. ### 2.4.1. CS reconstruction {#mrm27594-sec-0011} In this reconstruction, the complete image series is reconstructed by iteratively solving the nonlinear problem$$\hat{\mathbf{x}} = \text{argmin}_{\mathbf{x}}TV\left( \mathbf{x} \right)s.t.\, RF\mathbf{x} = \mathbf{y}_{u}$$ through the unconstrained version$$\hat{\mathbf{x}} = \text{argmin}_{\mathbf{x}}{\frac{\mu}{2}{|RF\mathbf{x} - \mathbf{y}_{u}|}}_{2}^{2} + \frac{\lambda}{2}TV{(\mathbf{x})}$$ In this formulation, $F \in \mathbb{C}^{Nt \times Nt}$ is a block diagonal matrix with the 2D Fourier transform matrix in each diagonal block, $R \in \mathbb{C}^{Nt \times Nt}$ is a diagonal matrix incorporating the sampling locations, $\mathbf{y}_{u} \in \mathbb{C}^{Nt \times 1}$ is the undersampled k‐t space data, $\hat{\mathbf{x}} \in \mathbb{C}^{Nt \times 1}$ is an estimate of the true image series and $\mathit{TV}$ is a total variation operator which is used to enforce sparsity in the reconstruction.[52](#mrm27594-bib-0052){ref-type="ref"}, [53](#mrm27594-bib-0053){ref-type="ref"} Here, $N$ is the number of k‐space locations per image frame and $t$ is the number of measured time points (or flip angles in the MRF train). 
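The unconstrained CS problem above can be sketched as a cost evaluation, assuming per‐frame orthonormal 2D FFTs for $F$, boolean masks for $R$, and a simple anisotropic spatial finite‐difference TV (one of the two variants the paper considers); the default weights are the empirically determined values reported in the text:

```python
import numpy as np

def cs_objective(x, masks, y_u, mu=0.1, lam=0.2):
    """Unconstrained 2D-CS cost  (mu/2)||R F x - y_u||_2^2 + (lam/2) TV(x).

    x     : (n_t, ny, nx) complex image-series estimate
    masks : (n_t, ny, nx) boolean sampling pattern, playing the role of R
    y_u   : measured k-space samples at the True mask positions (C order)
    TV is the anisotropic spatial total variation |grad_x x|_1 + |grad_y x|_1
    applied frame by frame (a simplifying assumption for this sketch).
    """
    # forward model R F x: orthonormal 2D FFT per frame, then sample
    k = np.fft.fft2(x, axes=(-2, -1), norm="ortho")
    fidelity = 0.5 * mu * np.sum(np.abs(k[masks] - y_u) ** 2)
    # anisotropic spatial TV, summed over all frames
    tv = (np.abs(np.diff(x, axis=-1)).sum()
          + np.abs(np.diff(x, axis=-2)).sum())
    return fidelity + 0.5 * lam * tv
```

A solver such as Split Bregman iteratively drives this cost down; the sketch only evaluates it, which is the part that fixes the notation.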
The regularization parameters $\mu$ and $\lambda$ in Equation [\[Link\]](#mrm27594-disp-0001){ref-type="disp-formula"} were determined empirically and set to $\mu = 0.1$ and $\lambda = 0.2$. Two basic versions of the total variation operator,$$TV\left( \mathbf{x} \right) = {|\nabla_{x}{\mathbf{x}|}}_{1} + {|\nabla_{y}{\mathbf{x}|}}_{1}$$ $$TV\left( \mathbf{x} \right) = {|\nabla_{x}{\mathbf{x}|}}_{1} + {|\nabla_{y}{\mathbf{x}|}}_{1} + {|\nabla_{t}{\mathbf{x}|}}_{1}$$ were implemented to investigate the effect of promoting sparsity either only in the spatial dimension (2D CS) or in both the spatial and temporal dimensions (3D CS). In these expressions, $\nabla_{x},\nabla_{y}$ and $\nabla_{t}$ are the first derivative operators acting on the spatial $x$ and $y$ dimensions and the time dimension, respectively. Solving the problem given in Equation [\[Link\]](#mrm27594-disp-0001){ref-type="disp-formula"} is done in this work using the Split Bregman method. For details on this algorithm the reader is referred to Goldstein and Osher.[54](#mrm27594-bib-0054){ref-type="ref"} ### 2.4.2. MC reconstruction {#mrm27594-sec-0012} Similar to CS with the TV operator acting in 3 dimensions (see Equation [(1)](#mrm27594-disp-0003){ref-type="disp-formula"}), MC uses the information from the temporal dimension.[45](#mrm27594-bib-0045){ref-type="ref"}, [55](#mrm27594-bib-0055){ref-type="ref"} A main difference between CS and MC, however, is that sparsity of singular values, which is a priori information in the MC reconstruction, can be observed both in image space and in k‐space. 
This allows one to complete the entire reconstruction in k‐space, which is computationally efficient, especially if only a single receiver coil is used.[42](#mrm27594-bib-0042){ref-type="ref"} The MC‐based reconstruction iteratively solves$$\hat{M} = \mathit{argmin}_{M}{|M|}_{\ast}\, s.t.\,\mathcal{P}_{\Omega}M = M_{u}$$ with ${| \bullet |}_{\ast}$ being the nuclear norm, $\mathcal{P}_{\Omega}$ the sampling operator selecting the measured k‐t space locations, $M_{u} \in \mathbb{C}^{t \times N}$ the undersampled k‐t space data and $\hat{M} \in \mathbb{C}^{t \times N}$ an estimate of the true k‐t space. The nuclear norm of *M* sums the singular values of *M*, and can thus be written as ${|\sigma(M)|}_{1}$, where $\sigma$ transforms $M$ into a vector containing the singular values of $M$. The central k‐t space is used as calibration data, of which the rank can be used as a priori information in the reconstruction of undersampled data. In this process, a projection matrix $\mathcal{P}_{U_{n}} \in \mathbb{C}^{t \times t}$ projects in each iteration $i$ the undersampled data matrix $M^{i}$ onto a low‐rank subspace spanned by the columns of $U_{n} \in \mathbb{C}^{t \times n}$, such that$${\overset{\sim}{M}}^{i} = \mathcal{P}_{U_{n}}M^{i}$$ with$$\mathcal{P}_{U_{n}} = U_{n}U_{n}^{H}.$$ Here, $U_{n}$ contains the $n$ most significant left singular vectors of the calibration matrix $M_{c} \in \mathbb{C}^{t \times p}$ and is constructed from the full singular value decomposition $M_{c} = U\Sigma V^{H}$, $U \in \mathbb{C}^{t \times t}$, $\Sigma \in \mathbb{R}^{t \times p}$, $V \in \mathbb{C}^{p \times p}$, which is performed once at the beginning of the algorithm. 
In the second step of each iteration, the data are updated according to$$M^{i + 1} = M_{u} + {(I - \mathcal{P}_{\Omega})}{\overset{\sim}{M}}^{i}.$$ The value $n$ was determined empirically from the singular value plots (shown in Figure [1](#mrm27594-fig-0001){ref-type="fig"}D for 1 volunteer) and set to 4 for all MC‐based reconstructions. Further details of the adopted algorithm to solve Equation [(2)](#mrm27594-disp-0004){ref-type="disp-formula"}, and its implementation can be found in Doneva et al.[42](#mrm27594-bib-0042){ref-type="ref"} To ensure convergence of the iterative CS and MC‐based reconstructions, 40 Split Bregman iterations (1 inner loop) were used for the CS reconstructions and 100 iterations were used for all MC‐based reconstructions. To judge the performance of the reconstruction methods, relative error measures are defined throughout the manuscript as$$\mathit{RelativeError}\left( \mathbf{u} \right) = \frac{{{|\mathbf{u} -}\mathbf{u}_{\mathbf{r}\mathbf{e}\mathbf{f}}|}_{2}}{{|\mathbf{u}_{\mathbf{r}\mathbf{e}\mathbf{f}}|}_{2}},$$ where $\mathbf{u}_{\mathit{ref}}$ is the fully sampled image series and both $\mathbf{u}$ and $\mathbf{u}_{\mathit{ref}}$ are vectorized. 2.5. Dictionary matching process {#mrm27594-sec-0013} -------------------------------- For each subject, the measured B~1~ ^+^ map was used to calculate an average B~1~ ^+^ value in the eye. Based on this value, a 2D subdictionary was chosen that matches the drop in B~1~ ^+^ for each volunteer. Each voxel signal in the reconstructed MRF image series was then matched to an element of the subdictionary. 
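The two‐step MC iteration above, projection onto the calibration subspace via $\mathcal{P}_{U_{n}} = U_{n}U_{n}^{H}$ followed by restoring the measured samples, can be sketched entirely in k‐space. This is a minimal sketch with our own helper names; the rank default follows the $n = 4$ used for all MC‐based reconstructions:

```python
import numpy as np

def mc_reconstruct(M_u, sampled, M_c, n=4, n_iter=100):
    """Matrix-completion-style MRF reconstruction, entirely in k-space.

    M_u     : (t, N) undersampled k-t data, zeros where not sampled
    sampled : (t, N) boolean mask of measured locations (P_Omega)
    M_c     : (t, p) fully sampled central k-space (calibration) matrix
    n       : rank of the subspace (4 in this work)
    Returns an estimate of the full k-t matrix.
    """
    # subspace from the n dominant left singular vectors of the calibration
    # data; the SVD is computed once, before the iterations start
    U, _, _ = np.linalg.svd(M_c, full_matrices=False)
    P = U[:, :n] @ U[:, :n].conj().T        # P_{U_n} = U_n U_n^H
    M = M_u.copy()
    for _ in range(n_iter):
        M_tilde = P @ M                      # step 1: low-rank projection
        M = np.where(sampled, M_u, M_tilde)  # step 2: restore measured data
    return M
```

Because both steps are projections with the true k‐t matrix as a fixed point, repeated iteration drives the unsampled entries toward values consistent with the calibration subspace while never altering the measured samples.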
In this process, the best match between the measured signal and the dictionary elements was found for each voxel by solving$$m = \mathit{argmax}_{i \in {\{ 1,\ldots,M\}}}\left\{ {\mathbf{d}_{i} \bullet \mathbf{s}} \right\}$$ where $\mathbf{d}_{i} \in \mathbb{C}^{t \times 1}$ is the ith normalized dictionary element and $\mathbf{s} \in \mathbb{C}^{t \times 1}$ is the normalized measured signal. The index $m$ that maximizes the inner product describes the dictionary element $\mathbf{d}_{m}$ (with corresponding T~1~ and T~2~ values) that gives the best match with the measured signal. Finally, the scalar proton density per voxel was determined from the model$$\mathbf{S} = rM_{0}\mathbf{D}_{m},$$ where $\mathbf{S} \in \mathbb{C}^{t \times 1}$ is the nonnormalized signal per voxel and $\mathbf{D}_{m} \in \mathbb{C}^{t \times 1}$ the nonnormalized dictionary element corresponding to the best match $\mathbf{d}_{m}$, such that$$M_{0} = \frac{1}{r}\frac{(\mathbf{D}_{m} \bullet \mathbf{S})}{(\mathbf{D}_{m} \bullet \mathbf{D}_{m})}$$ Here, *r* is a value between 0 and 1, describing the fraction of the initial longitudinal magnetization that is left after the dummy trains, also depending on T~1~ and T~2~, which takes into account the short repetition delay in between the MRF trains. M~0~ maps are all shown on a log‐scale due to the high dynamic range of the respective proton densities, with that of the vitreous body being more than an order of magnitude larger than other structures. The processed T~1~, T~2~, and M~0~ maps were compared for different reconstruction methods (FFT, 2D CS, 3D CS, and MC) and for different acquisitions (low spatial resolution, high spatial resolution). T~1~ and T~2~ values were averaged in different regions of interest, annotated in Figure [1](#mrm27594-fig-0001){ref-type="fig"}E for each volunteer. These values were used to determine mean ± SD values over all volunteers for the different reconstructions. 3. 
RESULTS {#mrm27594-sec-0014} ========== 3.1. Simulation results {#mrm27594-sec-0015} ----------------------- Figure [2](#mrm27594-fig-0002){ref-type="fig"} shows the parameter maps (T~1~, T~2~, and M~0~) obtained for different reconstruction methods, after subsampling the fully sampled k‐space data of 1 healthy volunteer. Even though an incoherent sampling scheme was used, a zero‐filled FFT reconstruction does not lead to accurate parameter maps. The CS reconstruction with total variation regularization in the spatial domain leads to only minor improvement for the high undersampling factor that was chosen. The results show that including the sparsity constraint in the temporal dimension on top of the spatial dimension improves the CS reconstruction, with the largest improvement in the optic nerve and the lens nucleus, indicated by the white arrows. The total undersampling factor of 6.7, however, in combination with the low resolution reconstruction matrix and the single channel signal, results in loss of detail in the CS approach. ![Simulated effect of different reconstruction methods on the parameter maps. Columns 1 to 4 show parameter maps after reconstruction of subsampled source images using a zero‐filled FFT, CS with spatial regularization (2D), CS with spatial and temporal regularization (3D), and MC. Column 5 shows parameter maps after an FFT of the fully sampled data. Adding the temporal regularization in the 3D CS reconstruction improves the quality of the parameter maps (M~0~, T~1~, T~2~) compared with the zero‐filled FFT and the 2D CS reconstruction (see white arrows). The parameter maps resulting from an MC‐based reconstruction show more detail (see white circles), much smaller errors, and the errors have a more noise‐like structure. Note that all M~0~ maps are shown on a log‐scale due to the high dynamic range of the tissue proton densities](MRM-81-2551-g002){#mrm27594-fig-0002} This is not the case for the MC‐based reconstructions. 
The parameter maps resulting from the MC‐based approach are very close to the parameter maps obtained from the fully sampled scan, enabling visualization of the extraocular muscles and the orbital fat, indicated by the white circles. The error maps in Figure [2](#mrm27594-fig-0002){ref-type="fig"}, defined as the relative difference with the parameter maps from the fully sampled scan, given in percentages, confirm these findings. The error has a more noise‐like behavior for the MC‐based reconstruction compared with the CS reconstruction, and is much lower in the sensitive region of the eye coil. The error maps for T~1~ show larger percentage improvements compared with T~2~. These general trends were also true for different undersampling factors (see Supporting Information Figure [S1](#mrm27594-sup-0001){ref-type="supplementary-material"}, which is available online). 3.2. Experimental results {#mrm27594-sec-0016} ------------------------- Parameter maps obtained in an undersampled experiment are shown in Figure [3](#mrm27594-fig-0003){ref-type="fig"} for low spatial resolution images. The experimental results confirm the findings from the simulation study. The parameter maps obtained from the undersampled MRF scan with a 3D CS reconstruction show loss of detail compared with the parameter maps obtained with an MC‐based reconstruction. This is especially visible in the M~0~ maps. For the MC‐based reconstruction, the parameter maps are of similar quality to those obtained from the fully sampled scans, showing the feasibility of accelerating MRF in the eye using a Cartesian sampling scheme. It should be noted that the full k‐space data and the undersampled k‐space data originate from different scans, which is why residual motion artifacts are different between the resulting parameter maps. 
The parameter maps at high resolution in Figure [4](#mrm27594-fig-0004){ref-type="fig"} show more detail compared with the parameter maps at low resolution in Figure [3](#mrm27594-fig-0003){ref-type="fig"}, indicated by the white circle. For the high resolution case, however, the 3D CS reconstruction gives larger improvements compared with the low resolution case. ![The effect of different reconstruction methods on the parameter maps of experimental data at low resolution. Parameter maps obtained at low (1.0 × 1.0 × 5.0 mm^3^) resolution confirm the findings from the simulation (c.f., Figure [2](#mrm27594-fig-0002){ref-type="fig"}). The parameter maps obtained from a CS reconstruction show loss of detail. The quality of the maps obtained from the undersampled scan after an MC‐based reconstruction is comparable to the quality of the maps from a fully sampled scan. Inhomogeneities are visible in the vitreous body, which is very hard to accurately encode due to the low sensitivity of the MRF train for very long T~1~ values](MRM-81-2551-g003){#mrm27594-fig-0003} ![The effect of different reconstruction methods on the parameter maps of experimental data at high resolution. Parameter maps obtained at high (0.5 × 0.5 × 5.0 mm^3^) resolution for the same subject as in Figure [3](#mrm27594-fig-0003){ref-type="fig"} show more structural detail, indicated by the white circle. Note that Figure [3](#mrm27594-fig-0003){ref-type="fig"} and Figure [4](#mrm27594-fig-0004){ref-type="fig"} were different scans, in which motion artifacts are also different. Fully sampled data sets were not acquired for the high resolution case due to the prohibitively long scanning times required](MRM-81-2551-g004){#mrm27594-fig-0004} Parameter maps obtained in the 6 different volunteers for the low resolution scans are shown in Figure [5](#mrm27594-fig-0005){ref-type="fig"}. 
In all volunteers, some inhomogeneities are visible in the vitreous body, which is a region that is very sensitive to any type of motion or system imperfections because of the low sensitivity of the MRF sequence for very long T~1~ compared with short T~1~. This effect is illustrated in Figure [6](#mrm27594-fig-0006){ref-type="fig"}, where differences in short T~1~ values (500‐1000 ms) result in more distinguishable dictionary elements compared with the same absolute differences in long T~1~ values (3500‐4000 ms), especially in the first half of the MRF train. These inhomogeneities differ slightly between successive scans in the same volunteer, and are more visible in the scans of volunteer 3 (Figure [5](#mrm27594-fig-0005){ref-type="fig"}C) and volunteer 5 (Figure [5](#mrm27594-fig-0005){ref-type="fig"}E). Overall, the shortened scan time reduces the risk of motion artifacts, which is clearly visible in volunteers 5 and 6 (Figure [5](#mrm27594-fig-0005){ref-type="fig"}E,F). The high resolution parameter maps for the same volunteers are shown in Supporting Information Figure [S2](#mrm27594-sup-0001){ref-type="supplementary-material"}A‐F, with several regions of improved structural detail indicated by the white circles.

![The parameter maps in all healthy volunteers. Parameter maps, resulting from low resolution scans, obtained in 6 healthy volunteers are shown in (A‐F), respectively. In all volunteers, the parameter maps obtained from a CS reconstruction (3D CS) show loss of detail compared with the maps obtained from the undersampled scan after an MC‐based reconstruction, for which the quality is comparable to that of the fully sampled scan: values are given in Table [1](#mrm27594-tbl-0001){ref-type="table"}. In some volunteers the inhomogeneities in the vitreous body appear stronger than in others, which probably corresponds to cases with more motion.
This can also be seen in (E,F), where the quality of the maps is better for the shorter scans (MC) compared with the fully sampled ones](MRM-81-2551-g005){#mrm27594-fig-0005}

![Simulated dictionary elements for different relaxation times. A, The simulated normalized absolute signal intensities for tissues with a T~1~ of 500 ms (blue) are plotted together with the signal evolution for tissues with a T~1~ of 1000 ms (red). Solid lines show simulation results for T~2~ values of 50 ms, while dotted lines show results for T~2~ values of 150 ms. Comparison of the red and blue graphs shows that the difference in T~1~ is encoded mostly in the first half of the MRF sequence, whereas T~2~ is encoded over the entire train. Comparison of the solid and dotted graphs shows that the second half helps to further encode differences in T~2~. B, The same results are plotted for a T~1~ of 3500 ms (blue) and 4000 ms (red), showing much smaller differences between the 2 simulated signal evolutions for the same absolute difference in relaxation times. This indicates that a certain difference in T~1~ is more easily detected for lower T~1~ values with the current MRF train. Optimization of the MRF train might increase the encoding capability for large T~1~ values. For all simulations the B~1~ ^+^ fraction was set to 1](MRM-81-2551-g006){#mrm27594-fig-0006}

Average T~1~ and T~2~ values in the lens nucleus, the vitreous body, the orbital fat, and the extraocular muscles are reported in Table [1](#mrm27594-tbl-0001){ref-type="table"} for the different low resolution scans and reconstruction methods. The relaxation times obtained with a CS reconstruction are relatively close to those of the MC‐based reconstruction, but differences are observed in small anatomical structures such as the extraocular muscles and the eye lens.
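The reduced distinguishability of dictionary elements at long T~1~, discussed above, can be illustrated by the normalized inner product between two signal evolutions: the closer it is to 1, the harder the two atoms are to tell apart. The sketch below uses a toy inversion‐recovery‐like curve as a stand‐in for the actual MRF train (an assumption for illustration only, not the sequence simulated in the paper):

```python
import numpy as np

def atom(t1, n=500, tr=10.0):
    """Toy signal evolution: inversion-recovery-like magnitude curve.
    NOT the actual MRF flip angle train, just a stand-in for illustration."""
    t = np.arange(n) * tr  # ms
    s = np.abs(1.0 - 2.0 * np.exp(-t / t1))
    return s / np.linalg.norm(s)

# a correlation close to 1 means the two atoms are hard to distinguish
corr_short = np.dot(atom(500.0), atom(1000.0))   # short T1 pair, 500 ms apart
corr_long = np.dot(atom(3500.0), atom(4000.0))   # long T1 pair, 500 ms apart
```

With this toy model the long‐T~1~ pair is markedly more correlated than the short‐T~1~ pair, mirroring the trend shown in Figure 6.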
Differences between the relaxation times from the MC‐based reconstructions and the FFT of the fully sampled data can in part be explained by the fact that motion artifacts differ from scan to scan. Average relaxation times obtained from high resolution scans (not reported) follow the results for the low resolution scans. Reference T~1~ values at 7T reported in Richdale et al[15](#mrm27594-bib-0015){ref-type="ref"} are included in Table [1](#mrm27594-tbl-0001){ref-type="table"}; it should be noted that these reported values show large differences in relaxation times between different measurement techniques.

###### T~1~ and T~2~ values for different ocular structures (annotated in Figure [1](#mrm27594-fig-0001){ref-type="fig"}C), averaged within the structure and over 6 volunteers[a](#mrm27594-note-0002){ref-type="fn"}

|                     | CS 3D    | MC       | Full     | 7T Richdale et al. |
|---------------------|----------|----------|----------|--------------------|
| **T~1~ (ms)**       |          |          |          |                    |
| Lens nucleus        | 1403±178 | 1037±220 | 996±248  | 1520/1020          |
| Vitreous body       | 3632±375 | 3614±444 | 3599±334 | 5000/4250          |
| Orbital fat         | 93±23    | 100±29   | 95±26    | --                 |
| Extraocular muscle  | 731±342  | 1736±346 | 1545±191 | --                 |
| **T~2~ (ms)**       |          |          |          |                    |
| Lens nucleus        | 29±9     | 29±12    | 21±10    | --                 |
| Vitreous body       | 139±14   | 147±20   | 145±12   | --                 |
| Orbital fat         | 55±12    | 51±16    | 51±19    | --                 |
| Extraocular muscle  | 67±26    | 50±12    | 55±25    | --                 |

Values, given in milliseconds, were averaged in different regions of interest (lens nucleus, vitreous body, orbital fat, and extraocular muscle) from the different scans at low resolution, using different reconstruction methods, for each of the 6 healthy volunteers. The resulting values were used to determine mean ± SD values over all volunteers. The CS reconstruction produced different relaxation times in small anatomical regions such as the lens nucleus and the extraocular muscles. The relaxation times for the MC‐based reconstructions are close to the values for the fully sampled scans. Remaining differences can be explained by motion artifacts that differ from scan to scan.
Reference values at 7T (variable flip angle gradient echo/inversion recovery) from previous literature were reported in the last 2 columns, showing large differences in T~1~ values between different techniques. John Wiley & Sons, Ltd Parameter maps in a uveal melanoma patient are shown in Figure [7](#mrm27594-fig-0007){ref-type="fig"}, together with a T~2~‐weighted, fat‐suppressed, TSE image for anatomical reference. The tumor and the detached retina are characterized in the MRF maps by much lower T~1~, T~2~, and M~0~ values compared with the vitreous body, which allows for clear discrimination between tumor and healthy tissue. Dictionary matches and measured signals (both normalized) in the detached retina, the lens nucleus, the eye tumor, and the fat are also shown. The average values in regions of interest are reported in Table [2](#mrm27594-tbl-0002){ref-type="table"}. ![Parameter maps and matches in a uveal melanoma patient. A, T~2~‐weighted turbo spin‐echo (TSE) images with fat suppression (SPIR) were obtained and shown (zoomed‐in) for reference, with scan parameters: FOV = 40 × 60 mm^2^; in‐plane resolution 0.5 mm^2^; 2 mm slice thickness; 10 slices; TE/TR/TSE factor = 62 ms/3000 ms/12; FA = 110°; refocusing angle = 105°; WFS = 4.1 pixels; and scan time = 1:18 min. The eye tumor, indicated by the white cross, is visible as well as retinal detachment, pointed out by the white circle in the subretinal fluid. The high resolution parameter maps show much lower T~1~, T~2~, and M~0~ values in the tumor compared with the vitreous body, while the subretinal fluid can also be distinguished from the tumor by slightly higher T~1~, T~2~, and M~0~ values. 
B, Signal evolutions are shown in blue together with the matched dictionary element in red, for the retina (white circle), the lens nucleus, the eye tumor (white cross), and the fat](MRM-81-2551-g007){#mrm27594-fig-0007}

###### T~1~ and T~2~ values for different ocular structures in a uveal melanoma patient[a](#mrm27594-note-0003){ref-type="fn"}

|                               | T~1~ (ms) | T~2~ (ms) |
|-------------------------------|-----------|-----------|
| Lens nucleus                  | 916       | 24        |
| Vitreous body                 | 4218      | 209       |
| Orbital fat                   | 112       | 84        |
| Extraocular muscle            | 1282      | 56        |
| Eye tumor                     | 883       | 36        |
| Liquid behind detached retina | 1814      | 64        |

T~1~ and T~2~ values in milliseconds were averaged over drawn regions of interest. The eye tumor shows different relaxation times (both T~1~ and T~2~) compared with the vitreous body and with the liquid behind the detached retina, which allows for discrimination between tumor and healthy tissue. John Wiley & Sons, Ltd

Reconstruction times for the different reconstruction methods were averaged over 6 healthy volunteers and reported in Table [3](#mrm27594-tbl-0003){ref-type="table"}. The iterative nature of CS and MC increases the reconstruction times compared with the direct FFT reconstruction, but the MC‐based reconstruction is much more time‐efficient because it is performed entirely in k‐space, and uses only fast matrix vector multiplications.[42](#mrm27594-bib-0042){ref-type="ref"}

###### Reconstruction times[a](#mrm27594-note-0004){ref-type="fn"}

|                          | Computation time (s) |      |
|--------------------------|----------------------|------|
| CS 3D (40 SB iterations) | 584                  | 2734 |
| MC (100 iterations)      | 12                   | 44   |
| FFT                      | 0.1                  | 0.5  |

Mean values of reconstruction times in seconds calculated over 6 healthy volunteers for CS 3D, MC, and the direct FFT. The reconstruction times for both CS and MC take longer compared with the direct FFT due to the iterative process, but the MC‐based reconstruction is much more time‐efficient than the CS reconstruction because it is performed entirely in k‐space. John Wiley & Sons, Ltd

4.
DISCUSSION {#mrm27594-sec-0017} ============= The results in the simulation study clearly show the benefit of using the temporal dimension in the reconstruction of MRF data, as is performed using MC. The low rank property of the signal evolutions allows higher undersampling factors than in a CS reconstruction, in which the TV operator was used to enforce sparsity in the temporal as well as in the spatial dimensions. The experimental results confirmed these findings, and showed the feasibility of reducing the MRF scan time with the proposed MC‐based reconstruction from 7:02 min to 1:16 min. Using MC, high resolution parameter maps can be obtained, which was out of practical reach for full sampling due to the long scan time. The technique was also demonstrated in a uveal melanoma patient, in which relaxation times showed a clear difference between tumor and healthy tissue. The CS reconstruction resulted in smoothed parameter maps, which averages out motion artifacts, but also reduces the amount of visible detail. One reason why the CS reconstruction did not perform as well as the MC‐based reconstruction might be that the TV operator is not the optimal sparsifying transform for transforming the measured data along the temporal domain. Other sparsifying transforms, such as the Wavelet transform or even learned transforms or dictionaries,[56](#mrm27594-bib-0056){ref-type="ref"}, [57](#mrm27594-bib-0057){ref-type="ref"} might result in improvements of the parameter maps after a CS reconstruction. For the high resolution data, however, the 3D CS reconstruction seemed to perform better compared with the low resolution case, while the MC‐based reconstruction performed well in both the low and the high resolution cases. This suggests that the CS reconstruction is more dependent on the resolution of the acquired data than MC, which might be explained by the fact that MC, as implemented here, does not incorporate any spatial correlation into the reconstruction process. 
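The role of the temporal low‐rank projection in an MC‐style reconstruction can be sketched with a toy example. This is a simplification, not the exact algorithm used in the paper: the rank‐r temporal subspace is taken here directly from the ground‐truth data, whereas the paper estimates it from the fully sampled central k‐space lines, and the alternation below only mimics the interplay of low‐rank projection and data consistency:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy k-t data: 64 k-space lines x 200 time frames, truly rank 3
n_k, n_t, rank = 64, 200, 3
truth = rng.standard_normal((n_k, rank)) @ rng.standard_normal((rank, n_t))

# undersample: keep ~15% of the k-t samples (mask True = measured)
mask = rng.random((n_k, n_t)) < 0.15
measured = np.where(mask, truth, 0.0)

# rank-r temporal projector from an SVD of the (here: fully known) data;
# in the paper it is built from the fully sampled central k-space lines
_, _, vh = np.linalg.svd(truth, full_matrices=False)
proj = vh[:rank].conj().T @ vh[:rank]

x = measured.copy()
for _ in range(100):
    x = x @ proj           # project temporal signals onto the low-rank subspace
    x[mask] = truth[mask]  # re-enforce the measured k-t samples

err = np.linalg.norm(x - truth) / np.linalg.norm(truth)
```

Because the missing k‐t samples are constrained to lie in a low‐dimensional temporal subspace, the alternation recovers them from the measured samples, which is the intuition behind the matrix‐completion reconstruction.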
Furthermore, reducing the resolution might reduce the sparsity of the images in appropriate transform domains, while this is one of the key ingredients for CS to work. Images from undersampled scans were reconstructed with MC, in which the chosen rank of the projection matrix influences the error. Here, the number of incorporated singular values was determined empirically in a simulation study: 4 singular values resulted in the smallest error after 100 iterations of the algorithm. Other sampling patterns, flip angle trains or anatomies will likely require new optimization of the projection matrix. In the current acquisition, 15% or 12.5% of the data was acquired with 6 or 8 fully sampled central k‐space lines for each image frame. Further tuning of the sampling pattern might improve the accuracy of the reconstructions or allow even shorter scan times. One should keep in mind, however, that the sampled k‐t lines are used to reconstruct the missing k‐t lines. Because higher undersampling factors result in shorter scan times, this reduces the risk of motion‐corrupted k‐space lines, but if there is still significant motion, this affects a larger percent of the acquired data. Therefore, care should be taken to find a balance between the scan time and the robustness of the reconstruction algorithm to motion. In this work, the projection matrix was constructed from the central k‐t lines of the measurement data. In Doneva et al,[42](#mrm27594-bib-0042){ref-type="ref"} it was shown that this type of projection matrix results in a more accurate reconstruction compared with a projection matrix constructed from randomly selected k‐t lines due to the lower SNR in the latter case. 
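The empirical rank selection described above amounts to inspecting the singular‐value decay of the calibration data. A minimal sketch follows, with `calib` a synthetic stand‐in for the central k‐t lines (four dominant components plus noise; all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic calibration matrix: 6 central k-space lines x 200 MRF frames,
# built from 4 dominant temporal components plus measurement noise
u, _ = np.linalg.qr(rng.standard_normal((6, 4)))
v, _ = np.linalg.qr(rng.standard_normal((200, 4)))
calib = (u * np.array([10.0, 5.0, 2.0, 1.0])) @ v.T \
        + 0.01 * rng.standard_normal((6, 200))

# the normalized singular values reveal the effective rank of the signal
s = np.linalg.svd(calib, compute_uv=False)
s_norm = s / s[0]
rank = int(np.sum(s_norm > 0.05))  # simple relative-threshold heuristic
```

In practice (as in the simulation study described above) the rank would be chosen by reconstructing with several candidate ranks and keeping the one with the smallest error, rather than by a fixed threshold.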
Other works have used the simulated MRF dictionary as calibration data, which would eliminate the need to fully sample the centers of k‐space.[41](#mrm27594-bib-0041){ref-type="ref"} Such an approach will probably show a steeper decay in normalized singular values due to the absence of noise and motion in the simulations (see Supporting Information Figure [S3](#mrm27594-sup-0001){ref-type="supplementary-material"}). The central k‐space based projection matrix, however, results in a smaller reconstruction error, indicating that the central k‐space approximates the rank of the measurement data better. Further work should investigate whether this approach could be advantageous in terms of mitigating motion artifacts. As an alternative approach to the method used in our work, in which a low‐rank constraint is added as a penalty term to the cost function, the low‐rank property of the unknown image series can be incorporated directly in the data fidelity term, transforming the minimization problem into a linear one, which may be beneficial in terms of computational costs.[41](#mrm27594-bib-0041){ref-type="ref"} It would be interesting to compare the accuracy of the 2 methods in future work. Although this study has shown the feasibility of using MR fingerprinting to characterize the relaxation times of different anatomical structures in the eye, eye motion can still be a limiting factor. The parameter maps presented in the results section show inhomogeneities in the vitreous body, which can be a result of different types of motion in the eye (see Supporting Information Figure [S4](#mrm27594-sup-0001){ref-type="supplementary-material"}). The presence of motion, in combination with the long T~1~ of the vitreous body and the low sensitivity of the MRF train to these long values, makes it challenging to accurately map the relaxation times in the vitreous body itself, as was shown in Figure [6](#mrm27594-fig-0006){ref-type="fig"}.
Adopting a longer MRF train, as well as pattern optimization of the MRF train, might help to increase the encoding capability, but a longer time between the cued‐blinks will strongly increase the chance of blink‐induced artifacts. However, one should recognize from a clinical point of view that for almost all ocular conditions the vitreous body is not affected and, therefore, an accurate quantification of its T~1~ is clinically not relevant. Outer volume suppression pulses, applied immediately before the inversion pulse or during 0 flip angle phases in the MRF train, might offer a way to reduce the flow of fresh magnetization (caused by motion) coming from slices above and below the imaging slice or from the left and the right of the imaging field of view, during repetitions of the flip angle train. However, such an approach and its effect on the quality of the parameter maps have to be investigated further. The parameter maps corresponding to patient data showed a very large difference between tumor tissue and healthy vitreous body, suggesting that fully homogeneous regions of T~1~ in the vitreous body are not necessary for disease quantification and classification. Future work should investigate the extension of the current single slice approach to a 3D approach, such that the entire eye can be efficiently quantified from 1 scan. The measured relaxation times differ between volunteers, potentially explained by anatomical or other volunteer‐specific differences. Small differences in relaxation times were observed for different scans in the same volunteer, caused by motion artifacts that change from scan to scan, but overall they are consistent within each volunteer, which is important for the use of this technique in practice.
Considering the large deviations in measured relaxation times between different studies, it will be interesting to compare the MRF technique to standard T~1~ and T~2~ mapping techniques on a patient‐specific basis, and in this way investigate the origin of deviations from mean values as well as compare the robustness to motion for the different techniques. It should be noted, however, that in Ma et al,[58](#mrm27594-bib-0058){ref-type="ref"} it was already observed that MRF values do not always agree perfectly with reference values from other techniques, and potential reasons for this need to be investigated. Parameter maps in the current study were not corrected for slice profile effects, but all experiments were performed using an RF pulse with a very high time‐bandwidth product, minimizing the effects as demonstrated in Ma et al.[58](#mrm27594-bib-0058){ref-type="ref"} The flip angle map, which is used as an input in the matching process, was produced with DREAM, in which the B~1~ ^+^ encoding slice thickness was set to be double the acquisition slice thickness to eliminate the slice profile effect.[51](#mrm27594-bib-0051){ref-type="ref"} Values for the optic nerve were not reported in this study because the optic nerve was not visible in all scans due to small differences in planning and anatomy, and the slice thickness of 5 mm makes the measured values in the optic nerve very sensitive to partial volume effects. These partial volume effects also complicate quantification of heterogeneous tumors. In particular, tumor relaxation values could become inaccurate due to averaging with the strong signal coming from the surrounding vitreous body. Planning the imaging slice through the tumor as well as through the center of the vitreous body, such that the imaging plane is perpendicular to the tangent along the retina, would help to reduce these effects. 
One limitation of the current study is the rather high slice thickness used (which is limited by the gradient strengths). With small changes in the sequence such as using a slightly longer echo time, acquisition and reconstruction of a 2‐mm‐thick slice is feasible (see Supporting Information Figure [S5](#mrm27594-sup-0001){ref-type="supplementary-material"}). The in‐plane resolution of 0.5 mm is satisfactory for tumor quantification and classification, as well as visualizing small structures such as the sclera and the ciliary body. The results in this study show the potential to perform ocular MRF in tumor patients. To adopt ocular MRF in clinics, the technique could be further tailored to quantify specifically the relevant T~1~ and T~2~ values of tumors. Extensions to multislice or 3D acquisitions could be developed such that the whole tumor volume can be covered and quantified. Further studies should investigate which clinical applications will benefit from ocular MRF and in that way explore the clinical relevance of the technique. In conclusion, the high undersampling factors used for this Cartesian, nonparallel imaging‐based approach shorten scan time and in this way reduce the risk of motion artifacts, which is most relevant for elderly patients, who typically experience difficulties focusing on a fixation target. Supporting information ====================== ###### **FIGURE S1** The effect of the undersampling factor on the performance of different reconstruction methods. Undersampled data sets were obtained by subsampling a fully sampled data set, while fixing the number of central k‐space lines to six for all undersampling factors. For larger undersampling factors, MC outperforms 2D and 3D CS. For undersampling factors smaller than three, MC has a slightly higher error compared to 3D CS. Overall, the error appears to be less affected by the undersampling factor for MC compared to the other reconstruction methods. 
Error measures are defined according to Equation 5 **FIGURE S2** The parameter maps in all healthy volunteers for high resolution scans. Parameter maps obtained in six healthy volunteers are shown in (a)‐(f), respectively. The CS 3D reconstruction performs better for the high resolution scans than for the low resolution scans, but the parameter maps still show loss of detail compared to the maps obtained from the undersampled scan after an MC‐based reconstruction, with examples indicated by the white circles. Fully sampled reference scans were not obtained due to the long scan time required. A zoomed‐in version of the MC result in volunteer 1 is shown in (g), and repeated in (h) with a different color scale **FIGURE S3** Comparison of 2 different projection matrices. (a) The normalized singular value vector of the simulated MRF dictionary shows a steeper decay compared to the normalized singular vector of the central k‐space data. (b) The reconstruction error (defined as in Equation 5) as a function of the n most significant left singular values, is smaller when using the central k‐space as calibration data. A rank 3‐4 projection matrix results in the smallest reconstruction error when using the central k‐space data **FIGURE S4** The effect of motion on the parameter maps. (a) Motion was simulated by randomly replacing 1 of the 12 acquired k‐space lines in each MRF frame by (type 1) its phase‐modulated version with a random phase shift between 0 and 2π, mimicking in‐plane rigid body motion and (type 2) white gaussian noise (matching the maximum intensity of the replaced k‐space line), representing the worst case scenario of a completely corrupted signal. For motion type 1 larger differences are visible in the vitreous body. Motion type 2 results in noise break‐through in the parameter maps. 
For both types of motion, less than 6% change in T~1~ was observed in the vitreous body, while the T~2~ of the eye lens was changed by more than 20%, underlining the nonlinear effect of motion on the parameter maps. (b) The singular values of the calibration data show a less steep decay when k‐space lines are corrupted by motion **FIGURE S5** Parameter maps obtained from a thinner slice. By increasing the echo time from 3.5 ms to 4.6 ms, a slice of 2 mm can be acquired, spatial resolution 1×1×2 mm^3^. With this slice thickness the resulting parameter maps are less susceptible to partial volume effects, but slightly more noise is present in the maps due to the reduced SNR in the MRF images ###### Click here for additional data file. The authors thank Mariya Doneva for helpful discussions on reconstruction, and Thomas O'Reilly and Luc van Vught for useful insights during data acquisition.
```javascript
var config = {
    type: Phaser.AUTO,
    parent: 'phaser-example',
    width: 800,
    height: 600,
    scene: {
        create: create
    }
};

var game = new Phaser.Game(config);

function create() {
    var graphics = this.add.graphics();

    drawStar(graphics, 100, 300, 4, 50, 50 / 2, 0xffff00, 0xff0000);
    drawStar(graphics, 400, 300, 5, 100, 100 / 2, 0xffff00, 0xff0000);
    drawStar(graphics, 700, 300, 6, 50, 50 / 2, 0xffff00, 0xff0000);
}

function drawStar(graphics, cx, cy, spikes, outerRadius, innerRadius, color, lineColor) {
    var rot = Math.PI / 2 * 3;
    var x = cx;
    var y = cy;
    var step = Math.PI / spikes;

    graphics.lineStyle(4, lineColor, 1);
    graphics.fillStyle(color, 1);
    graphics.beginPath();
    graphics.moveTo(cx, cy - outerRadius);

    // alternate between the outer and inner radius to trace the star outline
    for (var i = 0; i < spikes; i++) {
        x = cx + Math.cos(rot) * outerRadius;
        y = cy + Math.sin(rot) * outerRadius;
        graphics.lineTo(x, y);
        rot += step;

        x = cx + Math.cos(rot) * innerRadius;
        y = cy + Math.sin(rot) * innerRadius;
        graphics.lineTo(x, y);
        rot += step;
    }

    graphics.lineTo(cx, cy - outerRadius);
    graphics.closePath();
    graphics.fillPath();
    graphics.strokePath();
}
```
My Hero Academia Season 2, Episode 18

After last week's episode, I was really curious what they had in store for us this week: how the heroes would come back down to earth after such a traumatic experience. And good thing for us, this episode is rightly named “The Aftermath of Hero Killer: Stain”.

My Hero Academia- Funimation

We open with Izuku, Iida, and Todoroki all in the hospital. They are all recovering from their tremendous fight, but also reflecting on how lucky they are to still be alive. The door opens and we see Gran Torino and Pro Hero Manual. The first thing Gran Torino does, of course, is scold Midoriya. But before Gran Torino goes full instructor on him, he tells the boys that they have a visitor.

My Hero Academia- Funimation

A tall figure turns the corner wearing a professional business suit. It's Hosu's chief of police, Kenji Tsuragamae, who also happens to look like a dog (just go along with it, I guess?). Kenji tells the boys that Stain is in custody and is being treated for several broken bones and serious burns. He also reminds them that what they have done was not okay on paper: uncertified heroes using their Quirks against their instructors' orders is highly frowned upon. But Todoroki is not taking it. He tells the chief that if Iida hadn't stepped in, then Pro Hero Native would have been killed, and both of them would have been killed without Izuku's help. But Gran Torino tells Todoroki to hear the chief out. Kenji tells them that the punishment would only happen if this was made public, and the people would applaud their efforts anyway. But if the police kept it quiet, no one gets punished, but the boys don't get the praise they deserve. Instead, Endeavor will get the praise from the masses. It would also explain Stain's burn scars. So they choose not to be celebrated as heroes and apologize anyway. But Kenji tells them he respects what they did, and he thanks them for protecting the peace.
So it has hit the news that Endeavor has stopped hero killer Stain and the Nomus from destroying Hosu City. It's all anyone is talking about. Meanwhile, we get a look at how everyone else from Class 1-A is doing in their internship programs.

First we look at Bakugo, who is having a less than stellar time. The first thing he wants to do is go knock some heads around, but his mentor is not allowing him and says it will be business as usual. Hopefully that could help Bakugo control his temper. Kirishima finds out the reason why Midoriya sent him his location: apparently he also reported the incident last night. Go Kirishima! Momo debuted in a commercial with her mentor, and it seems obvious modelling is not what she wants to be doing. But her mentor is letting her go on patrol like she has wanted since the start of their training together.

My Hero Academia- Funimation

And finally there is Uraraka, who is on the phone with Midoriya. She tells him how glad she is that they are all safe. Midoriya of course apologizes for not contacting her sooner, but she understands. In the midst of the conversation, Uraraka's mentor Gunhead reminds her that they are going to start their basic training. She then says bye to Midoriya, and Gunhead asks in a very cute way, “Your boyfriend?”. She immediately dismisses it. When Midoriya hangs up, he gets all worked up that he talked to a girl on the phone. This scene was my favorite from this episode; it had me busting up!

We get back to the guys in the hospital, and Iida comes out and tells the two that he may have long-term damage in his hand. He reflects on his actions from that night and regrets them greatly; he shouldn't have acted so swiftly and carelessly. But Midoriya doesn't let Iida beat himself up too much, and agrees that he and Iida should get stronger together.

My Hero Academia- Funimation

We cut to U.A. with All Might in the staff room. He gets a phone call from Gran Torino.
He tells All Might that he has had his teaching licence revoked for six months because of Midoriya's actions, but that there was no way of avoiding it and he has come to terms with it. Still, All Might is very ashamed of himself for letting down his former instructor. But this isn't the reason Gran Torino called: he really wants to talk about Stain. He says that in the few minutes he spent with him, it had him trembling, because of how intimidating Stain was and how obsessed he was with what he thinks a hero should be and what he will do to correct our society. Because this has hit so many news stations, Stain's ideology and opinions will be put on blast. People will become influenced by Stain's beliefs and become a plague. But All Might doesn't believe it will be a problem, because they will probably show up sporadically and be taken out one by one. But this is where the League of Villains comes in. If they all combine their hatred and Shigaraki gives them an outlet to express and deal with their evil intentions, it will become a serious problem. Gran Torino then reminds All Might that he must tell Midoriya properly what is concerning him and One for All. Which I have no idea what that is all about. Apparently the quirk is “on the move”? I'm interested in what that might mean.

This episode was mostly a lot of dialogue and context, but it was needed after such a shift in the story. It was refreshing getting some insight on how everyone else is doing too. I'm hoping next time they elaborate on what is concerning All Might and what is happening with his quirk. Only time will tell.
This invention relates to a metal-cutting milling tool. Such tools are known that comprise a body rotatable about a central geometric axis, which body has a peripheral envelope surface extending between opposite end surfaces. In the envelope surface, recesses are provided which open outwards, each recess being defined by a front wall, a rear wall and a bottom wall and having the purpose of receiving a machining element (e.g., a cassette which carries a cutting insert) as well as at least one clamping wedge arranged in the recess for fixing the machining element in place. The clamping wedge can be tightened by means of a clamping screw which enters a threaded hole formed in the bottom wall of the recess. The rear wall of the recess has first serrations arranged to co-operate with second serrations disposed on a rear side of the machining element, while the front wall is smooth in order to cooperate with a similar smooth front surface on the clamping wedge. A contact surface on the clamping wedge and a front contact surface on the machining element are both smooth in order to allow a substantially radial displacement of the clamping wedge in relation to the machining element during the clamping thereof.
The Effects of Event Rate on a Cognitive Vigilance Task. The present experiment sought to examine the effects of event rate on a cognitive vigilance task. Vigilance, or the ability to sustain attention, is an integral component of human factors research. Vigilance task difficulty has previously been manipulated through increasing event rate. However, most research in this paradigm has utilized a sensory-based task, whereas little work has focused on these effects in relation to a cognitive-based task. In total, 84 participants completed a cognitive vigilance task that contained either 24 events per minute (low event rate condition) or 40 events per minute (high event rate condition). Performance was measured through the proportion of hits, false alarms, mean response time, and signal detection analyses (i.e., sensitivity and response bias). Additionally, measures of perceived workload and stress were collected. The results indicated that event rate significantly affected performance, such that participants who completed the low event rate task achieved significantly better performance in terms of correct detections and false alarms. Furthermore, the cognitive vigil utilized in the present study produced performance decrements comparable to traditional sensory vigilance tasks. Event rate affects cognitive vigilance tasks in a similar manner as traditional sensory vigilance tasks, such that a direct relation between performance and level of event rate was established. Cognitive researchers wishing to manipulate task difficulty in their experiments may use event rate presentation as one avenue to achieve this result.
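For reference, the signal detection measures mentioned above (sensitivity d′ and response bias c) are conventionally computed from hit and false-alarm proportions as d′ = z(H) − z(F) and c = −(z(H) + z(F))/2. A short sketch with hypothetical counts follows (the log-linear correction used here is one common convention, not necessarily the one used in this study):

```python
from statistics import NormalDist

def dprime_and_c(hits, misses, fas, crs):
    """Sensitivity (d') and response bias (c) from raw counts, applying a
    log-linear correction so proportions never reach exactly 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
    z = NormalDist().inv_cdf                   # inverse standard-normal CDF
    return z(h) - z(f), -(z(h) + z(f)) / 2.0

# hypothetical counts from one participant
d, c = dprime_and_c(hits=45, misses=15, fas=10, crs=50)
```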
Located on Kefalonia Island in Greece, this spectacular cave was lost for centuries until it was rediscovered in 1951 by Giannis Petrocheilos. Take a look at this beautiful cave system and the island it is part of. The famous Myrtos Beach is also on this island. In Greek mythology, nymphs were believed to live in these caves.
Vale de Lua – a Moon Surface on Earth. The valley floor is covered with rock formations and intricate labyrinths created by nature. In ancient times the area held deposits of quartz, and over the years the rushing San Miguel river has carved countless passages through it. Now quartz rocks hang over the pools, and holes of different shapes recall those seen on the moon. Because light refracts differently through the water, it appears dark blue in some places and clear and transparent in others. Dark brown, almost black, and sometimes bluish-gray rocks vary in height and shape. Such a marvel shows that the forces of nature are capable of creating the most unusual landscapes. The unusual relief of Brazil's Vale de Lua also owes something to sand: layer after layer, it was carried in by the river, settled on the rocks along the banks and formed numerous mounds of unusual shape. Look a little closer and you will notice that in places the quartz rock has thinned to the point that it is no thicker than a sheet of paper. Waterfalls complete the magnificent landscape of small lakes.
United States Court of Appeals For the Eighth Circuit ___________________________ No. 12-3842 ___________________________ Barbara Hager, Plaintiff - Appellee v. Arkansas Department of Health; Namvar Zohoori, individually and in his official capacity, Defendants - Appellants ____________ Appeal from United States District Court for the Eastern District of Arkansas - Little Rock ____________ Submitted: September 24, 2013 Filed: November 14, 2013 ____________ Before LOKEN, COLLOTON, and BENTON, Circuit Judges. ____________ BENTON, Circuit Judge. Barbara Hager was fired from the Arkansas Department of Health by her supervisor, Dr. Namvar Zohoori. Hager sued Dr. Zohoori and the Department for statutory and constitutional violations. The district court granted, in part, their motion to dismiss. They appeal. Having jurisdiction under 28 U.S.C. § 1291 over Dr. Zohoori’s appeal, this court reverses and remands. I. Hager claims that in May 2011, her branch chief and supervisor, Dr. Zohoori, instructed her to cancel a doctor’s appointment (necessary, she says, to prevent cataracts) in order to discuss a report. When she refused, she alleges Dr. Zohoori became irritated and falsely claimed she was insubordinate and disrespectful. Four days later, he terminated her without explanation. Hager sued Dr. Zohoori, in his individual and official capacities, and the Department alleging violations of Title VII of the Civil Rights Act of 1964, the Equal Protection and Due Process Clauses of the Constitution (§ 1983 claim), the Age Discrimination in Employment Act, the Rehabilitation Act, and the Family and Medical Leave Act (FMLA). Dr. Zohoori and the Department moved to dismiss for failure to state a claim and sovereign immunity. The district court denied their motion in part, allowing three claims against Dr. 
Zohoori in his individual capacity (§ 1983 gender discrimination, FMLA “interference,” and FMLA “retaliation”) and two claims against the Department (Title VII and Rehabilitation Act). They appeal. II. Hager objects to this court’s jurisdiction over Dr. Zohoori’s appeal, arguing it turns on issues of factual sufficiency. A denial of qualified immunity is an appealable “final decision” only “to the extent it turns on an issue of law.” Mitchell v. Forsyth, 472 U.S. 511, 530 (1985). Hager relies on cases reviewing a denial of summary judgment based on qualified immunity. See Johnson v. Jones, 515 U.S. 304, 313-14 (1995) (holding that where a district court’s summary judgment order on qualified immunity turns on the issue of evidence sufficiency—“which facts a party may, or -2- may not, be able to prove at trial”—the order is not appealable); Powell v. Johnson, 405 F.3d 652, 654-55 (8th Cir. 2005). In Ashcroft v. Iqbal, the Supreme Court determined the jurisdiction of a court of appeals in a case like Hager’s—denial of a motion to dismiss based on qualified immunity: As a general matter, the collateral-order doctrine may have expanded beyond the limits dictated by its internal logic and the strict application of the criteria set out in Cohen. But the applicability of the doctrine in the context of qualified-immunity claims is well established; and this Court has been careful to say that a district court’s order rejecting qualified immunity at the motion-to-dismiss stage of a proceeding is a “final decision” within the meaning of § 1291. Behrens, 516 U.S., at 307, 116 S. Ct. 834. Applying these principles, we conclude that the Court of Appeals had jurisdiction to hear petitioners’ appeal. The District Court’s order denying petitioners’ motion to dismiss turned on an issue of law and rejected the defense of qualified immunity. It was therefore a final decision “subject to immediate appeal.” Ibid. 
Respondent says that “a qualified immunity appeal based solely on the complaint’s failure to state a claim, and not on the ultimate issues relevant to the qualified immunity defense itself, is not a proper subject of interlocutory jurisdiction.” Brief for Respondent Iqbal 15 (hereinafter Iqbal Brief). In other words, respondent contends the Court of Appeals had jurisdiction to determine whether his complaint avers a clearly established constitutional violation but that it lacked jurisdiction to pass on the sufficiency of his pleadings. Our opinions, however, make clear that appellate jurisdiction is not so strictly confined. Iqbal, 556 U.S. 662, 672-73 (2009). -3- Here, Dr. Zohoori challenges the sufficiency of Hager’s pleadings to state § 1983, FMLA “interference,” and FMLA “retaliation” claims. This is an issue of law over which this court has jurisdiction. See id. at 672-74; Bradford v. Huckabee, 394 F.3d 1012, 1015 (8th Cir. 2005). See also Rondigo, L.L.C. v. Township of Richmond, 641 F.3d 673, 679 (6th Cir. 2011). III. This court reviews de novo the denial of a motion to dismiss on the basis of qualified immunity. Bradford, 394 F.3d at 1015. A complaint must “state a claim to relief that is plausible on its face.” Bell Atlantic Corp. v. Twombly, 550 U.S. 544, 570 (2007). Under Federal Rule of Civil Procedure 12(b)(6), the factual allegations in the complaint are accepted as true and viewed most favorably to the plaintiff. Gross v. Weber, 186 F.3d 1089, 1090 (8th Cir. 1999). Courts must not presume the truth of legal conclusions couched as factual allegations. Papasan v. Allain, 478 U.S. 265, 286 (1986). Courts should dismiss complaints based on “labels and conclusions, and a formulaic recitation of the elements of a cause of action.” Twombly, 550 U.S. at 555. 
Under the doctrine of qualified immunity, a court must dismiss a complaint against a government official in his individual capacity that fails to state a claim for violation of “clearly established statutory or constitutional rights of which a reasonable person would have known.” Harlow v. Fitzgerald, 457 U.S. 800, 818 (1982). See also Iqbal, 556 U.S. at 685; Mitchell, 472 U.S. at 526 (“Unless the plaintiff’s allegations state a claim of violation of clearly established law, a defendant pleading qualified immunity is entitled to dismissal before the commencement of discovery.”). A court considers whether the plaintiff has stated a plausible claim for violation of a constitutional or statutory right and whether the right was clearly established at the time of the alleged infraction. Powell, 405 F.3d at 654-55. See Pearson v. Callahan, 555 U.S. 223, 236 (2009) (“[D]istrict courts and the courts of -4- appeals should be permitted to exercise their sound discretion in deciding which of the two prongs of the qualified immunity analysis should be addressed first in light of the circumstances in the particular case at hand.”). A. The § 1983 claim against Dr. Zohoori individually (Count I) alleges that Hager was “a victim of gender discrimination . . . and has been denied her right of equal protection of the law and due process of the law.” Specifically, she contends she “was discharged under circumstances summarily [sic] situated nondisabled males . . . were not.” “[T]he Equal Protection Clause requires that the government treat such similarly situated persons alike.” Keevan v. Smith, 100 F.3d 644, 648 (8th Cir. 1996), citing City of Cleburne v. Cleburne Living Ctr., Inc., 473 U.S. 432, 439 (1985); Klinger v. Department of Corrs., 31 F.3d 727, 731 (8th Cir. 1994). Absent evidence of direct discrimination, courts apply the McDonnell Douglas burden- shifting analysis to claims of employment discrimination under the Equal Protection Clause. Lockridge v. Board of Trs. 
of Univ. of Arkansas, 315 F.3d 1005, 1010 (8th Cir. 2003) (en banc). Under McDonnell Douglas, a prima facie case of discrimination requires that a plaintiff prove: “(1) membership in a protected group; (2) qualification for the job in question; (3) an adverse employment action; and (4) circumstances that support an inference of discrimination.” Swierkiewicz v. Sorema N. A., 534 U.S. 506, 510 (2002), citing McDonnell Douglas Corp. v. Green, 411 U.S. 792, 801 (1973). Dr. Zohoori argues that Hager does not state a § 1983 claim for gender discrimination because her allegation—that she “was discharged under circumstances summarily [sic] situated nondisabled males, younger people, or those that did not require leave or accommodation were not”—is a legal conclusion. Hager contends -5- her “similarly situated” allegation is sufficient because McDonnell Douglas is “an evidentiary standard, not a pleading requirement.” Swierkiewicz, 534 U.S. at 510; Ring v. First Interstate Mortg., 984 F.2d 924, 926 (8th Cir. 1993). Under Swierkiewicz, a plaintiff need not plead facts establishing a prima facie case of discrimination under McDonnell Douglas in order to defeat a motion to dismiss. Swierkiewicz, 534 U.S. at 510-11. The complaint “must contain only ‘a short and plain statement of the claim showing the pleader is entitled to relief.’” Id. at 508. “Such a statement must simply ‘give the defendant fair notice of what the plaintiff’s claim is and the grounds upon which it rests.’” Id. at 512, citing Conley v. Gibson, 355 U.S. 41, 47 (1957). In Twombly, the Supreme Court stated that Swierkiewicz did not change the law of pleading. Twombly, 550 U.S. at 569. Rather, courts need “not require heightened fact pleading of specifics, but only enough facts to state a claim to relief that is plausible on its face.” Id. at 570. “[L]egal conclusions can provide the framework of a complaint” but “must be supported by factual allegations,” Iqbal, 556 U.S. 
at 679, that “raise a right to relief above the speculative level.” Twombly, 550 U.S. at 555. Thus, this court applies “the ordinary rules for assessing the sufficiency of a complaint,” Swierkiewicz, 534 U.S. at 511, to consider whether Hager states a § 1983 claim for gender discrimination. See Twombly, 550 U.S. at 570. Hager relies primarily on Swierkiewicz. However, her complaint has far fewer factual allegations than the complaint there. In Swierkiewicz, the complaint for age and nationality discrimination alleged: the plaintiff was demoted and replaced by a younger employee of the employer’s nationality; the replacement was inexperienced; in promoting the younger, inexperienced employee, the employer wanted to “energize” the department; the employer excluded and isolated plaintiff from business decisions and meetings; plaintiff sent a memo outlining his grievances and tried to -6- meet with the employer to discuss his discontent; and plaintiff was fired. Swierkiewicz, 534 U.S. at 508-09. Hager makes only two conclusory allegations of gender discrimination: (1) she “is a victim of gender discrimination;” and (2) she “was discharged under circumstances summarily [sic] situated nondisabled males . . . were not.” She does not allege any gender-related comments or conduct before her termination. See Rondigo, 641 F.3d at 682 (granting qualified immunity in part because the complaint contained no allegations of gender-based discriminatory actions). She also does not allege facts showing that similarly situated employees were treated differently. See Coleman v. Maryland Court of Appeals, 626 F.3d 187, 190-91 (4th Cir. 2010) (plaintiff’s conclusory allegation that he “was treated differently as a result of his race than whites”—even where plaintiff identified an alleged comparator—was insufficient to sustain a Title VII claim because no factual allegations plausibly suggested the comparator was similarly situated). 
See also Keevan, 100 F.3d at 648 (“To establish a gender-based claim under the Equal Protection Clause, the appellants must, as a threshold matter, demonstrate that they have been treated differently by a state actor than others who are similarly situated simply because appellants belong to a particular protected class.”). In sum, Hager does not state a § 1983 claim for gender discrimination. Hager’s allegation that she is the victim of gender discrimination fails to give Dr. Zohoori fair notice of the claim and the grounds upon which it rests. See Swierkiewicz, 534 U.S. at 512. Hager’s conclusory assertion that she was discharged under circumstances similarly situated men were not imports legal language couched as a factual allegation and fails to raise a right to relief above the speculative level. See Twombly, 550 U.S. at 555. The district court erred in denying Dr. Zohoori’s motion to dismiss the § 1983 claim. -7- B. Hager alleges a claim for “interfering with exercise of Plaintiff’s rights under the FMLA.” Under the categorization in Pulczinski v. Trinity Structural Towers, Inc., 691 F.3d 996 (8th Cir. 2012), Hager’s “interference” claim is an entitlement claim. Pulczinski, 691 F.3d at 1005-06. “The FMLA entitles an employee to twelve workweeks of leave during any twelve-month period if he or she has a ‘serious health condition that makes the employee unable to perform the functions of the position of such employee.’” Sisk v. Picture People, Inc., 669 F.3d 896, 899 (8th Cir. 2012), quoting Wierman v. Casey’s Gen. Stores, 638 F.3d 984, 999 (8th Cir. 2011), quoting 29 U.S.C. § 2612(a)(1)(D). An FMLA entitlement claim arises when an employer denies or interferes with an employee’s substantive FMLA rights. Scobey v. Nucor Steel-Arkansas, 580 F.3d 781, 785 (8th Cir. 2009). An employee seeking FMLA leave must give the employer notice of the need for leave and indicate when she anticipates returning to work. Id. at 785-86. See also Rynders v. 
Williams, 650 F.3d 1188, 1196-97 (8th Cir. 2011) (plaintiff must prove she gave timely notice to defendant himself). Although the notice need not specifically invoke the FMLA, an employee “must provide information to suggest that [her] health condition could be serious.” Scobey, 580 F.3d at 786. When the leave is foreseeable, the employee must give at least thirty days notice. 29 C.F.R. § 825.302. When the leave is not foreseeable, “an employee must provide notice to the employer as soon as practicable under the facts and circumstances of the particular case.” 29 C.F.R. § 825.303. Hager alleges that she “saw a physician regularly for her cataracts,” but “[o]n May 13, 2011, [Dr. Zohoori] instructed her to cancel the doctor’s appointment so she and he could discuss a report.” She also avers that she explained “the reason she needed to go to the doctor,” that “she could not cancel the appointment,” and why she could not cancel. These allegations do not state an FMLA entitlement claim. While -8- Hager alleges that she provided information suggesting a serious health condition, she does not allege that she provided timely notice. Hager’s pleadings at best suggest Dr. Zohoori was aware of her leave request immediately prior to the appointment. They do not assert that she provided notice within thirty days or “as soon as practicable under the circumstances.” Nor do they assert that she indicated when she would return. See generally Bosley v. Cargill Meat Solutions Corp., 705 F.3d 777, 780 (8th Cir. 2013) (there is a “rigorous notice standard for employees seeking to use FMLA leave for absences”). The district court erred in denying Dr. Zohoori’s motion to dismiss the FMLA entitlement claim. C. Hager also alleges a claim for “retaliating against her.” Under the categorization in Pulczinski, Hager’s “retaliation” claim is a discrimination claim. Pulczinski, 691 F.3d at 1006. 
In a discrimination claim, “the employee alleges that the employer discriminated against her for exercising her FMLA rights.” Sisk, 669 F.3d at 899, quoting Wierman, 638 F.3d at 999. Absent direct evidence, an FMLA discrimination claim is analyzed under the McDonnell Douglas burden-shifting framework. Sisk, 669 F.3d at 899. The plaintiff must “show that she exercised rights afforded by the Act, that she suffered an adverse employment action, and that there was a causal connection between her exercise of rights and the adverse employment action.” Phillips v. Mathews, 547 F.3d 905, 912 (8th Cir. 2008), quoting Smith v. Allen Health Sys., Inc., 302 F.3d 827, 832 (8th Cir. 2002). This is an evidentiary, not a pleading, standard. Swierkiewicz, 534 U.S. at 510. Hager alleges that Dr. Zohoori discriminated against her—firing her—because she exercised her FMLA rights—tried to take leave for a doctor’s appointment, which was “necessary to insure that [her] condition did not develop into a serious health -9- condition, cataracts.” If Hager had properly alleged notice, these allegations would be sufficient. See Wehrley v. American Family Mut. Ins. Co., 513 Fed. Appx. 733, 742 (10th Cir. 2013) (“Three other circuits have concluded that notifying an employer of the intent to take FMLA leave is protected activity. . . . We are persuaded to follow these circuits.”), citing Pereda v. Brookdale Senior Living Communities, Inc., 666 F.3d 1269, 1276 (11th Cir. 2012); Erdman v. Nationwide Ins. Co., 582 F.3d 500, 509 (3d Cir. 2009); Skrjanc v. Great Lakes Power Serv. Co., 272 F.3d 309, 314 (6th Cir. 2001). However, because Hager failed to plead notice of intent to take FMLA leave, and that she was qualified for that leave, she has not sufficiently alleged that she exercised FMLA rights. See Nicholson v. Pulte Homes Corp., 690 F.3d 819, 828 (7th Cir. 
2012) (“The district court held that because Nicholson did not provide sufficient notice of the need for FMLA-qualifying leave, she never engaged in any activity protected by the FMLA. For the reasons we have explained, we agree.”). The district court erred in denying Dr. Zohoori’s motion to dismiss the FMLA discrimination claim. IV. Although Hager did not move to amend the complaint in the district court—where the relevant pleadings were found sufficient—she requests remand to allow an amended complaint for any claims insufficiently pled. Hager should be no worse off, and no better off, than she would have been if the district court had granted the motion to dismiss. See Horras v. American Capital Strategies, Ltd., 729 F.3d 798, 804-05 (8th Cir. 2013) (evaluating standards applicable to post-judgment motions). This court remands for the district court to consider whether to allow Hager to amend her pleadings. See Zenith Radio Corp. v. Hazeltine Research, Inc., 401 U.S. 321, 330 (1971) (granting leave to amend is within the discretion of the district court). -10- V. The Arkansas Department of Health requests that this court exercise its pendent appellate jurisdiction to review the district court’s partial denial of its motion to dismiss. See Langford v. Norris, 614 F.3d 445, 457 (8th Cir. 2010) (“[W]hen an interlocutory appeal is before us . . . as to the defense of qualified immunity, we have jurisdiction also to decide closely related issues of law, i.e., pendent appellate claims.”) (internal quotation marks omitted), quoting Kincade v. City of Blue Springs, Mo., 64 F.3d 389, 394 (8th Cir. 1995). The Department maintains that Hager’s claims against it are inextricably intertwined with her claims against Dr. Zohoori. The Department reasons that if Hager’s “similarly situated” allegation does not sustain her § 1983 and FMLA discrimination claims against Dr. Zohoori, it cannot sustain her Title VII and Rehabilitation Act claims against the Department. 
“[A] pendent appellate claim can be regarded as inextricably intertwined with a properly reviewable claim on collateral appeal only if the pendent claim is coterminous with, or subsumed in, the claim before the court on interlocutory appeal—that is, when the appellate resolution of the collateral appeal necessarily resolves the pendent claim as well.” Kincade, 64 F.3d at 394, quoting Moore v. City of Wynnewood, 57 F.3d 924, 930 (10th Cir. 1995). See also Lockridge, 315 F.3d at 1012. Here, resolution of the “similarly situated” issue may illuminate the Department’s argument that Hager failed to state a claim against it. However, the Department’s claims are not coterminous with or subsumed in Dr. Zohoori’s claims. Hager sues under different statutes, and the Department cannot invoke qualified immunity. This court does not have jurisdiction to hear the Department’s appeal. ******* -11- The denial of Dr. Zohoori’s motion to dismiss the § 1983 claim, the FMLA entitlement claim, and the FMLA discrimination claim is reversed. This case is remanded for proceedings consistent with this opinion. ______________________________ -12-
Micro-Loan Program. In order to promote economic development in the City of Alamo, the Alamo EDC established the Alamo Small Business Micro-Loan Program (MLP) with assistance from USDA – Rural Development. The MLP is a self-sustaining project that works by lending money to local businesses; the money paid back, plus interest, is then lent out again. The following documents must be submitted with a loan application: an application form; an executive summary with three years of financial projections; a project budget; a personal financial statement; two years of income tax returns – business and/or personal (for the most current years); year-end financial statement(s) from an existing organization; balance sheet(s) (yearly); profit and loss statement(s) (last quarter); a minimum of two bids from non-related third-party vendors/contractors; and a credit report (to be conducted by the AEDC). Steps in the loan process: Fill out the loan application and submit all documents to the AEDC via mail or hand delivery. The AEDC begins the loan application review to determine eligibility. The applicant will be notified of eligibility status. If eligible, the loan application will be presented to the Loan Review Committee. A committee recommendation will be presented to the AEDC Board for final approval. The applicant will be notified of the board’s decision to approve or deny the loan application as well as loan-specific terms, when applicable.
Ability of MR cholangiography to reveal stent position and luminal diameter in patients with biliary endoprostheses: in vitro measurements and in vivo results in 30 patients. Our goal was to evaluate the ability of MR cholangiography to show stent position and luminal diameter in patients with biliary endoprostheses. Susceptibility artifacts were evaluated in vitro in three different stent systems (cobalt alloy-based, nitinol-based, and polyethylene) using two breath-hold sequences (rapid acquisition with relaxation enhancement, half-Fourier acquisition single-shot turbo spin echo) on a 1.5-T MR imaging system. The size of the stent-related artifact was measured, and the relative stent lumen was calculated. In vivo stent position and patency were determined in 30 patients (10 cobalt alloy-based stents, five nitinol-based stents, and 15 polyethylene stents). In vitro, the susceptibility artifact of the cobalt stent caused complete obliteration of the stent lumen. The relative stent lumens of the nitinol-based and polyethylene stents were 38-50% and 67-100%, respectively. In vivo, all stents were patent at the time of imaging. The position of the cobalt alloy-based stent could be determined in nine of 10 patients, but stent patency could not be evaluated. Stent position of nitinol stents could not be adequately evaluated in any of the five patients, and internal stent diameter could be visualized in only one patient. In nine of 15 patients, the fluid column within the implanted polyethylene stent was seen on MR cholangiography. The internal stent lumen could be visualized in most patients with an indwelling polyethylene stent, but not in patients with cobalt alloy- or nitinol-based stents.
The Doom Generation (1995). FILM REVIEW; Gory Kitsch in a Parody of Teen-Age Road Movies. By JANET MASLIN. Published: October 25, 1995. Production notes for Gregg Araki's "Doom Generation" say it is "Araki's first big-budget feature and marks the end of his film adolescence." Well, not exactly. After a promising debut with "The Living End" followed by the angrier, more marginalized "Totally F***ed Up," Mr. Araki is still sounding a note of self-congratulatory teen-age rebellion in a film gruesome and obvious enough to make "Natural Born Killers" look like a model of restraint. It's not even much of a change to find "The Doom Generation" billed as "a heterosexual movie," since it shares the effective homoerotic energy of his earlier work. That this film includes a teen-age girl, Amy Blue (Rose McGowan), as part of its sexual ménage only means one especially clear target of contempt ("Don't get your uterus all tied in a knot" is one of the more printable things anyone says to her) in a film overflowing with it. Amy's insolence and Anna Karina hairdo (like Uma Thurman's in "Pulp Fiction") may offer a touch of Godard. But this film's satire of teen-age-wasteland cinema is so coarsely exaggerated that any homage is beside the point. Using outlaw characters named Red, White and Blue to condemn all aspects of unhip America, Mr. Araki indulges in such broad parody that thinking it clumsy means failure to get the joke. Though visibly more polished than his earlier films, "The Doom Generation" clings to a midnight movie sensibility founded on deliberate kitsch. So Amy is a one-note, rude, sulky heroine, saying things like "Life is lonely, boring and dumb" while the two men she's sleeping with enjoy an obvious attraction to each other. Not content to leave this as subtext, Mr. Araki throws in the occasional bumper sticker: "Ditch the bitch. Make the switch." 
Voluptuous Xavier Red (Johnathon Schaech) is way ahead of charmingly dim Jordan White (James Duval) in getting the hint about this, but it doesn't matter: "The Doom Generation" leads them both to a gory demonstration of America's intolerance toward sexual nonconformists. Obscured by strobe lights and boosted by the alternative-rock soundtrack that's sure to help sell the movie, this already notorious castration sequence is one of several gross-out epiphanies here. Others include the severing of a head that still talks, and even vomits, after it is removed from a vein-spurting body, and a blink-of-the-eye cameo by Heidi Fleiss. The genuine enthusiasm Mr. Araki brings to this film's bedroom scenes, with their whimsical sets and jokey porn ambiance, is matched by the occasionally workable black humor in his screenplay. ("You murdered two people tonight. Doesn't that faze you at all?" "Yeah, I'm bummed. To the max.") But sledgehammer direction, heavy irony and the easiest imaginable targets hardly show talent off to good advantage. THE DOOM GENERATION Written, edited and directed by Gregg Araki; director of photography, Jim Fealy; music by the Jesus and Mary Chain, Nine Inch Nails, Slowdive, Curve, Meat Beat Manifesto, Pizzicato Five, Cocteau Twins and others; production designer, Therese Deprez; produced by Andrea Sperling, Mr. Araki and Why Not Productions (France); released by Trimark Pictures. At the Angelika Film Center, Mercer and Houston Streets. Running time: 90 minutes. This film is not rated.
Rakestraw Rakestraw is a surname. Notable people with the surname include: Larry Rakestraw (born 1942), American football player; Paulette Rakestraw (born 1967), American politician from the state of Georgia; Wilbur Rakestraw (1928–2014), American racing driver; and W. Vincent Rakestraw (born 1940), former Assistant Attorney General of the United States and former Special Assistant to the Ambassador of India. See also Rakestraw House, a historic home located near Garrett in Keyser Township, DeKalb County, Indiana.
Characterization of biofilm and encrustation on ureteric stents in vivo. To examine the relationship between encrustation and microbial biofilm formation on indwelling ureteric stents. Ureteric stents from 40 patients were examined for the presence of a microbial biofilm and encrustations. Bacteria in stent biofilms were isolated and identified. A profuse biofilm (> 10(4) c.f.u. cm-3) was identified on 11 (28%) stents. Enterococcus faecalis was the most common biofilm organism identified and Proteus spp. were not present. Encrustation was seen in 23 (58%) of stents and was not associated with the level of urinary calcium. The major risk factor for stent encrustation was the presence of urolithiasis. Importantly, there was no causative link between stent biofilm formation and encrustation. Both biofilm formation and encrustation increased with the duration of stenting. The results indicate that polyurethane is readily encrusted and colonized by bacteria in vivo despite antibiotic prophylaxis. Newer materials must be sought if effective long-term stenting is to be achieved.
From typical sequences to typical genotypes. We demonstrate an application of a core notion of information theory, typical sequences and their related properties, to analysis of population genetic data. Based on the asymptotic equipartition property (AEP) for nonstationary discrete-time sources producing independent symbols, we introduce the concepts of typical genotypes and population entropy and cross entropy rate. We analyze three perspectives on typical genotypes: a set perspective on the interplay of typical sets of genotypes from two populations, a geometric perspective on their structure in high dimensional space, and a statistical learning perspective on the prospects of constructing typical-set based classifiers. In particular, we show that such classifiers have a surprising resilience to noise originating from small population samples, and highlight the potential for further links between inference and communication.
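The abstract's notion of a typical genotype can be made concrete with a minimal sketch, under simplifying assumptions not stated in the abstract itself: biallelic loci, independent sites, and a population described by per-site allele frequencies. A genotype is ε-typical when its per-site average negative log-probability falls within ε of the population's per-site entropy, in the spirit of the AEP. All names and parameter values below are illustrative.

```python
import math
import random

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) site."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def is_typical(genotype, freqs, eps=0.1):
    """AEP-style test: genotype (list of 0/1 alleles) is eps-typical
    w.r.t. per-site allele-1 frequencies `freqs` if its average
    negative log2-probability is within eps of the average per-site
    entropy (the entropy rate for independent sites)."""
    n = len(genotype)
    logp = sum(math.log2(p if g == 1 else 1 - p)
               for g, p in zip(genotype, freqs))
    entropy_rate = sum(binary_entropy(p) for p in freqs) / n
    return abs(-logp / n - entropy_rate) <= eps

random.seed(0)
freqs = [random.uniform(0.1, 0.9) for _ in range(500)]
# A genotype drawn from the population model itself is typical
# with high probability (the AEP guarantee).
genotype = [1 if random.random() < p else 0 for p in freqs]
print(is_typical(genotype, freqs))
```

An all-ones genotype, by contrast, over-weights rare alleles at low-frequency sites and generally fails the test, which is the set-perspective intuition the abstract describes: typical sets of two different populations can be separated by exactly this kind of statistic.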
It is a truism that modern cell phones feature a multitude of features that expand on the traditional cell phone functionality. For example, today cell phone users are able to use their phones to connect to the Internet, manage meetings, appointments, and other aspects of their everyday lives, listen to music and watch videos, etc. In essence, the cell phone—which began as a single-function communicator—has grown into a fully functioning multimedia device. However, the fundamental function of a cell phone remains communication. It should be noted that cell phones are also sometimes referred to as mobile phones, which in the proper meaning of the word indicates that the user of that phone is mobile, and is supposedly always available for anyone who might want to contact him or her. The core functionality of mobile/cell phones has been basically the same since the first devices were made available to consumers. Although there has been a rapid expansion in the feature set of most cell phones, the core functionality has not seen a similar expansion. The reasons for this discrepancy likely have to do with the fact that the core functionality is sufficient for most users and that there are simply not that many ways of enhancing the person-to-person communication experience on a mobile device. Arguably, the most important enhancement in the cell phone, at least as it relates to interpersonal communication, has been the development of the capability of sending short text messages from one phone to another. Otherwise, the main improvements in communications have been largely concerned with connectivity. For example, communications protocols such as infrared and Bluetooth have become de facto requirements for all but the most inexpensive phones. In addition, advances have been made in connectivity to the Internet (for example) and now it is routine for users to be able to access their e-mail and browse the web via their phones. 
However, these improvements in connectivity, as welcome as they might be, do not expand on the one-to-one personal communication aspect of the phone. One thing that would be a leap forward in such communications would be the ability to quickly and easily assemble a multi-user communication session that is hardware independent and, further, does not require the user to purchase additional hardware. Although the prior art has provided multi-user communications in the form of, for example, conference calls, the present technology of conference calls is quite limiting to the user. For example, it is typically limited to a predetermined number of user connections (e.g., 5). Further, a start time must be communicated to each user, so that there is little opportunity for spontaneity. Further, adding more users to the session may be very difficult or impossible. Finally, the conference call will ultimately be limited to known users, i.e., those who are known to one of the participants and have been invited. Additionally, exchanging short messages between users is a time-delayed communication mode that typically involves one-to-one communication. Even though some software providers have offered solutions that allow a user to send one short message to multiple participants, such is not the same as real-time voice communication between these same users. Of course, such group messaging is a time-delayed communication mode too, in which at least one participant is always in a waiting position. Thus, this communication option also offers little in the way of spontaneity or flexibility to the user. As was mentioned previously, over the last few years several attempts have been made to enhance the communication options available to owners of mobile devices. For example, infrared and Bluetooth have been added, but so far they have been used mostly for communication with other devices, i.e., for data transfer rather than for direct communication between users.
Those of ordinary skill in the art will recognize that infrared is limited to communications over a relatively short line-of-sight distance between potential communication partners. As a consequence, the infrared protocol has typically been implemented as a simple data exchange protocol which is useful, for example, in synchronizing data between a mobile phone and a personal computer. On the other hand, the Bluetooth protocol provides for the creation of networks, so-called piconets, in which up to 255 participants can be combined, of which only 8 participants can be active simultaneously. These 8 participants consist of one so-called “master” device and seven so-called “slave” or secondary devices. The master device controls the communication and assigns so-called “sendslots” to participants. Additionally, communications within a piconet are based on the client server principle, which imposes the restriction that the master (server) is needed for on-going communications. Thus, when a master device loses the connection the piconet ceases to exist until a new master is selected and re-establishes the piconet by starting the creation process from the beginning. Although a Bluetooth device can be registered in multiple piconets, it can only be registered as master in one piconet. Additionally, those of ordinary skill in the art will recognize that the term scatternet is often used to refer to a combination of up to 10 piconets in which each piconet is associated with a different identification frequency. However, the technical specifications of the Bluetooth communication protocol limit the functionality of that communication option. For example, those of ordinary skill in the art will recognize that a piconet can accommodate a maximum of 8 active participants. Further, a piconet will collapse if the server (master) loses the connection.
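The two piconet limitations just described — the cap of 8 active participants (one master plus seven active slaves) and the collapse of the net when the master drops out — can be summarized in a toy model. The sketch below is purely illustrative: it models only those two rules, omits parked members, sendslots, and the actual Bluetooth stack, and all names in it are invented.

```python
class Piconet:
    """Toy model of the piconet rules described above (not a Bluetooth stack)."""

    MAX_ACTIVE_SLAVES = 7  # one master + seven active slaves = 8 active devices

    def __init__(self, master):
        self.master = master
        self.active_slaves = []

    def join(self, device):
        # Reject joins beyond the active-participant limit.
        if len(self.active_slaves) >= self.MAX_ACTIVE_SLAVES:
            return False
        self.active_slaves.append(device)
        return True

    def disconnect(self, device):
        # Losing the master collapses the piconet entirely.
        if device == self.master:
            self.master = None
            self.active_slaves.clear()
        elif device in self.active_slaves:
            self.active_slaves.remove(device)

    @property
    def alive(self):
        return self.master is not None

net = Piconet("master")
joined = [net.join(f"slave{i}") for i in range(9)]
print(joined.count(True))  # 7 — the eighth and ninth joins are refused
net.disconnect("master")
print(net.alive)           # False — the net collapsed with its master
```

The model makes the limitation concrete: no matter how many devices attempt to join, at most eight are ever active, and the whole session is hostage to a single device.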
Others have sought, with varying degrees of success, to deliver enhanced communication functionality despite the limitations of the Bluetooth protocol. For example, U.S. Pat. No. 6,674,995 teaches the creation of a virtual ball game that utilizes data that is passed between participants via Bluetooth, thereby delivering to them the illusion that they are playing a ball game. As another example, U.S. patent application No. 20020151320 describes a method of giving users in a user community additional functionality when using a software package in a community environment. That is, certain functions are provided to the users depending on the number of participants, with higher user numbers being associated with the unlocking of additional program functionality. However, these sorts of approaches are still fundamentally limited by the nature of the Bluetooth protocol. As an example of an alternative approach to the use of Bluetooth, consider U.S. patent application 2005/0063409, which teaches a method for allowing users to communicate across several scatternets. However, this invention utilizes multiple interconnected servers and is not suitable for users who wish to quickly arrange and participate in an ad hoc communications group. None of the prior art communication options, however, delivers a flexible way of communicating with an arbitrary number of individual users. In each case either the users are restricted by the technical limitations of the Bluetooth standard or the communication options necessary to create a group chat are too involved for the average user to accomplish. Note that for purposes of the instant disclosure, the term “enhancement of the communication options” will be taken to refer to any approach that allows a user to communicate with a mobile device in addition to the already existing communication options.
Thus, what is needed is a method that gives users of cell phones or other mobile devices the ability to create multi-user communications on those devices without a need for elaborate equipment configurations, planning, or installation, and which is not bound by the technical limitations of a specific communication protocol. Preferably, the method will extend an invitation to others to join a communications group and will automatically provide the appropriate software for use by new users who do not already have it. Preferably, the method will use a commonly available wireless protocol such as Bluetooth or Wi-Fi. Accordingly, it should now be recognized, as was recognized by the present inventors, that there exists, and has existed for some time, a very real need for a system and method that would address and solve the above-described problems. Before proceeding to a description of the present invention, however, it should be noted and remembered that the description of the invention which follows, together with the accompanying drawings, should not be construed as limiting the invention to the examples (or preferred embodiments) shown and described. This is so because those skilled in the art to which the invention pertains will be able to devise other forms of the invention within the ambit of the appended claims.
Udzungwa red colobus The Uzungwa red colobus (Piliocolobus gordonorum), also known as the Udzungwa red colobus or Iringa red colobus, is a species of primate in the family Cercopithecidae. It is endemic to riverine and montane forest in the Udzungwa Mountains in Tanzania. It is threatened by habitat loss.
Serial imaging and SWAN sequence of developmental venous anomaly thrombosis with hematoma: Diagnosis and follow-up. Developmental venous anomalies (DVAs) are usually asymptomatic. We report a case of DVA thrombosis with recurrent tiny frontal hematoma in a 24-year-old man. The contributions of the T2-GRE and SWAN sequences are discussed. Follow-up confirmed complete recanalization after anticoagulation.
--- abstract: 'The purpose of this article is to study the problem of finding sharp lower bounds for the norm of the product of polynomials in the ultraproducts of Banach spaces $(X_i)_{\mathfrak U}$. We show that, under certain hypotheses, there is a strong relation between this problem and the same problem for the spaces $X_i$.' address: 'IMAS-CONICET' author: - Jorge Tomás Rodríguez title: On the norm of products of polynomials on ultraproducts of Banach spaces --- Introduction ============ In this article we study the factor problem in the context of ultraproducts of Banach spaces. This problem can be stated as follows: for a Banach space $X$ over a field ${\mathbb K}$ (with ${\mathbb K}={\mathbb R}$ or ${\mathbb K}={\mathbb C}$) and natural numbers $k_1,\cdots, k_n$ find the optimal constant $M$ such that, given any set of continuous scalar polynomials $P_1,\cdots,P_n:X\rightarrow {\mathbb K}$, of degrees $k_1,\cdots,k_n$; the inequality $$\label{problema} M \Vert P_1 \cdots P_n\Vert \ge \, \Vert P_1 \Vert \cdots \Vert P_n \Vert$$ holds, where $\Vert P \Vert = \sup_{\Vert x \Vert_X=1} \vert P(x)\vert$. We also study a variant of the problem in which we require the polynomials to be homogeneous. Recall that a function $P:X\rightarrow {\mathbb K}$ is a continuous $k-$homogeneous polynomial if there is a continuous $k-$linear function $T:X^k\rightarrow {\mathbb K}$ for which $P(x)=T(x,\cdots,x)$. A function $Q:X\rightarrow {\mathbb K}$ is a continuous polynomial of degree $k$ if $Q=\sum_{l=0}^k Q_l$ with $Q_0$ a constant, $Q_l$ ($1\leq l \leq k$) an $l-$homogeneous polynomial and $Q_k \neq 0$ . The factor problem has been studied by several authors. In [@BST], C. Benítez, Y. Sarantopoulos and A. Tonge proved that, for continuous polynomials, inequality (\[problema\]) holds with constant $$M=\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}$$ for any complex Banach space. 
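For a sense of the size of this universal constant, here is a small numeric sketch (not part of the paper) that also covers the sharper constants for complex Hilbert and $L_p$ spaces quoted next; the function names are mine.

```python
from math import prod

def bst_constant(degrees):
    """Benitez-Sarantopoulos-Tonge universal constant
    (k1+...+kn)^(k1+...+kn) / (k1^k1 * ... * kn^kn)."""
    K = sum(degrees)
    return K ** K / prod(k ** k for k in degrees)

def hilbert_constant(degrees):
    """Pinasco's optimal constant for homogeneous polynomials
    on complex Hilbert spaces: the square root of the universal one."""
    return bst_constant(degrees) ** 0.5

def lp_constant(degrees, p):
    """Carando-Pinasco-Rodriguez optimal constant for complex L_p(mu),
    1 < p < 2: the p-th root of the universal one."""
    return bst_constant(degrees) ** (1.0 / p)

print(bst_constant([1, 1]))      # 4.0  (two linear factors)
print(hilbert_constant([1, 1]))  # 2.0  (Arias-de-Reyna's case)
```

Even for two linear factors the universal bound is 4, while on Hilbert space it drops to 2, which gives a feel for how much sharper the space-specific constants are.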
The authors also showed that this is the best universal constant, since there are polynomials on $\ell_1$ for which equality prevails. For complex Hilbert spaces and homogeneous polynomials, D. Pinasco proved in [@P] that the optimal constant is $$\nonumber M=\sqrt{\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}}.$$ This is a generalization of the result for linear functionals obtained by Arias-de-Reyna in [@A]. In [@CPR], also for homogeneous polynomials, D. Carando, D. Pinasco and the author proved that for any complex $L_p(\mu)$ space, with $dim(L_p(\mu))\geq n$ and $1<p<2$, the optimal constant is $$\nonumber M=\sqrt[p]{\frac{(k_1+\cdots + k_n)^{(k_1+\cdots +k_n)}}{k_1^{k_1} \cdots k_n^{k_n}}}.$$ This article is partially motivated by the work of M. Lindström and R. A. Ryan in [@LR]. In that article they studied, among other things, a problem similar to (\[problema\]): finding the so-called polarization constant of a Banach space. They found a relation between the polarization constant of the ultraproduct $(X_i)_{\mathfrak U}$ and the polarization constant of each of the spaces $X_i$. Our objective is to do an analogous analysis for our problem (\[problema\]); that is, to find a relation between the factor problem for the space $(X_i)_{\mathfrak U}$ and the factor problem for the spaces $X_i$. In Section 2 we give some basic definitions and results on ultraproducts needed for our discussion. In Section 3 we state and prove the main result of this paper, involving ultraproducts, and a similar result on biduals. Ultraproducts ============= We begin with some definitions, notations and basic results on filters, ultrafilters and ultraproducts. Most of the content presented in this section, as well as an exhaustive exposition on ultraproducts, can be found in Heinrich’s article [@H]. A filter ${\mathfrak U}$ on a set $I$ is a collection of nonempty subsets of $I$ closed under finite intersections and supersets.
An ultrafilter is a maximal filter. In order to define the ultraproduct of Banach spaces, we first need some topological results. Let ${\mathfrak U}$ be an ultrafilter on $I$ and $X$ a topological space. We say that the limit of $(x_i)_{i\in I} \subseteq X$ with respect to ${\mathfrak U}$ is $x$ if for every open neighborhood $U$ of $x$ the set $\{i\in I: x_i \in U\}$ is an element of ${\mathfrak U}$. We denote $$\displaystyle\lim_{i,{\mathfrak U}} x_i = x.$$ The following is Proposition 1.5 from [@H]. \[buenadef\] Let ${\mathfrak U}$ be an ultrafilter on $I$, $X$ a compact Hausdorff space and $(x_i)_{i\in I} \subseteq X$. Then, the limit of $(x_i)_{i\in I}$ with respect to ${\mathfrak U}$ exists and is unique. Later on we will need the following basic lemma about limits along ultrafilters, whose proof is an easy exercise in basic topology and ultrafilters. \[lemlimit\] Let ${\mathfrak U}$ be an ultrafilter on $I$ and $\{x_i\}_{i\in I}$ a family of real numbers. Assume that the limit of $(x_i)_{i\in I} \subseteq {\mathbb R}$ with respect to ${\mathfrak U}$ exists and let $r$ be a real number such that there is a subset $U$ of $\{i: r<x_i\}$ with $U\in {\mathfrak U}$. Then $$r \leq \displaystyle\lim_{i,{\mathfrak U}} x_i.$$ We are now able to define the ultraproduct of Banach spaces. Given an ultrafilter ${\mathfrak U}$ on $I$ and a family of Banach spaces $(X_i)_{i\in I}$, take the Banach space $\ell_\infty(I,X_i)$ of norm bounded families $(x_i)_{i\in I}$ with $x_i \in X_i$ and norm $$\Vert (x_i)_{i\in I} \Vert = \sup_{i\in I} \Vert x_i \Vert.$$ The ultraproduct $(X_i)_{\mathfrak U}$ is defined as the quotient space $\ell_\infty(I,X_i)/ \sim $ where $$(x_i)_{i\in I}\sim (y_i)_{i\in I} \Leftrightarrow \displaystyle\lim_{i,{\mathfrak U}} \Vert x_i - y_i \Vert = 0.$$ Observe that Proposition \[buenadef\] assures us that this limit exists for every pair $(x_i)_{i\in I}, (y_i)_{i\in I}\in \ell_\infty(I,X_i)$.
We denote the class of $(x_i)_{i\in I}$ in $(X_i)_{\mathfrak U}$ by $(x_i)_{\mathfrak U}$. The following result is the polynomial version of Definition 2.2 from [@H] (see also Proposition 2.3 from [@LR]). The reasoning behind it is almost the same. \[pollim\] Given two ultraproducts $(X_i)_{\mathfrak U}$, $(Y_i)_{\mathfrak U}$ and a family of continuous homogeneous polynomials $\{P_i\}_{i\in I}$ of degree $k$ with $$\displaystyle\sup_{i\in I} \Vert P_i \Vert < \infty,$$ the map $P:(X_i)_{\mathfrak U}\longrightarrow (Y_i)_{\mathfrak U}$ defined by $P((x_i)_{\mathfrak U})=(P_i(x_i))_{\mathfrak U}$ is a continuous homogeneous polynomial of degree $k$. Moreover $\Vert P \Vert = \displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. If ${\mathbb K}={\mathbb C}$, the hypothesis of homogeneity can be omitted, but in this case the degree of $P$ can be lower than $k$. Let us start with the homogeneous case. Write $P_i(x)=T_i(x,\cdots,x)$ with $T_i$ a $k-$linear continuous function. Define $T:(X_i)_{\mathfrak U}^k \longrightarrow (Y_i)_{\mathfrak U}$ by $$T((x^1_i)_{\mathfrak U},\cdots,(x^k_i)_{\mathfrak U})=(T_i(x^1_i,\cdots ,x^k_i))_{\mathfrak U}.$$ $T$ is well defined since, by the polarization formula, $ \displaystyle\sup_{i\in I} \Vert T_i \Vert \leq \displaystyle\sup_{i\in I} \frac{k^k}{k!}\Vert P_i \Vert< \infty$. Since each $T_i$ is linear in each coordinate, the map $T$ is linear in each coordinate, and thus it is a $k-$linear function. Given that $$P((x_i)_{\mathfrak U})=(P_i(x_i))_{\mathfrak U}=(T_i(x_i,\cdots,x_i))_{\mathfrak U}=T((x_i)_{\mathfrak U},\cdots,(x_i)_{\mathfrak U})$$ we conclude that $P$ is a $k-$homogeneous polynomial. To see the equality of the norms, for every $i$ choose a norm $1$ element $x_i\in X_i$ at which $P_i$ almost attains its norm; from there it is easy to deduce that $\Vert P \Vert \geq \displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$.
For the other inequality we use that $$|P((x_i)_{\mathfrak U})|= \displaystyle\lim_{i,{\mathfrak U}}|P_i(x_i)| \leq \displaystyle\lim_{i,{\mathfrak U}}\Vert P_i \Vert \Vert x_i \Vert^k = \left(\displaystyle\lim_{i,{\mathfrak U}}\Vert P_i \Vert \right)\Vert (x_i)_{\mathfrak U}\Vert^k .$$ Now we treat the non homogeneous case. For each $i\in I$ we write $P_i=\sum_{l=0}^kP_{i,l}$, with $P_{i,0}$ a constant and $P_{i,l}$ ($1\leq l \leq k$) an $l-$homogeneous polynomial. Take the direct sum $X_i \oplus_\infty {\mathbb C}$ of $X_i$ and ${\mathbb C}$, endowed with the norm $\Vert (x,\lambda) \Vert =\max \{ \Vert x \Vert, | \lambda| \}$. Consider the polynomial $\tilde{P_i}:X_i \oplus_\infty {\mathbb C}\rightarrow Y_i$ defined by $\tilde{P}_i(x,\lambda)=\sum_{l=0}^k P_{i,l}(x)\lambda^{k-l}$. The polynomial $\tilde{P}_i$ is an homogeneous polynomial of degree $k$ and, using the maximum modulus principle, it is easy to see that $\Vert P_i \Vert = \Vert \tilde{P_i} \Vert $. Then, by the homogeneous case, we have that the polynomial $\tilde{P}:(X_i \oplus_\infty {\mathbb C})_{\mathfrak U}\rightarrow (Y_i)_{\mathfrak U}$ defined as $\tilde{P}((x_i,\lambda_i)_{\mathfrak U})=(\tilde{P}_i(x_i,\lambda_i))_{\mathfrak U}$ is a continuous homogeneous polynomial of degree $k$ and $\Vert \tilde{P} \Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert \tilde{P}_i \Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. Via the identification $(X_i \oplus_\infty {\mathbb C})_{\mathfrak U}=(X_i)_{\mathfrak U}\oplus_\infty {\mathbb C}$ given by $(x_i,\lambda_i)_{\mathfrak U}=((x_i)_{\mathfrak U},\displaystyle\lim_{i,{\mathfrak U}} \lambda_i)$ we have that the polynomial $Q:(X_i)_{\mathfrak U}\oplus_\infty {\mathbb C}\rightarrow {\mathbb C}$ defined as $Q((x_i)_{\mathfrak U},\lambda)=\tilde{P}((x_i,\lambda)_{\mathfrak U})$ is a continuous homogeneous polynomial of degree $k$ and $\Vert Q\Vert =\Vert \tilde{P}\Vert$. 
Then, the polynomial $P((x_i)_{\mathfrak U})=Q((x_i)_{\mathfrak U},1)$ is a continuous polynomial of degree at most $k$ and $\Vert P\Vert =\Vert Q\Vert =\displaystyle\lim_{i,{\mathfrak U}} \Vert P_i \Vert$. If $\displaystyle\lim_{i,{\mathfrak U}} \Vert P_{i,k} \Vert =0 $ then the degree of $P$ is lower than $k$. Note that, in the last proof, we can take the same approach used for non-homogeneous polynomials in the real case, but we would not have the same control over the norms. Main result ============= This section contains our main result. As mentioned above, this result is partially motivated by Theorem 3.2 from [@LR]. We follow similar ideas for the proof. First, let us fix some notation that will be used throughout this section. In this section, all polynomials considered are continuous scalar polynomials. Given a Banach space $X$, $B_X$ and $S_X$ denote the unit ball and the unit sphere of $X$ respectively, and $X^*$ is the dual of $X$. Given a polynomial $P$ on $X$, $deg(P)$ stands for the degree of $P$. For a Banach space $X$ let $D(X,k_1,\cdots,k_n)$ denote the smallest constant that satisfies (\[problema\]) for polynomials of degree $k_1,\cdots,k_n$. We also define $C(X,k_1,\cdots,k_n)$ as the smallest constant that satisfies (\[problema\]) for homogeneous polynomials of degree $k_1,\cdots,k_n$. Throughout this section most of the results will have two parts: the first involving the constant $C(X,k_1,\cdots,k_n)$ for homogeneous polynomials and the second involving the constant $D(X,k_1,\cdots,k_n)$ for arbitrary polynomials. Since the proofs of both parts are almost identical, we will limit ourselves to proving only the second part of each result. Recall that a space $X$ has the $1 +$ uniform approximation property if for all $n\in {\mathbb N}$ there exists $m=m(n)$ such that for every subspace $M\subset X$ with $dim(M)=n$ and every $\varepsilon > 0$ there is an operator $T\in \mathcal{L}(X,X)$ with $T|_M=id$, $rg(T)\leq m$ and $\Vert T\Vert \leq 1 + \varepsilon$ (i.e.
for every $\varepsilon > 0$ $X$ has the $1+\varepsilon$ uniform approximation property). \[main thm\] If ${\mathfrak U}$ is an ultrafilter on a family $I$ and $(X_i)_{\mathfrak U}$ is an ultraproduct of complex Banach spaces then 1. $C((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(C(X_i,k_1,\cdots,k_n)).$ 2. $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)).$ Moreover, if each $X_i$ has the $1+$ uniform approximation property, equality holds in both cases. In order to prove this theorem some auxiliary lemmas will be needed. The first one is due to Heinrich [@H]. \[aprox\] Given an ultraproduct of Banach spaces $(X_i)_{\mathfrak U}$, if each $X_i$ has the $1+$ uniform approximation property then $(X_i)_{\mathfrak U}$ has the metric approximation property. When working with the constants $C(X,k_1,\cdots,k_n)$ and $D(X,k_1,\cdots,k_n)$, the following characterization may come in handy. \[alternat\] a) The constant $C(X,k_1,\cdots,k_n)$ is the biggest constant $M$ such that given any $\varepsilon >0$ there exists a set of homogeneous continuous polynomials $\{P_j\}_{j=1}^n$ with $deg(P_j)\leq k_j$ such that $$\label{condition} M\left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \Vert P_j \Vert.$$ b\) The constant $D(X,k_1,\cdots,k_n)$ is the biggest constant satisfying the same for arbitrary polynomials. To prove this lemma it is enough to see that $D(X,k_1,\cdots,k_n)$ is decreasing as a function of the degrees $k_1,\cdots, k_n$ and use that the infimum is the greatest lower bound. \[rmkalternat\] It is clear that in Lemma \[alternat\] we can take the polynomials $\{P_j\}_{j=1}^n$ with $deg(P_j)= k_j$ instead of $deg(P_j)\leq k_j$. Later on we will use both versions of the lemma. One last lemma is needed for the proof of the Main Theorem. \[normas\] Let $P$ be a (not necessarily homogeneous) polynomial on a complex Banach space $X$ with $deg(P)=k$.
For any point $x\in X$ $$|P(x)|\leq \max\{\Vert x \Vert, 1\}^k \Vert P\Vert . \nonumber$$ If $P$ is homogeneous the result is rather obvious since we have the inequality $$|P(x)|\leq \Vert x \Vert^k \Vert P\Vert . \nonumber$$ Suppose that $P=\sum_{l=0}^k P_l$ with $P_l$ an $l-$homogeneous polynomial. Consider the space $X \oplus_\infty {\mathbb C}$ and the polynomial $\tilde{P}:X \oplus_\infty {\mathbb C}\rightarrow {\mathbb C}$ defined by $\tilde{P}(x,\lambda)=\sum_{l=0}^k P_l(x)\lambda^{k-l}$. The polynomial $\tilde{P}$ is homogeneous of degree $k$ and $\Vert P \Vert = \Vert \tilde{P} \Vert $. Then, using that $\tilde{P}$ is homogeneous we have $$|P(x)|=|\tilde{P} (x,1)| \leq \Vert (x,1) \Vert^k \Vert \tilde{P} \Vert = \max\{\Vert x \Vert, 1\}^k \Vert P\Vert . \nonumber$$ We are now able to prove our main result. Throughout this proof we regard the space $({\mathbb C})_{\mathfrak U}$ as ${\mathbb C}$ via the identification $(\lambda_i)_{\mathfrak U}=\displaystyle\lim_{i,{\mathfrak U}} \lambda_i$. First, we are going to see that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$. To do this we only need to prove that $\displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$ satisfies (\[condition\]). Given $\varepsilon >0$ we need to find a set of polynomials $\{P_{j}\}_{j=1}^n$ on $(X_i)_{\mathfrak U}$ with $deg(P_{j})\leq k_j$ such that $$\displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_j \right \Vert .$$ By Remark \[rmkalternat\] we know that for each $i\in I$ there is a set of polynomials $\{P_{i,j}\}_{j=1}^n$ on $X_i$ with $deg(P_{i,j})=k_j$ such that $$D(X_i,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_{i,j} \right \Vert \leq (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert P_{i,j} \right \Vert.$$ Replacing $P_{i,j}$ with $P_{i,j}/\Vert P_{i,j} \Vert$ we may assume that $\Vert P_{i,j} \Vert =1$. 
Define the polynomials $\{P_j\}_{j=1}^n$ on $(X_i)_{\mathfrak U}$ by $P_j((x_i)_{\mathfrak U})=(P_{i,j}(x_i))_{\mathfrak U}$. Then, by Proposition \[pollim\], $deg(P_j)\leq k_j$ and $$\begin{aligned} \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n)) \left \Vert \prod_{j=1}^{n} P_{j} \right \Vert &=& \displaystyle\lim_{i,{\mathfrak U}} \left(D(X_i,k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{i,j} \right \Vert \right) \nonumber \\ &\leq& \displaystyle\lim_{i,{\mathfrak U}}\left((1+\varepsilon)\prod_{j=1}^{n}\Vert P_{i,j} \Vert \right)\nonumber \\ &=& (1+\varepsilon)\prod_{j=1}^{n} \Vert P_{j} \Vert \nonumber \end{aligned}$$ as desired. To prove that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \leq \displaystyle\lim_{i,{\mathfrak U}}(D(X_i,k_1,\cdots,k_n))$ if each $X_i$ has the $1+$ uniform approximation property is not as straightforward. Given $\varepsilon >0$, let $\{P_j\}_{j=1}^n$ be a set of polynomials on $(X_i)_{\mathfrak U}$ with $deg(P_j)=k_j$ such that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon)\prod_{j=1}^{n} \Vert P_j \Vert .$$ Let $K\subseteq B_{(X_i)_{\mathfrak U}}$ be the finite set $K=\{x_1,\cdots, x_n\}$ where $x_j$ is such that $$|P_j(x_j)| > \Vert P_j\Vert (1- \varepsilon) \mbox{ for }j=1,\cdots, n.$$ Since each $X_i$ has the $1+$ uniform approximation property, by Lemma \[aprox\], $(X_i)_{\mathfrak U}$ has the metric approximation property. Therefore, there exists a finite rank operator $S:(X_i)_{\mathfrak U}\rightarrow (X_i)_{\mathfrak U}$ such that $\Vert S\Vert \leq 1 $ and $$\Vert P_j - P_j \circ S \Vert_K< |P_j(x_j)|\varepsilon \mbox{ for }j=1,\cdots, n.$$ Now, define the polynomials $Q_1,\cdots, Q_n$ on $(X_i)_{\mathfrak U}$ as $Q_j=P_j\circ S$.
Then $$\left\Vert \prod_{j=1}^n Q_j \right\Vert \leq \left\Vert \prod_{j=1}^n P_j \right\Vert$$ and $$\Vert Q_j\Vert_K > | P_j(x_j)|-\varepsilon | P_j(x_j)| =| P_j(x_j)| (1-\varepsilon) \geq \Vert P_j \Vert(1-\varepsilon)^2.$$ The construction of these polynomials is a slight variation of Lemma 3.1 from [@LR]. We have the following inequality for the product of the polynomials $\{Q_j\}_{j=1}^n$: $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert &\leq& D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{j} \right \Vert \nonumber \\ &\leq& (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_{j} \right \Vert . \label{desq}\end{aligned}$$ Since $S$ is a finite rank operator, the polynomials $\{ Q_j\}_{j=1}^n$ have the advantage of being finite type polynomials. This will allow us to construct polynomials on $(X_i)_{\mathfrak U}$ which are limits of polynomials on the spaces $X_i$. For each $j$ write $Q_j=\sum_{t=1}^{m_j}(\psi_{j,t})^{r_{j,t}}$ with $\psi_{j,t}\in (X_i)_{\mathfrak U}^*$, and consider the spaces $N=\rm{span} \{x_1,\cdots,x_n\}\subset (X_i)_{\mathfrak U}$ and $M=\rm{span} \{\psi_{j,t} \}\subset (X_i)_{\mathfrak U}^*$. By the local duality of ultraproducts (see Theorem 7.3 from [@H]) there exists a $(1+\varepsilon)-$isomorphism $T:M\rightarrow (X_i^*)_{\mathfrak U}$ such that $$JT(\psi)(x)=\psi(x) \mbox{ } \forall x\in N, \mbox{ } \forall \psi\in M$$ where $J:(X_i^*)_{\mathfrak U}\rightarrow (X_i)_{\mathfrak U}^*$ is the canonical embedding. Let $\phi_{j,t}=JT(\psi_{j,t})$ and consider the polynomials $\bar{Q}_1,\cdots, \bar{Q}_n$ on $(X_i)_{\mathfrak U}$ with $\bar{Q}_j=\sum_{t=1}^{m_j}(\phi_{j,t})^{r_{j,t}}$.
Clearly $\bar{Q}_j$ is equal to $Q_j$ on $N$ and $K\subseteq N$; therefore we have the following lower bound for the norm of each polynomial: $$\Vert \bar{Q}_j \Vert \geq \Vert \bar{Q}_j \Vert_K = \Vert Q_j \Vert_K >\Vert P_j \Vert(1-\varepsilon)^2. \label{desbarq}$$ Now, let us find an upper bound for the norm of the product $\Vert \prod_{j=1}^n \bar{Q}_j \Vert$. Let $x=(x_i)_{\mathfrak U}$ be any point in $B_{(X_i)_{\mathfrak U}}$. Then, we have $$\begin{aligned} \left|\prod_{j=1}^n \bar{Q}_j(x)\right| &=& \left|\prod_{j=1}^n \sum_{t=1}^{m_j}(\phi_{j,t} (x))^{r_{j,t}}\right|=\left|\prod_{j=1}^n \sum_{t=1}^{m_j} (JT\psi_{j,t}(x))^{r_{j,t}} \right| \nonumber \\ &=& \left|\prod_{j=1}^n \sum_{t=1}^{m_j}((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}\right|.\nonumber\end{aligned}$$ Since $(JT)^*\hat{x}\in M^*$, $\Vert (JT)^*\hat{x}\Vert \leq \Vert JT \Vert \Vert x \Vert \leq \Vert J \Vert \Vert T \Vert \Vert x \Vert< 1 + \varepsilon$ and $M^*=\frac{(X_i)_{\mathfrak U}^{**}}{M^{\bot}}$, we can choose $z^{**}\in (X_i)_{\mathfrak U}^{**}$ with $\Vert z^{**} \Vert < \Vert (JT)^*\hat{x}\Vert+\varepsilon < 1+2\varepsilon$, such that $\prod_{j=1}^n \sum_{t=1}^{m_j} ((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}= \prod_{j=1}^n \sum_{t=1}^{m_j} (z^{**}(\psi_{j,t}))^{r_{j,t}}$. By Goldstine’s Theorem there exists a net $\{z_\alpha\} \subseteq (X_i)_{\mathfrak U}$ $w^*-$convergent to $z^{**}$ in $(X_i)_{\mathfrak U}^{**}$ with $\Vert z_\alpha \Vert = \Vert z^{**}\Vert$. In particular, $\psi_{j,t}(z_\alpha)$ converges to $z^{**}(\psi_{j,t})$. If we call ${\mathbf k}= \sum k_j$, since $\Vert z_\alpha \Vert< 1+2\varepsilon$, by Lemma \[normas\] we have $$\left \Vert \prod_{j=1}^{n} Q_j \right \Vert (1+2\varepsilon)^{\mathbf k}\geq \left|\prod_{j=1}^n Q_j(z_\alpha)\right| = \left|\prod_{j=1}^n \sum_{t=1}^{m_j} ((\psi_{j,t})(z_\alpha))^{r_{j,t}}\right| .
\label{usecomplex}$$ Combining this with the fact that $$\begin{aligned} \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} ((\psi_{j,t})(z_\alpha))^{r_{j,t}}\right| &\longrightarrow& \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} (z^{**}(\psi_{j,t}))^{r_{j,t}}\right|\nonumber\\ &=& \left|\prod_{j=1}^{n} \sum_{t=1}^{m_j} ((JT)^*\hat{x}(\psi_{j,t}))^{r_{j,t}}\right| = \left|\prod_{j=1}^{n} \bar{Q}_j(x)\right|\nonumber\end{aligned}$$ we conclude that $\left \Vert \prod_{j=1}^{n} Q_j \right \Vert (1+2\varepsilon)^{\mathbf k}\geq |\prod_{j=1}^{n} \bar{Q}_j(x)|$. Since the choice of $x$ was arbitrary, we arrive at the following inequality: $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \bar{Q}_j \right \Vert &\leq& (1+2\varepsilon)^{\mathbf k}D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_j \right \Vert \nonumber \\ &\leq& (1+2\varepsilon)^{\mathbf k}(1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_{j} \right \Vert \label{desbarq2} \\ &<& (1+2\varepsilon)^{\mathbf k}(1+\varepsilon) \frac{\prod_{j=1}^{n} \Vert \bar{Q}_j \Vert }{(1-\varepsilon)^{2n}} .\label{desbarq3}\end{aligned}$$ In (\[desbarq2\]) and (\[desbarq3\]) we use (\[desq\]) and (\[desbarq\]) respectively. The polynomials $\bar{Q}_j$ are not only of finite type; they are also generated by elements of $(X_i^*)_{\mathfrak U}$. This will allow us to write them as limits of polynomials on the spaces $X_i$. For any $i$, consider the polynomials $\bar{Q}_{i,1},\cdots,\bar{Q}_{i,n}$ on $X_i$ defined by $\bar{Q}_{i,j}= \displaystyle\sum_{t=1}^{m_j} (\phi_{i,j,t})^{r_{j,t}}$, where the functionals $\phi_{i,j,t}\in X_i^*$ are such that $(\phi_{i,j,t})_{\mathfrak U}=\phi_{j,t}$. Then $\bar{Q}_j((x_i)_{\mathfrak U})=\displaystyle\lim_{i,{\mathfrak U}} \bar{Q}_{i,j}(x_i)$ for all $(x_i)_{\mathfrak U} \in (X_i)_{\mathfrak U}$ and, by Proposition \[pollim\], $\Vert \bar{Q}_j \Vert = \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert$.
Therefore $$\begin{aligned} D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert &=& D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{j} \right \Vert \nonumber \\ &<& \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \bar{Q}_{j} \Vert \nonumber \\ &=& \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert . \nonumber \end{aligned}$$ To simplify the notation let us call $\lambda = \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} $. Take $L>0$ such that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert < L < \lambda \prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert . \nonumber$$ Since $(-\infty, \frac{L}{D((X_i)_{\mathfrak U},k_1,\cdots,k_n)})$ and $(\frac{L}{\lambda},+\infty)$ are neighborhoods of $\displaystyle\lim_{i,{\mathfrak U}} \left \Vert \prod_{j=1}^{n} \bar{Q}_{i,j} \right \Vert$ and $\prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert$ respectively, and $\prod_{j=1}^{n} \displaystyle\lim_{i,{\mathfrak U}} \Vert \bar{Q}_{i,j} \Vert= \displaystyle\lim_{i,{\mathfrak U}} \prod_{j=1}^{n} \Vert \bar{Q}_{i,j} \Vert$, by definition of $\displaystyle\lim_{i,{\mathfrak U}}$, the sets $$A=\{i_0: D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{i_0,j} \right \Vert <L\} \mbox{ and }B=\{i_0: \lambda \prod_{j=1}^{n} \Vert \bar{Q}_{i_0,j} \Vert > L \}$$ are elements of ${\mathfrak U}$. Since ${\mathfrak U}$ is closed by finite intersections $A\cap B\in {\mathfrak U}$. 
If we take any element $i_0 \in A\cap B$ then, for any $\delta >0$, we have that $$D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} \bar{Q}_{i_0,j} \right \Vert \frac{1}{\lambda}< \frac{L}{\lambda} < \prod_{j=1}^{n} \Vert \bar{Q}_{i_0,j} \Vert < (1+ \delta)\prod_{j= 1}^{n} \Vert \bar{Q}_{i_0,j} \Vert \nonumber$$ Then, since $\delta$ is arbitrary, the constant $D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\frac{1}{\lambda}$ satisfies (\[condition\]) for the space $X_{i_0}$ and therefore, by Lemma \[alternat\], $$\frac{1}{\lambda}D((X_i)_{\mathfrak U},k_1,\cdots,k_n) \leq D(X_{i_0},k_1,\cdots,k_n). \nonumber$$ This holds true for any $i_0$ in $A\cap B$. Since $A\cap B \in {\mathfrak U}$, by Lemma \[lemlimit\], $\frac{1}{\lambda}D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\leq \displaystyle\lim_{i,{\mathfrak U}} D(X_i,k_1,\cdots,k_n) $. Using that $\lambda \rightarrow 1$ when $\varepsilon \rightarrow 0$, we conclude that $D((X_i)_{\mathfrak U},k_1,\cdots,k_n)\leq \displaystyle\lim_{i,{\mathfrak U}} D(X_i,k_1,\cdots,k_n).$ As in Corollary 3.3 of [@LR], a straightforward corollary of our main result is that for any complex Banach space $X$ with the $1+$ uniform approximation property, $C(X,k_1,\cdots,k_n)=C(X^{**},k_1,\cdots,k_n)$ and $D(X,k_1,\cdots,k_n)=D(X^{**},k_1,\cdots,k_n)$. Using that $X^{**}$ is $1-$complemented in some adequate ultrapower $(X)_{{\mathfrak U}}$, the result is immediate. For a construction of the adequate ultrafilter see [@LR]. But following the previous proof, and using the principle of local reflexivity applied to $X^*$ instead of the local duality of ultraproducts, we can prove the following stronger result. Let $X$ be a complex Banach space. Then 1. $C(X^{**},k_1,\cdots,k_n)\geq C(X,k_1,\cdots,k_n).$ 2. $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n).$ Moreover, if $X^{**}$ has the metric approximation property, equality holds in both cases. 
The inequality $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n)$ is a corollary of Theorem \[main thm\] (using the adequate ultrafilter mentioned above). Let us prove that if $X^{**}$ has the metric approximation property then $D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n)$. Given $\varepsilon >0$, let $\{P_j\}_{j=1}^n$ be a set of polynomials on $X^{**}$ with $deg(P_j)=k_j$ such that $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} P_{j} \right \Vert \leq (1+\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert .\nonumber$$ Analogous to the proof of Theorem \[main thm\], since $X^{**}$ has the metric approximation property, we can construct finite type polynomials $Q_1,\cdots,Q_n$ on $X^{**}$ with $deg(Q_j)=k_j$, $\Vert Q_j \Vert_K \geq \Vert P_j \Vert (1-\varepsilon)^2$ for some finite set $K\subseteq B_{X^{**}}$, and such that $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert < (1+\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert . \nonumber$$ Suppose that $Q_j=\sum_{t=1}^{m_j}(\psi_{j,t})^{r_{j,t}}$ and consider the spaces $N=\rm{span} \{K\}$ and $M=\rm{span} \{\psi_{j,t} \}$. By the principle of local reflexivity (see [@D]), applied to $X^*$ (regarding $N$ as a subspace of $(X^*)^*$ and $M$ as a subspace of $(X^*)^{**}$), there is a $(1+\varepsilon)-$isomorphism $T:M\rightarrow X^*$ such that $$JT(\psi)(x)=\psi(x) \mbox{ } \forall x\in N, \mbox{ } \forall \psi\in M\cap X^*=M,$$ where $J:X^*\rightarrow X^{***}$ is the canonical embedding. Let $\phi_{j,t}=JT(\psi_{j,t})$ and consider the polynomials $\bar{Q}_1,\cdots, \bar{Q}_n$ on $X^{**}$ defined by $\bar{Q}_j=\sum_{t=1}^{m_j}(\phi_{j,t})^{r_{j,t}}$. Following the proof of the Main Theorem, one arrives at the inequality $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \bar{Q}_j \right \Vert < (1+ \delta) \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \bar{Q}_j \Vert \nonumber$$ for every $\delta >0$. 
Since each $\bar{Q}_j$ is generated by elements of $J(X^*)$, by Goldstine's Theorem, the restriction of $\bar{Q}_j$ to $X$ has the same norm, and the same is true for $\prod_{j=1}^{n} \bar{Q}_j$. Then $$D(X^{**},k_1,\cdots,k_n)\left \Vert \prod_{j=1}^{n} \left.\bar{Q}_j\right|_X \right \Vert < (1+ \delta) \frac{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}{(1-\varepsilon)^{2n}} \prod_{j=1}^{n} \Vert \left.\bar{Q}_j\right|_X \Vert \nonumber$$ By Lemma \[alternat\] we conclude that $$\frac{(1-\varepsilon)^{2n}}{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}}D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n).$$ Given that the choice of $\varepsilon$ is arbitrary and that $\frac{(1-\varepsilon)^{2n}}{(1+\varepsilon)(1+2\varepsilon)^{\mathbf k}} $ tends to $1$ when $\varepsilon$ tends to $0$, we conclude that $D(X^{**},k_1,\cdots,k_n)\leq D(X,k_1,\cdots,k_n)$. Note that in the proof of the Main Theorem, the only parts where we need the spaces to be complex Banach spaces are at the beginning, where we use Proposition \[pollim\], and in the inequality (\[usecomplex\]), where we use Lemma \[normas\]. But both results hold true for homogeneous polynomials on a real Banach space. Thus, repeating the proof of the Main Theorem, we obtain the following result for real spaces. If ${\mathfrak U}$ is an ultrafilter on a family $I$ and $(X_i)_{\mathfrak U}$ is an ultraproduct of real Banach spaces then $$C((X_i)_{\mathfrak U},k_1,\cdots,k_n) \geq \displaystyle\lim_{i,{\mathfrak U}}(C(X_i,k_1,\cdots,k_n)).$$ If in addition each $X_i$ has the $1+$ uniform approximation property, the equality holds. We can also obtain a similar result for the bidual of a real space. Let $X$ be a real Banach space. Then (a) $C(X^{**},k_1,\cdots,k_n)\geq C(X,k_1,\cdots,k_n).$ (b) $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n).$ If $X^{**}$ has the metric approximation property, equality holds in $(a)$. 
The proof of item $(a)$ is the same as in the complex case, so we restrict ourselves to proving $D(X^{**},k_1,\cdots,k_n) \geq D(X,k_1,\cdots,k_n)$. To do this, we will show that, given an arbitrary $\varepsilon >0$, there is a set of polynomials $\{P_{j}\}_{j=1}^n$ on $X^{**}$ with $deg(P_{j})\leq k_j$ such that $$D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_j \right \Vert \leq (1+\varepsilon) \prod_{j=1}^{n} \left \Vert P_j \right \Vert .$$ Take $\{Q_{j}\}_{j=1}^n$ a set of polynomials on $X$ with $deg(Q_j)=k_j$ such that $$D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert \leq (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert Q_{j} \right \Vert.$$ Consider now the polynomials $P_j=AB(Q_j)$, where $AB(Q_j)$ is the Aron-Berner extension of $Q_j$ (for details on this extension see [@AB] or [@Z]). Since $AB\left( \prod_{j=1}^n Q_j \right)=\prod_{j=1}^n AB(Q_j)$, using that the Aron-Berner extension preserves norms (see [@DG]), we have $$\begin{aligned} D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} P_{j} \right \Vert &=& D(X,k_1,\cdots,k_n) \left \Vert \prod_{j=1}^{n} Q_{j} \right \Vert\nonumber \\ &\leq& (1 +\varepsilon)\prod_{j=1}^{n} \left\Vert Q_{j} \right\Vert \nonumber \\ &=& (1 +\varepsilon)\prod_{j=1}^{n} \left \Vert P_{j} \right \Vert \nonumber \end{aligned}$$ as desired. As a final remark, we mention two types of spaces to which the results of this section can be applied. Corollary 9.2 from [@H] states that any Orlicz space $L_\Phi(\mu)$, with $\mu$ a finite measure and $\Phi$ an Orlicz function with regular variation at $\infty$, has the $1+$ uniform projection property, which is stronger than the $1+$ uniform approximation property. In [@PeR] Section two, A. Pełczyński and H. 
Rosenthal proved that any ${\mathcal L}_{p,\lambda}-$space ($1\leq \lambda < \infty$) has the $1+\varepsilon-$uniform projection property for every $\varepsilon>0$ (which is stronger than the $1+\varepsilon-$uniform approximation property); therefore, any ${\mathcal L}_{p,\lambda}-$space has the $1+$ uniform approximation property. Acknowledgment {#acknowledgment .unnumbered} ============== I would like to thank Professor Daniel Carando for both encouraging me to write this article, and for his comments and remarks which improved its presentation and content. J. Arias-de-Reyna. *Gaussian variables, polynomials and permanents*. Linear Algebra Appl. 285 (1998), 107–114. R. M. Aron and P. D. Berner. *A Hahn-Banach extension theorem for analytic mappings*. Bull. Soc. Math. France 106 (1978), 3–24. C. Benítez, Y. Sarantopoulos and A. Tonge. *Lower bounds for norms of products of polynomials*. Math. Proc. Cambridge Philos. Soc. 124 (1998), 395–408. D. Carando, D. Pinasco and J. T. Rodríguez. *Lower bounds for norms of products of polynomials on $L_p$ spaces*. Studia Math. 214 (2013), 157–166. A. M. Davie and T. W. Gamelin. *A theorem on polynomial-star approximation*. Proc. Amer. Math. Soc. 106 (1989), 351–356. D. W. Dean. *The equation $L(E,X^{**})=L(E,X)^{**}$ and the principle of local reflexivity*. Proc. Amer. Math. Soc. 40 (1973), 146–148. S. Heinrich. *Ultraproducts in Banach space theory*. J. Reine Angew. Math. 313 (1980), 72–104. M. Lindström and R. A. Ryan. *Applications of ultraproducts to infinite dimensional holomorphy*. Math. Scand. 71 (1992), 229–242. A. Pełczyński and H. Rosenthal. *Localization techniques in $L_p$ spaces*. Studia Math. 52 (1975), 265–289. D. Pinasco. *Lower bounds for norms of products of polynomials via Bombieri inequality*. Trans. Amer. Math. Soc. 364 (2012), 3993–4010. I. Zalduendo. *Extending polynomials on Banach Spaces - A survey*. Rev. Un. Mat. Argentina 46 (2005), 45–72.
On January 1, 2018, the most significant overhaul to the Internal Revenue Code in decades took effect. High-income taxpayers stand to benefit from lower tax brackets, higher estate tax exemptions and a less stringent alternative minimum tax. However, high-income earners face new limitations on some favored deductions and notable revisions in charitable write-offs. Some of the most noteworthy changes are…
Jake Jones James Murrell "Jake" Jones (November 23, 1920 – December 13, 2000) was a first baseman in Major League Baseball who played for the Chicago White Sox (1941–42, 1946–47) and the Boston Red Sox (1947–48). Listed at 6'3", 197 lb., Jones batted and threw right-handed. He was born in Epps, Louisiana. Career Jones was a highly decorated World War II veteran. He played 10 games in the American League for Chicago, in parts of two seasons, before enlisting in the United States Navy right after the attack on Pearl Harbor. He joined the service on June 30, 1942, becoming an aviator. In November 1943 he was assigned to a unit aboard the USS Yorktown (CV-10), flying Grumman F6F Hellcat fighters. Between November and December 1944, Jones destroyed two Japanese A6M Zeros and damaged a third. On February 1, 1945, he shot down another three Zeros during a mission northeast of Tokyo, giving him five confirmed victories. A day later, he destroyed another Zero and a Nakajima Ki-43. Then, on February 25, he received a half-share of a probable Ki-43. For his heroic actions, Jones was awarded the Silver Star, two Distinguished Flying Crosses and four Air Medals. Following his discharge from the service, Jones returned to play for Chicago in 1946. At midseason in 1947 he was dealt to the Boston Red Sox in exchange for Rudy York; he batted a combined .237 with 19 home runs and 96 RBI that season. He hit .200 in 36 games for Boston in 1948, his last major league season, and finished his baseball career in 1949, dividing his playing time between the Texas League and the American Association. Jones died in his hometown of Epps, Louisiana, at age 80. 
1. Field of the Invention The present invention relates to multi-chamber process equipments for fabricating semiconductor devices. 2. Description of the Prior Art In recent years, the advance in device miniaturization and IC complexity is increasing the need for more accurate and more complicated processes, and for wafers of larger diameters. Accordingly, much attention is focused on multi-chamber process equipments (or systems) in view of the increase of complex processes, and the enhancement of throughput in an individual wafer processing system. FIG. 14 shows one conventional example. A multi-chamber process equipment of this example includes a wafer transfer chamber 1, a plurality of process chambers 3 connected with the transfer chamber 1 through respective gate valves 2, a load lock chamber (preliminary evacuation chamber) 5 connected with the transfer chamber 1 through a gate valve 4, and a wafer load chamber 7 connected with the load lock chamber 5 through a gate valve 6. In the wafer transfer chamber 1 and the load lock chamber 5, there are provided wafer transfer arms 9 and 10 for carrying a wafer 8, as shown in FIG. 14. The transfer arm 10 is designed to take each wafer 8 from wafer cassettes 11, 11 placed in the wafer load chamber 7, through the gate valve 6, and bring the wafer into the wafer transfer chamber 1. The transfer arm 9 is arranged to receive the wafer 8 from the arm 10, and insert the wafer through one of the gate valves 2 into a predetermined one of the process chambers. The wafer 8 is shifted from one process chamber to another by the transfer arm 9 according to the sequence of processes. Another conventional example is shown in "NIKKEI MICRODEVICES", May, 1990, page 47. 
A multi-chamber process equipment of this example includes a wafer transfer chamber, a plurality of parallel PVD or other process chambers connected with the transfer chamber, a cooling chamber, a preclean chamber, a buffer chamber, an RTP/etching/CVD chamber (or chambers), a load lock chamber, and other chambers. The pressure of each chamber is held at a predetermined degree of vacuum (base pressure) according to the object of the chamber. For example, the wafer transfer chamber is held at 10⁻⁸ Torr (1.3×10⁻⁶ Pa), the PVD chamber is held at 10⁻⁹ Torr (1.3×10⁻⁷ Pa), and the load lock chamber is held at 10⁻⁵ Torr (1.3×10⁻³ Pa). Japanese Patent Provisional Publication (TOKKAI) No. 61-55926 shows still another conventional example. In these equipments, the pressures of the different chambers are determined so as to ensure a clean wafer processing environment. In general, the pressures are made closer to the atmospheric pressure in the following order: (Process chamber) < (Wafer transfer chamber) < (Load lock chamber). In the conventional process equipments, however, a wafer is readily affected by dew condensation, especially in a low temperature etching chamber which is cooled to −20°C to −70°C, if the chamber is not evacuated sufficiently before loading of the wafer. Therefore, it is required to reduce the pressure in the chamber below the base pressure of the chamber (10⁻⁶ Torr, for example). Moreover, the degree of vacuum of the wafer transfer chamber is lower (that is, the pressure is higher) than that of the process chamber. Therefore, when the process chamber is opened, there arises a flow of residual water content from the wafer transfer chamber to the process chamber, resulting in dew condensation. The conventional equipments cannot prevent condensation satisfactorily even if the pressure of the process chamber is decreased sufficiently below the base pressure. 
On the other hand, cross contamination is caused by a flow of residual gases from a process chamber for heat treatment or photo-assisted CVD, to the wafer transfer chamber if the degree of vacuum in the wafer transfer chamber is too high. Furthermore, the conventional equipments cannot sufficiently reduce variations of wafer properties such as sheet resistance from wafer to wafer, especially when the wafers are processed in a high temperature silicide CVD chamber. It is possible to reduce the variations of the sheet resistance by decreasing the pressure in the load lock chamber below the above-mentioned level. However, the pumping operation must be continued for three hours or more.
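The base pressures and the pressure ordering described above can be sanity-checked with a few lines of code. This is an illustrative sketch only: the chamber names, the dictionary layout, and the helper `torr_to_pa` are ours, not the patent's, and we use the standard conversion 1 Torr ≈ 133.322 Pa.

```python
# Illustrative sketch (not from the patent): verify the quoted base pressures
# and the ordering (process chamber) < (wafer transfer) < (load lock).

TORR_TO_PA = 133.322  # 1 Torr = 133.322 Pa (standard conversion factor)

def torr_to_pa(p_torr: float) -> float:
    """Convert a pressure from Torr to pascal."""
    return p_torr * TORR_TO_PA

# Base pressures quoted in the text, in Torr (names are our own labels).
base_pressures = {
    "pvd_chamber": 1e-9,     # quoted as about 1.3e-7 Pa
    "wafer_transfer": 1e-8,  # quoted as about 1.3e-6 Pa
    "load_lock": 1e-5,       # quoted as about 1.3e-3 Pa
}

# The quoted pascal values agree with the conversion to two significant figures.
assert abs(torr_to_pa(1e-9) - 1.3e-7) / 1.3e-7 < 0.05
assert abs(torr_to_pa(1e-8) - 1.3e-6) / 1.3e-6 < 0.05
assert abs(torr_to_pa(1e-5) - 1.3e-3) / 1.3e-3 < 0.05

# Pressures approach atmospheric in the stated order:
# (process chamber) < (wafer transfer chamber) < (load lock chamber).
assert (base_pressures["pvd_chamber"]
        < base_pressures["wafer_transfer"]
        < base_pressures["load_lock"])
```

Running the script silently confirms that the quoted pascal values match the Torr figures to within rounding, and that the chambers are ordered from hardest vacuum to softest.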
.theme-dusk,
.theme-midnight {
  .hljs { display: block; overflow-x: auto; background: #232323; color: #e6e1dc; }
  .hljs-comment, .hljs-quote { color: #bc9458; font-style: italic; }
  .hljs-keyword, .hljs-selector-tag { color: #c26230; }
  .hljs-string, .hljs-number, .hljs-regexp, .hljs-variable, .hljs-template-variable { color: #a5c261; }
  .hljs-subst { color: #519f50; }
  .hljs-tag, .hljs-name { color: #e8bf6a; }
  .hljs-type { color: #da4939; }
  .hljs-symbol, .hljs-bullet, .hljs-built_in, .hljs-builtin-name, .hljs-attr, .hljs-link { color: #6d9cbe; }
  .hljs-params { color: #d0d0ff; }
  .hljs-attribute { color: #cda869; }
  .hljs-meta { color: #9b859d; }
  .hljs-title, .hljs-section { color: #ffc66d; }
  .hljs-addition { background-color: #144212; color: #e6e1dc; display: inline-block; width: 100%; }
  .hljs-deletion { background-color: #600; color: #e6e1dc; display: inline-block; width: 100%; }
  .hljs-selector-class { color: #9b703f; }
  .hljs-selector-id { color: #8b98ab; }
  .hljs-emphasis { font-style: italic; }
  .hljs-strong { font-weight: bold; }
  .hljs-link { text-decoration: underline; }
}
58 Cal.App.3d 439 (1976) 129 Cal. Rptr. 797 L. GENE ALLARD, Plaintiff, Cross-defendant and Respondent, v. CHURCH OF SCIENTOLOGY OF CALIFORNIA, Defendant, Cross-complainant and Appellant. Docket No. 45562. Court of Appeals of California, Second District, Division Two. May 18, 1976. *443 COUNSEL Morgan, Wenzel & McNicholas, John P. McNicholas, Gerald E. Agnew, Jr., and Charles B. O'Reilly for Plaintiff, Cross-defendant and Respondent. Levine & Krom, Meldon E. Levine, Murchison, Cumming, Baker & Velpmen, Murchison, Cumming & Baker, Michael B. Lawler, Tobias C. Tolzmann and Joel Kreiner for Defendant, Cross-complainant and Appellant. OPINION BEACH, J. L. Gene Allard sued the Church of Scientology for malicious prosecution. Defendant cross-complained for conversion. A jury verdict and judgment were entered for Allard on the complaint for $50,000 in compensatory damages and $250,000 in punitive damages. Judgment was entered for Allard and against the Church of Scientology on the cross-complaint. Defendant-cross complainant appeals from the judgment. FACTS: The evidence in the instant case is very conflicting. We relate those facts supporting the successful party and disregard the contrary showing. (Nestle v. City of Santa Monica, 6 Cal.3d 920, 925-926 [101 Cal. Rptr. 568, 496 P.2d 480].) In March 1969, L. Gene Allard became involved with the Church of Scientology in Texas. He joined Sea Org in Los Angeles and was sent to San Diego for training. While there, he signed a billion-year contract agreeing to do anything to help Scientology and to help clear the planet of the "reactive people." During this period he learned about written policy directives that were the "policy" of the church, emanating from L. Ron Hubbard, the founder of the Church of Scientology.[1] After training on the ship, respondent was assigned to the Advanced Organization in Los Angeles, where he became the director of disbursements. He later became the Flag Banking Officer. 
*444 Alan Boughton, Flag Banking Officer International, was respondent's superior. Only respondent and Boughton knew the combination to the safe kept in respondent's office. Respondent handled foreign currency, American cash, and various travelers' checks as part of his job. In May or June 1969, respondent told Boughton that he wanted to leave the church. Boughton asked him to reconsider. Respondent wrote a memo and later a note; he spoke to the various executive officers. They told him that the only way he could get out of Sea Org was to go through "auditing" and to get direct permission from L. Ron Hubbard. Respondent wrote to Hubbard. A chaplain of the church came to see him. Lawrence Krieger, the highest ranking justice official of the church in California, told respondent that if he left without permission, he would be fair game and "You know we'll come and find you and we'll bring you back, and we'll deal with you in whatever way is necessary." On the night of June 7 or early morning of June 8, 1969, respondent went to his office at the Church of Scientology and took several documents from the safe. These documents were taken by him to the Internal Revenue Service in Kansas City; he used them to allege improper changes in the records of the church. He denies that any Swiss francs were in the safe that night or that he took such Swiss francs. Furthermore, respondent denies the allegation that he stole various travelers' checks from the safe. He admitted that some travelers' checks had his signature as an endorsement, but maintains that he deposited those checks into an open account of the Church of Scientology. There is independent evidence that tends to corroborate that statement. Respondent, having borrowed his roommate's car, drove to the airport and flew to Kansas City, where he turned over the documents to the Internal Revenue Service. Respondent was arrested in Florida upon a charge of grand theft. 
Boughton had called the Los Angeles Police Department to report that $23,000 in Swiss francs was missing. Respondent was arrested in Florida; he waived extradition and was in jail for 21 days. Eventually, the charge was dismissed. The deputy district attorney in Los Angeles recommended a dismissal in the interests of justice.[2] *445 CONTENTIONS ON APPEAL: 1. Respondent's trial counsel engaged in flagrant misconduct throughout the proceedings below and thereby deprived appellant of a fair trial. 2. The verdict below was reached as a result of (a) counsel's ascription to appellant of a religious belief and practices it did not have and (b) the distortion and disparagement of its religious character, and was not based upon the merits of this case. To allow a judgment thereby achieved to stand would constitute a violation of appellant's free exercise of religion. 3. Respondent failed to prove that appellant maliciously prosecuted him and therefore the judgment notwithstanding the verdict should have been granted. 4. The refusal of the trial court to ask or permit voir dire questions of prospective jurors pertaining to their religious prejudices or attitudes deprived appellant of a fair trial. 5. It was prejudicial error to direct the jury, in its assessment of the malicious prosecution claim, to disregard evidence that respondent stole appellant's Australian and American Express travelers' checks. 6. The order of the trial court in denying to appellant discovery of the factual basis for the obtaining of a dismissal by the district attorney of the criminal case People v. Allard was an abuse of discretion and a new trial should be granted and proper discovery permitted. 7. Respondent presented insufficient evidence to support the award of $50,000 in compensatory damages which must have been awarded because of prejudice against appellant. 8. 
Respondent failed to establish corporate direction or ratification and also failed to establish knowing falsity and is therefore not entitled to any punitive damages. 9. Even if the award of punitive damages was proper in this case, the size of the instant award, which would deprive appellant church of more *446 than 40 percent of its net worth, is grossly excessive on the facts of this case. 10. There was lack of proper instruction regarding probable cause.[3] DISCUSSION: 1. There was no prejudicial misconduct by respondent's trial counsel, and appellant was not deprived of a fair trial. Appellant claims that it was denied a fair trial through the statements, questioning, and introduction of certain evidence by respondent's trial counsel. Love v. Wolf, 226 Cal. App.2d 378 [38 Cal. Rptr. 183], is cited as authority. We have reviewed the entire record and find appellant's contentions to be without merit. Several of counsel's individual statements and questions were inappropriate. However, there often were no objections by counsel for appellant where an objection and subsequent admonition would have cured any defect; or there was an objection, and the trial court judiciously admonished the jury to disregard the comment. Except for these minor and infrequent aberrations, the record reveals an exceptionally well-conducted and dispassionate trial based on the evidence presented. As in Stevens v. Parke, Davis & Co., 9 Cal.3d 51, 72 [107 Cal. Rptr. 45, 507 P.2d 653], a motion for a new trial was made, based in part upon the alleged misconduct of opposing counsel at trial. (1) What was said in Stevens applies to the instant case. "`A trial judge is in a better position than an appellate court to determine whether a verdict resulted wholly, or in part, from the asserted misconduct of counsel and his conclusion in the matter will not be disturbed unless, under all the circumstances, it is plainly wrong.' [Citation.] 
From our review of the instant record, we agree with the trial judge's assessment of the conduct of plaintiff's counsel and for the reasons stated above, we are of the opinion that defendant has failed to demonstrate prejudicial misconduct on the part of such counsel." (Stevens v. Parke, Davis & Co., supra, 9 Cal.3d at p. 72.) 2. The procedure and verdict below does not constitute a violation of appellant's First Amendment free exercise of religion. *447 (2) Appellant contends that various references to practices of the Church of Scientology were not supported by the evidence, were not legally relevant, and were unduly prejudicial. The claim is made that the trial became one of determining the validity of a religion rather than the commission of a tort. The references to which appellant now objects were to such practices as "E-meters," tin cans used as E-meters, the creation of religious doctrine purportedly to "get" dissidents, and insinuations that the Church of Scientology was a great money making business rather than a religion. The principal issue in this trial was one of credibility. If one believed defendant's witnesses, then there was indeed conversion by respondent. However, the opposite result, that reached by the jury, would naturally follow if one believed the evidence introduced by respondent. Appellant repeatedly argues that the introduction of the policy statements of the church was prejudicial error. However, those policy statements went directly to the issue of credibility. Scientologists were allowed to trick, sue, lie to, or destroy "enemies." (Exhibit 1.) If, as he claims, respondent was considered to be an enemy, that policy was indeed relevant to the issues of this case. 
That evidence well supports the jury's implied conclusion that respondent had not taken the property of the church, that he had merely attempted to leave the church with the documents for the Internal Revenue Service, and that those witnesses who were Scientologists or had been Scientologists were following the policy of the church and lying to, suing and attempting to destroy respondent. Evidence of such policy statements was damaging to appellant, but entirely relevant. It was not prejudicial. A party whose reprehensible acts are the cause of harm to another and the reason for the lawsuit by the other cannot be heard to complain that its conduct is so bad that it should not be disclosed. The relevance of appellant's conduct far outweighs any claimed prejudice.[4] We find the introduction of evidence of the policy statements and other peripheral mention of practices of the Church of Scientology not to be error. In the few instances where mention of religious practices may have been slightly less germane than the policy statements regarding fair game, they were nonetheless relevant and there was no prejudice to appellant by the introduction of such evidence. *448 3. The trial court properly denied the motion for judgment notwithstanding the verdict. (3) Appellant claimed that it had probable cause to file suit against respondent. The claim is made that even if Alan Boughton did take the checks from the safe, knowledge of that act should not be imputed to appellant church. Based on the policy statements of appellant that were introduced in evidence, a jury could infer that Boughton was within the scope of his employment when he stole the francs from the safe or lied about respondent's alleged theft. Inferences can be drawn that the church, through its agents, was carrying out its own policy of fair game in its actions against respondent. 
Given that view of the evidence, which as a reviewing court we must accept, there is substantial evidence proving that appellant maliciously prosecuted respondent. Therefore, the trial court did not err in denying the motion for the judgment notwithstanding the verdict. 4. The trial court performed proper voir dire of prospective jurors. (4) Appellant claims that the trial court refused to ask or permit voir dire questions of prospective jurors pertaining to their religious prejudices or attitudes. The record does not so indicate. Each juror was asked if he or she had any belief or feeling toward any of the parties that might be regarded as a bias or prejudice for or against any of them. Each juror was also asked if he or she had ever heard of the Church of Scientology. If the juror answered affirmatively, he or she was further questioned as to the extent of knowledge regarding Scientology and whether such knowledge would hinder the rendering of an impartial decision. One juror was excused when she explained that her husband is a clergyman and that she knows a couple that was split over the Church of Scientology. (5) The trial court's thorough questioning served the purpose of voir dire, which is to select a fair and impartial jury, not to educate the jurors or to determine the exercise of peremptory challenges. (Rousseau v. West Coast House Movers, 256 Cal. App.2d 878, 882 [64 Cal. Rptr. 655].) 5. It was not prejudicial error to direct the jury, in its assessment of the malicious prosecution claim, to disregard evidence that respondent stole appellant's Australian and American Express travelers' checks. *449 (6) Appellant submits that evidence of respondent's purported theft of the Australian and American Express travelers' checks should have been admitted as to the issue of malicious prosecution as well as the cross-complaint as to conversion. 
If there were any error in this regard, it could not possibly be prejudicial since the jury found for respondent on the cross-complaint. It is evident that the jury did not believe that respondent stole the travelers' checks; therefore, there could be no prejudice to appellant by the court's ruling. 6. Appellant suffered no prejudice by the trial court's denial of discovery of the factual basis for obtaining of the dismissal by the district attorney. (7) Prior to trial, appellant apparently sought to discover the reasons underlying the dismissal of the criminal charges against respondent. This was relevant to the instant case since one of the elements of a cause of action for malicious prosecution is that the criminal prosecution against the plaintiff shall have been favorably terminated. (Jaffe v. Stone, 18 Cal.2d 146 [114 P.2d 335, 135 A.L.R. 775].) Whether or not the lower court was justified in making such an order, the denial of discovery along these lines could not be prejudicial. During the trial, counsel for all parties stipulated that the criminal proceedings against Allard were terminated in his favor by a dismissal by a judge of that court upon the recommendation of the district attorney. In addition, there was a hearing outside the presence of the jury in which the trial court inquired of the deputy district attorney as to the reasons for the dismissal. It was apparent at that time that the prospective witnesses for the Church of Scientology were considered to be evasive. There was no prejudice to appellant since the deputy district attorney was available at trial. Earlier knowledge of the information produced would not have helped defendant. We find no prejudicial error in the denial of this discovery motion. 7. The award of $50,000 compensatory damages was proper. Appellant contends that based upon the evidence presented at trial, the compensatory damage award is excessive. 
In addition, appellant contends that the trial court erred in not allowing appellant to introduce evidence of respondent's prior bad reputation. *450 (8a) There was some discussion at trial as to whether respondent was going to claim damaged reputation as part of general damages. The trial court's initial reaction was to allow evidence only of distress or emotional disturbance; in return for no evidence of damaged reputation, appellant would not be able to introduce evidence of prior bad reputation. The court, however, relying on the case of Clay v. Lagiss, 143 Cal. App.2d 441 [299 P.2d 1025], held that lack of damage to reputation is not admissible. Therefore, respondent was allowed to claim damage to reputation without allowing appellant to introduce evidence of his prior bad reputation. In matters of slander that are libelous per se, for example the charging of a crime, general damages have been presumed as a matter of law. (Douglas v. Janis, 43 Cal. App.3d 931, 940 [4] [118 Cal. Rptr. 280], citing Clay v. Lagiss, supra, 143 Cal. App.2d at p. 448. Compare Gertz v. Robert Welch, Inc., 418 U.S. 323 [41 L.Ed.2d 789, 94 S.Ct. 2997].)[5] (9) Damages in malicious prosecution actions are similar to those in defamation. Therefore, damage to one's reputation can be presumed from a charge, such as that in the instant case that a person committed the crime of theft. (8b) In any event, as the trial court in the instant case noted, there was no offer of proof regarding respondent's prior bad reputation; any refusal to allow possible evidence on that subject has not been shown to be error, much less prejudicial error. (10) Appellant further contends that the amount of compensatory damages awarded was excessive and that the jury was improperly instructed regarding compensatory damages. The following modified version of BAJI Nos. 
14.00 and 14.13 was given: "If, under the court's instructions, you find that plaintiff is entitled to a verdict against defendant, you must then award plaintiff damages in an amount that will reasonably compensate him for each of the following elements of loss or harm, which in this case are presumed to flow from *451 the defendant's conduct without any proof of such harm or loss: damage to reputation, humiliation and emotional distress. "No definite standard or method of calculation is prescribed by law to fix reasonable compensation for these presumed elements of damage. Nor is the opinion of any witness required as to the amount of such reasonable compensation. Furthermore, the argument of counsel as to the amount of damages is not evidence of reasonable compensation. In making an award for damage to reputation, humiliation and emotional distress, you shall exercise your authority with calm and reasonable judgment, and the damages you find shall be just and reasonable." The following instruction was requested by defendant and was rejected by the trial court: "The amount of compensatory damages should compensate plaintiff for actual injury suffered. The law will not put the plaintiff in a better position than he would be in had the wrong not been done." Accompanying the request for that motion is a citation to Staub v. Muller, 7 Cal.2d 221 [60 P.2d 283], and Basin Oil Co. v. Baash-Ross Tool Co., 125 Cal. App.2d 578 [271 P.2d 122]. The Supreme Court has recognized that "Damages potentially recoverable in a malicious prosecution action are substantial. They include out-of-pocket expenditures, such as attorney's and other legal fees ...; business losses ...; general harm to reputation, social standing and credit ...; mental and bodily harm ...; and exemplary damages where malice is shown...." (Babb v. Superior Court, 3 Cal.3d 841, 848, fn. 4 [92 Cal. Rptr. 179, 479 P.2d 379].) 
While these damages are compensable, it is the determination of the damages by the jury with which we are concerned. Appellant seems to contend that the jury must have actual evidence of the damages suffered and the monetary amount thereof. "`The determination of the jury on the issue of damages is conclusive on appeal unless the amount thereof is so grossly excessive that it can be reasonably imputed solely to passion or prejudice in the jury. [Citations.]'" (Douglas v. Janis, supra, 43 Cal. App.3d at p. 940.) The presumed damage to respondent's reputation from an unfounded charge of theft, along with imprisonment for 21 days, and the mental and emotional anguish that must have followed are such that we cannot say that the jury's finding of $50,000 in compensatory damages is unjustified. *452 That amount does not alone demonstrate that it was the result of passion and prejudice.

8. Respondent is entitled to punitive damages.

(11) Appellant cites the general rule that although an employer may be held liable for an employee's tort under the doctrine of respondeat superior, ordinarily he cannot be made to pay punitive damages where he neither authorized nor ratified the act. (4 Witkin, Summary of Cal. Law (8th ed.) § 855, p. 3147.)[6] Appellant claims that the Church of Scientology, which is the corporate defendant herein, never either authorized or ratified the malicious prosecution. The finding of authorization may be based on many grounds in the instant case. For example, the fair game policy itself was initiated by L. Ron Hubbard, the founder and chief official in the church. (Exhibit 1.) It was an official authorization to treat "enemies" in the manner in which respondent herein was treated by the Church of Scientology. Furthermore, all the officials of the church to whom respondent relayed his desire to leave were important managerial employees of the corporation. (See 4 Witkin, Summary of Cal. Law (8th ed.) supra, § 857, p. 3148.)
The trier of fact certainly could have found authorization by the corporation of the act involved herein.

9. The award of punitive damages.

(12) Any party whose tenets include lying and cheating in order to attack its "enemies" deserves the results of the risk which such conduct entails. On the other hand, this conduct may have so enraged the jury that the award of punitive damages may have been more the result of *453 feelings of animosity, rather than a dispassionate determination of an amount necessary to assess defendant in order to deter it from similar conduct in the future. In our view the disparity between the compensatory damages ($50,000) and the punitive damages ($250,000) suggests that animosity was the deciding factor. Our reading of the decisional authority compels us to conclude that we should reduce the punitive damages. We find $50,000 to be a reasonable amount to which the punitive damages should be reduced. We perceive this duty, and have so modified the punitive damages award, not with any belief that a reviewing court more ably may perform it.[7]

(13) Simply stated, the decisional authority seems to indicate that the reviewing court should examine punitive damages and where necessary modify the amount in order to do justice. (Cunningham v. Simpson, 1 Cal.3d 301 [81 Cal. Rptr. 855, 461 P.2d 39]; Forte v. Nolfi, 25 Cal. App.3d 656 [102 Cal. Rptr. 455]; Schroeder v. Auto Driveaway Company, 11 Cal.3d 908 [114 Cal. Rptr. 622, 523 P.2d 662]; Livesey v. Stock, 208 Cal. 315, 322 [281 P. 70].)

10. Instruction on probable cause.

Appellant requested an instruction stating: "Where it is proven that a judge has had a preliminary hearing and determined that the facts and evidence show probable cause to believe the plaintiff guilty of the offense charged therefore, ordering the plaintiff to answer a criminal complaint, this is prima facie evidence of the existence of probable cause."
The trial court gave the following instruction: "The fact that plaintiff was held to answer the charge of grand theft after a preliminary hearing is evidence tending to show that the initiator of the charge had probable cause. This fact is to be considered by you along with all the other evidence tending to show probable cause or the lack thereof."[8] Appellant claimed for the first time in its reply brief that the trial court's lack of proper instruction regarding probable cause was prejudicial error. Since this issue was raised for the first time in appellant's reply brief, we decline to review the issue.[9]

*454 The judgment is modified by reducing the award of punitive damages only, from $250,000 to the sum of $50,000. As modified the judgment is in all other respects affirmed. Costs on appeal are awarded to respondent Allard.

Roth, P.J., and Fleming, J., concurred.

A petition for a rehearing was denied June 17, 1976, and the petitions of both parties for a hearing by the Supreme Court were denied July 15, 1976.

NOTES

[1] One such policy, to be enforced against "enemies" or "suppressive persons" was that formerly titled "fair game." That person "[m]ay be deprived of property or injured by any means by any Scientologist without any discipline of the Scientologist. May be tricked, sued or lied to or destroyed." (Exhibit 1.)

[2] Leonard J. Shaffer, the deputy district attorney, testified outside the presence of the jury that members of the church were evasive in answering his questions. He testified that the reasons for the dismissal were set forth in his recommendation; the dismissal was not part of a plea bargain or procedural or jurisdictional issue.

[3] This issue is raised for the first time in appellant's reply brief.

[4] The trial court gave appellant almost the entire trial within which to produce evidence that the fair game policy had been repealed.
Appellant failed to do so, and the trial court thereafter permitted the admission of Exhibit 1 into evidence.

[5] The Supreme Court held in Gertz v. Robert Welch, Inc., supra, 418 U.S. 323, 349 [41 L.Ed.2d 789, 810], an action for defamation, that "the States may not permit recovery of presumed or punitive damages, at least when liability is not based on a showing of knowledge of falsity or reckless disregard for the truth." (Italics added.) The instant case is distinguishable from Gertz. Initially, the interests protected by a suit for malicious prosecution include misuse of the judicial system itself; a party should not be able to claim First Amendment protection maliciously to prosecute another person. Secondly, the jury in the instant case must have found "knowledge of falsity or reckless disregard for the truth" in order to award punitive damages herein. Therefore, even under Gertz, a finding of presumed damages is not unconstitutional.

[6] We again note that Gertz v. Robert Welch, Inc., supra, precludes the award of punitive damages in defamation actions "at least when liability is not based on a showing of knowledge of falsity or reckless disregard for the truth." The facts of the instant case fall within that categorization, so a finding of punitive damages was proper. Moreover, as we noted above, an egregious case of malicious prosecution subjects the judicial system itself to abuse, thereby interfering with the constitutional rights of all litigants. Punitive damages may therefore be more easily justified in cases of malicious prosecution than in cases of defamation. The societal interests competing with First Amendment considerations are more compelling in the former case.

[7] See dissent in Cunningham v. Simpson, 1 Cal.3d 301 [81 Cal. Rptr. 855, 461 P.2d 39].

[8] This instruction was given on the court's own motion.

[9] We note that given the circumstances of the instant case, the juror could have easily been misled by the requested instruction.
If the evidence showed that the agents and employees of appellant were lying, then the preliminary hearing at which they also testified would not be valid. While the jurors may of course consider that the magistrate at the preliminary hearing found probable cause, that should be in no way conclusive in the jury's determination of probable cause.
Social deprivation and primary hyperparathyroidism.

To investigate the potential relationship between social status or deprivation and the prevalence of primary hyperparathyroidism (PHPT), we retrospectively identified a cohort of patients diagnosed as having PHPT between 1981 and 2007 from the Scottish Morbidity Records database. The Scottish Index of Multiple Deprivation (SIMD) 2006 quintiles were derived for these patients by using the postal codes. The distribution of the SIMD quintiles was examined to determine the possible influence of deprivation on the incidence of PHPT. In Scotland between 1981 and 2007, 3,039 patients were diagnosed as having PHPT, in accordance with the International Classification of Diseases code for PHPT. The distribution of the PHPT cohort across the SIMD 2006 quintiles was significantly different from that expected, with a higher representation (27.2%) among the most deprived and a lower representation (14.5%) in the least deprived quintile, in comparison with the 20% expected in each quintile (P<.0001). The findings in this study suggest that socioeconomic deprivation is associated with an increased risk of developing PHPT.
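The reported imbalance can be sanity-checked with a Pearson chi-square goodness-of-fit test against the uniform 20%-per-quintile expectation. Note the abstract reports only the extreme quintile percentages (27.2% and 14.5%), so the three middle-quintile counts in this sketch are hypothetical placeholders that split the remainder evenly; this is a rough illustration of the test, not a reproduction of the study's analysis.

```python
# Chi-square goodness-of-fit sketch for the SIMD quintile distribution.
# ASSUMPTION: the three middle-quintile counts are placeholders (the
# abstract reports only the most- and least-deprived percentages).

def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

n = 3039                  # PHPT cohort size (from the abstract)
expected = [n / 5] * 5    # 20% expected in each SIMD quintile

q1 = round(0.272 * n)     # most deprived quintile (reported: 27.2%)
q5 = round(0.145 * n)     # least deprived quintile (reported: 14.5%)
middle = n - q1 - q5      # remainder split evenly (assumption)
observed = [q1, middle // 3, middle // 3, middle - 2 * (middle // 3), q5]

stat = chi_square_stat(observed, expected)
# The chi-square critical value for df = 4 at alpha = 0.001 is about
# 18.47; a statistic far above it is consistent with the reported
# P < .0001, driven mostly by the two extreme quintiles.
print(round(stat, 1))
```

Even with the middle quintiles assumed flat, the two reported extremes alone contribute a statistic well past the 0.001 critical value, which is why the study's P<.0001 is unsurprising despite the modest-looking percentage differences.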
Alexander Bell Donald

Donald Alexander Bell Donald (18 August 1842 – 7 March 1922) was a New Zealand seaman, sailmaker, merchant and ship owner. He was born in Inverkeithing, Fife, Scotland on 18 August 1842.
4 ideas for improving your e-commerce website

2019 is here, and the new year provides an excellent opportunity to refresh your e-commerce website by adding new features and updating content.

Adding web banners

Web banners are a great way to keep your e-commerce website homepage looking fresh and to make viewers aware of the latest news about products and offers. They can be easily modified to serve a range of purposes, are potentially eye-catching when placed in the appropriate area, and are a good way of promoting a specific product or offer on a homepage while retaining the core brand visuals elsewhere. The image below is an example of a web banner in development. We’ve put together a handy guide for creating your banners – click here.

Adding new features

New features on your e-commerce website can add value through improved functionality, which in turn enhances usability for customers and users. Examples include features that allow easy modification of the products customers wish to purchase, such as different colours or quantities, or a social login function that enables users to create an account through their Facebook credentials. Such responsive features make the e-commerce process as painless and easy to use as possible, limiting the barriers between browsing and purchase, in turn improving conversion rates and the chances of customers returning in the future. A positive experience can often leave the customer wanting more, and it’s the website’s job to ensure that its features and functionality are kept updated and fit for purpose, in response to the ever-changing demands of the modern e-commerce customer. For example, one of the new features we’ve recently added from Amasty is the Social Login, which allows users to set up their account using login credentials from Facebook. To find out more about this feature, click here.

Improve optimisation

While you’re reading this, grab a smartphone or tablet and have a browse around your website.
How does it look? Are the images stacked or overlapping, or is there text missing? These issues mean your website has not been optimised for mobile devices, making it unusable for a large percentage of potential customers browsing with their iPhones or Samsungs. Users are extremely unlikely to want to fight through images and text to find the products they want, and will quickly become frustrated and depart for a different site. Don’t neglect these customers! Get your site optimised for different devices to reach as wide an audience as possible.

Data from 2018, shown in the graph below from Statista.com, shows that 52.2% of all online browsing was done on a mobile device, a trend which has grown year on year. This graph underlines the importance of ensuring that your website is fit for use for all potential users. You’re potentially missing out on reaching these customers if your site doesn’t meet their demands, and, with the trend of mobile browsing only set to rise, optimising your website is quickly becoming a necessity for online retailers. Our Liquidshop e-commerce platform is designed to provide the best user experience for your customers, through responsive e-commerce. Optimisation on devices of all sizes allows your website to be user-friendly for as many potential visitors as possible, expanding your reach and enhancing the user experience, leading to increased sales as part of a smooth and responsive overall e-commerce experience.

Keeping branding updated and consistent

There are few things more off-putting when navigating onto an e-commerce website than a poorly designed logo at the top of the page, or old, pixelated imagery taking up the homepage. A consistent brand image across pages improves brand recognition for customers and gives the impression of a modern, well-designed and cared-for website and business as a whole.
You can also create special themed logos for holiday times such as Christmas or winter, like we did with our logo below. What’s most important is to put time and effort into keeping your website updated. Whether that’s imagery, information or branding, putting the time into maintaining an attractive and cohesive e-commerce site means you keep your customers and new visitors engaged, and ensures that there are as few barriers as possible between browsing and purchase.