Life cycle assessment of permanent magnet electric traction motors
Journal article, 2019
Ongoing development of electrified road vehicles entails a risk of conflict between resource issues and the reduction of greenhouse gas emissions. In this study, the environmental impact of the core design and magnet material for three electric vehicle traction motors was explored with life cycle assessment (LCA): two permanent magnet synchronous machines with neodymium-dysprosium-iron-boron or samarium-cobalt magnets, and a permanent magnet-assisted synchronous reluctance machine (PM-assisted SynRM) with strontium-ferrite magnets. These combinations of motor types and magnets, although highly relevant for vehicles, are new subjects for LCA. The study included substantial data compilation, machine design and drive-cycle calculations. All motors handle equal take-off, top speed, and driving conditions. The production (excluding the magnets) and use phases are modeled for two countries – Sweden and the USA – to exemplify the effects of different electricity supply. Impacts on climate change and human toxicity were found to be most important. Complete manufacturing falls within 1.7–2.0 g CO2-eq./km for all options. The PM-assisted SynRM has the highest efficiency and the lowest CO2 emissions. Copper production is significant for toxicity impacts and effects on human health, with problematic emissions from mining. Resource depletion results diverge depending on the evaluation method, but a sensitivity analysis proved the other results to be robust. Key motor design targets are identified: high energy efficiency, slender housings, compact end-windings, segmented laminates to reduce production scrap, and easy disassembly.
Keywords: Life cycle assessment (LCA), Magnet, Electric motor, Neodymium, Samarium, Ferrite
0.9378
FineWeb
Scientists in Germany recently published a study in which they took a new approach to analyzing nanofiltration membranes. They used a methodology called “Thinking in terms of Structure-Activity-Relationships” (AKA T-SAR) that was first introduced in 2003 to determine the properties and the effects of different substance classes on biological systems. T-SAR was applied here to see if it could provide them with a better understanding of the NF membrane as well as predict the membrane’s performance for the recovery of ionic fluids. T-SAR analysis makes it possible to analyze a chemical compound using only its three-dimensional chemical structure, but the process is made more difficult and complex as the size of the molecule increases. This characteristic of T-SAR creates a problem for NF materials. In order to overcome it, the researchers combined T-SAR methods with traditional membrane characterization procedures to gather more conclusive evidence on the importance of chemical structure for separation performance. The algorithm to conduct the T-SAR analysis of a chemical compound includes 17 steps in the areas of: Chemical Structure, Stereochemistry, Molecular Interaction Potentials, and Reactivity. The materials involved for this experiment included two NF polyamide membranes (FilmTec NF-90 and NF-270) and three ionic liquids. In order to prep these membranes for T-SAR analysis, they were first subjected to some baseline analysis such as confirming their composition through spectroscopy and determining their pure water capability with an HP4750 stirred cell. The ionic fluids were tempered with deionized water to reduce the influence of additional ions and then cycled through the HP4750 to make samples of the feed, retentate, and permeate for ion-chromatography analysis. After this preparation and traditional analysis, the materials were then subject to the full T-SAR analysis procedure to determine if it really can be used to understand NF membranes and predict their performance. You’ll have to look at the full report for all of the detailed results of the T-SAR analysis. After all this work, the authors concluded that, “the experimental values obtained for the filtration of such ionic liquids are in good agreement with the predictions.” So it looks like T-SAR methodology might be used more often in NF membrane experiments! Sehr gut! Read the complete report here.
0.9245
FineWeb
Next-to-next-to-leading-order Collinear and Soft Gluon Corrections for T-channel Single Top Quark Production I present the resummation of collinear and soft-gluon corrections to single top quark production in the t channel at next-to-next-to-leading logarithm accuracy using two-loop soft anomalous dimensions. The expansion of the resummed cross section yields approximate next-to-next-to-leading-order cross sections. Numerical results for t-channel single top quark (or single antitop) production at the Tevatron and the LHC are presented, including the dependence of the cross sections on the top quark mass and the uncertainties from scale variation and parton distributions. Combined results for all single top quark production channels are also given.
0.5659
FineWeb
Sports anemia refers to a period in early training when athletes may develop low blood hemoglobin for a while, and likely reflects a normal adaptation to physical training. Aerobic training enlarges the blood volume and, with the added fluid, the red blood cell count per unit of blood drops. While true anemia requires treatment, the temporary reduced red blood cell count seen early in training goes away by itself after a time. Physically active young women, especially those who engage in such endurance activities as distance running, are prone to iron deficiency. Research studies show that as many as 45% of female runners of high school age have low iron stores. Iron status may be affected by exercise in a number of ways. One possibility is that iron is lost in sweat; although the sweat of trained athletes contains less iron than the sweat of others (itself probably an adaptation to conditioning), athletes sweat more copiously than sedentary people. Another possible route to iron loss is red blood cell destruction: blood cells are squashed when body tissues (such as the soles of the feet) make high-impact contact with an unyielding surface (such as the ground). In addition, in some athletes at least, physical activity may cause small blood losses through the digestive tract. Finally, the habitually low intake of iron-rich foods, combined with iron losses aggravated by physical activity, leads to iron deficiency in physically active individuals. Iron deficiency impairs physical performance because iron is crucial to the body's handling of oxygen. Since one consequence of iron deficiency anemia is impaired oxygen transport, aerobic work capacity is reduced and the person is likely to tire very easily. Whether marginal deficiency without anemia impairs physical performance remains a point of continual debate among researchers. Physical activity can also produce a hemolytic anemia caused by repetitive blows to the surfaces of the body. This condition was first noticed in soldiers after long forced marches (march hemoglobinuria). Today, it is more often seen in long-distance runners, since soldiers are now better equipped with protective foot gear. March hemoglobinuria can also result from repeated blows to other body parts, and has been observed in martial arts and in players of conga and bongo drums.
0.9555
FineWeb
F# Minor 7th Piano Chord

The Notes in an F# Minor 7th Chord

The Root
The root is the bottom note of the chord, the starting point to which the other notes relate. The root of an F# Minor 7th chord is F#.

The Min 3rd
The minor third of an F# Minor 7th chord is A. The minor third is up three half-steps from the Root. Finding A from F# step by step:
- Start on: F#
- Step 1: move up to G
- Step 2: move up to G#
- Step 3: Land on A
- G is a minor second above F#.
- G# is a major 2nd above F#.
- A is a minor third above F#.

The Min 7th
The minor seventh of an F# Minor 7th chord is E. The minor seventh is down two half-steps from the Root. Finding E from F# step by step:
- Start on: F#
- Step 1: move down to E#
- Step 2: Land on E
- E# is a minor second below F#.
- E is a major 2nd below F#.
The min 7th is down a major 2nd? Confusing, right? The note E is down 2 half-steps from F#, but up 10 half-steps from F#.

The Inversions of F# Minor 7th

How to find F# Minor 7th with my three-finger method
This is the method taught in my book "How to Speed Read Piano Chord Symbols".
Step 1) Use the Fourth. Find the Root and the Fourth up from the Root. (See my tutorial on finding fourths.)
Step 2) Move the right hand down. Move both fingers in the right hand a whole-step down (two keys to the left on the piano). (A half-step is the next key.)

How to Find 7th Chords with Nate's Three Finger Method
- Major 7th chords: bring both fingers down a half-step
- Minor 7th chords: bring both fingers down a whole-step
- Dominant 7th chords: bring the Root down a whole-step, the fourth down a half-step
- Diminished 7th chords: bring the Root down a minor third, the fourth down a whole-step

If you would like to learn more about my method, pick up "How to Speed Read Piano Chord Symbols".
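As a rough illustration of the half-step arithmetic above (our own sketch, not part of the original lesson), the chord tones can be found by counting semitones up from the root: 3 to the minor third, 7 to the fifth, and 10 to the minor seventh (the same note name as 2 half-steps down, an octave higher):

    # Sketch: spell a minor 7th chord by counting half-steps up from the root.
    # Note names are simplified to sharps only, so some spellings (e.g. E#)
    # differ from what a theory text would use.
    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def minor_seventh_chord(root):
        """Return [root, minor 3rd, perfect 5th, minor 7th] as note names."""
        start = NOTES.index(root)
        intervals = [0, 3, 7, 10]  # half-steps above the root
        return [NOTES[(start + i) % 12] for i in intervals]

    print(minor_seventh_chord("F#"))  # ['F#', 'A', 'C#', 'E']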
0.9891
FineWeb
Problem: Find the largest size set of edges S ⊆ E such that each vertex in V is incident to at most one edge of S.

Excerpt from The Algorithm Design Manual: Consider a set of employees, each of whom is capable of doing some subset of the tasks that must be performed. We seek to find an assignment of employees to tasks such that each task is assigned to a unique employee. Each mapping between an employee and a task they can handle defines an edge, so what we need is a set of edges with no employee or job in common, i.e. a matching. Efficient algorithms for constructing matchings are based on constructing augmenting paths in graphs. Given a (partial) matching M in a graph G, an augmenting path P is a path of edges where every odd-numbered edge (including the first and last edge) is not in M, while every even-numbered edge is. Further, the first and last vertices must not be already in M. By deleting the even-numbered edges of P from M and replacing them with the odd-numbered edges of P, we enlarge the size of the matching by one edge. Berge's theorem states that a matching is maximum if and only if it does not contain any augmenting path. Therefore, we can construct maximum-cardinality matchings by searching for augmenting paths and stopping when none exist.

Recommended books:
- Algorithms in Java, Third Edition (Parts 1-4) by Robert Sedgewick and Michael Schidlowsky
- Network Flows: Theory, Algorithms, and Applications by Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin
- Computational Discrete Mathematics: Combinatorics and Graph Theory with Mathematica by S. Pemmaraju and S. Skiena
- Introduction to Algorithms by T. Cormen, C. Leiserson, R. Rivest, and C. Stein
- The Stable Marriage Problem: Structure and Algorithms by D. Gusfield and R. Irving
- Introduction to Algorithms by U. Manber
- Matching Theory by L. Lovasz
- Data Structures and Network Algorithms by R. Tarjan
- Combinatorial Optimization: Algorithms and Complexity by C. Papadimitriou and K. Steiglitz
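The augmenting-path idea is easiest to see in the bipartite employee/task setting of the excerpt. Below is a minimal Python sketch (ours, not taken from any of the books listed) of the standard augmenting search: for each employee we look for an alternating path ending at a free task, and flipping that path grows the matching by one edge. General (non-bipartite) graphs need the more involved blossom algorithm.

    # Maximum bipartite matching via augmenting paths.
    # graph maps each left vertex (employee) to the right vertices (tasks) it can take.
    def max_bipartite_matching(graph):
        match = {}  # task -> employee currently assigned to it

        def try_augment(u, visited):
            for v in graph[u]:
                if v in visited:
                    continue
                visited.add(v)
                # v is free, or the employee holding v can be shifted to another task:
                # either way we have found an augmenting path ending at v.
                if v not in match or try_augment(match[v], visited):
                    match[v] = u
                    return True
            return False

        size = 0
        for u in graph:
            if try_augment(u, set()):
                size += 1
        return size, match

    jobs = {"alice": ["t1", "t2"], "bob": ["t1"], "carol": ["t2", "t3"]}
    print(max_bipartite_matching(jobs))  # (3, {'t1': 'bob', 't2': 'alice', 't3': 'carol'})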
0.7436
FineWeb
The BehaviorType is one of the foundational MAEC types, and serves as a method for the characterization of malicious behaviors found or observed in malware. Behaviors can be thought of as representing the purpose behind groups of MAEC Actions, and are therefore representative of distinct portions of higher-level malware functionality. Thus, while a malware instance may perform some multitude of Actions, it is likely that these Actions represent only a few distinct behaviors. Some examples include vulnerability exploitation, email address harvesting, the disabling of a security service, etc. The required id field specifies a unique ID for this Behavior. The ordinal_position field specifies the ordinal position of the Behavior with respect to the execution of the malware. The status field specifies the execution status of the Behavior being characterized. The duration field specifies the duration of the Behavior. One way to derive such a value may be to calculate the difference between the timestamps of the first and last actions that compose the behavior. The Purpose field specifies the intended purpose of the Behavior. Since a Behavior is not always successful, and may not be fully observed, this is meant as way to state the nature of the Behavior apart from its constituent actions. The Description field specifies a prose textual description of the Behavior. The Discovery_Method field specifies the method used to discover the Behavior. The Action_Composition field captures the Actions that compose the Behavior. The Associated_Code field specifies any code snippets that may be associated with the Behavior. The Relationships field specifies any relationships between this Behavior and any other Behaviors.
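Purely as an illustration of the fields described above (MAEC itself is defined by an XML schema; this record shape and the values in it are hypothetical, not the official binding), a Behavior could be pictured like this:

    # Hypothetical shape of a BehaviorType record, mirroring the prose above.
    # Field names follow the descriptions; the real MAEC binding is XML-based.
    behavior = {
        "id": "example-bhv-1",                 # required unique ID
        "ordinal_position": 2,                 # position in the malware's execution order
        "status": "Success",                   # execution status of the Behavior
        "duration": "PT4S",                    # e.g. last action timestamp minus first
        "Purpose": "Disable a security service",
        "Description": "Stops the host antivirus service before contacting its server.",
        "Discovery_Method": "Dynamic analysis in a sandbox",
        "Action_Composition": ["action-12", "action-13"],  # Actions composing the Behavior
        "Associated_Code": [],                 # optional associated code snippets
        "Relationships": [],                   # links to other Behaviors
    }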
0.8862
FineWeb
DR. MARTIN LUTHER KING, JR. 50 years ago, Martin Luther King Jr. gave one of the most famous and influential speeches in American history. The "I Have a Dream" speech was effective not just for its words, but also for Dr. King's impassioned delivery. It represented the feelings of millions of people fighting for civil liberties. The speech, given by a lesser man in a lesser setting, may not have earned the same attention. Dr. King knew that if he were to truly help bring about change, he would need a speech and setting that would inspire. The March on Washington and the "I Have a Dream" speech caught the attention of a nation, and brought it closer to the much-needed change. eSpeakers believes in the power of great speeches like the "I Have a Dream" speech, and in great speakers like Dr. Martin Luther King, Jr. To honor his speech given 50 years ago, eSpeakers has created an infographic in commemoration of that great moment in American history. You can view the infographic below. Click this link to see the full inspiring infographic: Celebrating 50 Years of the "I Have a Dream" Speech Infographic. To find great and inspiring speakers for your own event, consider searching eSpeakers Marketplace.
0.8576
FineWeb
As part of our Sonic Kayak project, we have been looking at adding new sensors to the system. These are our notes from our research and prototyping. Since we started the Sonic Kayak project, a few people have asked us whether we could add a turbidity sensor – they were interested in using it to monitor algal blooms in an EcoPort, monitor cyanobacteria for a water company, and taking water quality readings for seaweed farming. Turbidity sensors give a measurement of the amount of suspended solids in water – the more suspended solids, the higher the turbidity level (cloudiness) of the water. The most basic approach to working out water turbidity is to use something called a Secchi disk. These are plain white or black and white circular disks that are lowered slowly into the water, and the depth at which the disk is no longer visible is a rough measure of the cloudiness of the water. This is a great low-key approach, but the result is greatly affected by other factors such as the amount of daylight. More accurate equipment tends to use a light source and a light receptor, with the water placed in between – the amount of light that reaches the receptor from the light source gives a reading of how turbid the water is. There are several pre-existing publications on how to make open source turbidity sensors (e.g. this and this). For the Sonic Kayaks, we sonify sensor data in realtime, and record the data every second for environmental mapping. This means we need to make a sensor that logs realtime continuous data and can be integrated into the existing Sonic Kayak kit, as opposed to a system where you take a one-off sample of water and run it through a separate piece of equipment in a laboratory. We based our initial prototyping on the writeup found here. The basic electronics were tested on an Arduino Genuino Uno, with the modification in the code from pin ‘D1’ → ‘1’ (as D1 is not recognised as a pin number), and the addition of a 560Ω resistor for the white LED. We cut the ends off a 50ml Falcon tube as this was the only tube-shaped thing we had available, drilled small holes for the LED and LDR, and sprayed the tube matt black on the inside and outside to reduce reflectivity from the shiny plastic tube. The LED and LDR were fixed in place using hot glue, wires soldered directly to the components, and the whole thing coated in bioresin for waterproofing (Fig 1). For testing, we submerged the sensor in water for 20 minutes to check the waterproofing. We then took a sample of tap water, added a small amount of black acrylic paint, and did a series of arbitrary dilutions. LDRs decrease resistance with light intensity – so when more light hits the sensor, the less resistance there is, and the higher the voltage reading is, resulting in a higher numerical output. The numerical output is related to the voltage coming in, with an analogue to digital conversion (10 bit) applied such that 0V=0 and 5V=1023. If required, it is possible to do a lookup from the specific LDR sensor curve data to work out the voltage from the numerical output. The turbidity sensor v1 prototype returned reasonably consistent numerical values that related well to the types of results we might expect (any turbidity sensor would need to be calibrated with known samples before use). Fig 1. Prototype v1 – Test build and wiring. Fig 2. Test dilutions for prototype v1, with numerical output ranges. 
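For reference, a minimal sketch of the analogue-to-digital conversion described above (10-bit, a 0V input reading as 0 and 5V as 1023); the example readings are made up, and a real sensor would still need calibrating against known samples before the numbers mean anything in turbidity units:

    # Convert a 10-bit ADC reading from the LDR divider back to a voltage,
    # assuming a 5V reference as on the Arduino prototype (0 -> 0V, 1023 -> 5V).
    def adc_to_voltage(reading, vref=5.0, full_scale=1023):
        return reading * vref / full_scale

    # Hypothetical readings from a dilution series: more suspended solids block
    # more light, so less light reaches the LDR and the reading drops.
    for reading in (900, 700, 450, 200):
        print(f"ADC {reading:4d} -> {adc_to_voltage(reading):.2f} V")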
Moving on from the proof of principle prototype v1, we made a larger turbidity sensor for prototype v2 using 40mm black plumbing pipe with longer wiring that could reach from under the kayak to the main electronics box on top of the kayak, with a single multicore cable (old network cable) that split to meet the LDR and LED. Once the components were soldered to the wiring, we used liquid electrical tape to waterproof the components and bare wire before glue-gunning the components into the tube. The cable join was then bonded to the pipe using self-amalgamating waterproof tape, just to make this weak point more robust. For this version, a mesh made from a small square of Stay Put was attached to each end of the main tube using cable ties and thin rope, to act as a filter to stop things like seaweed entering the tube. Small fishing weights were also attached to each end of the tube to pull the sensor down underwater. Fig 3. Prototype v2. Version 2 of the turbidity sensor was integrated into the Sonic Kayak system for preliminary testing. It survived a 20 minute trip out on a lake, which is a good proof of concept for the electronics waterproofing (which is in some ways the hardest bit of the Sonic Kayak project). When paddling, the sensor stayed at a reasonably constant depth but travelled sideways – ideally it would travel in line with the kayak, with the tube entrance/exit facing the front/back of the boat. Some options for improving this include fixing it to the kayak in some way, or designing fins attached to the tube (e.g. by 3D printing the housing as a single piece). The sensor was tested at the same time as two temperature sensors and a hydrophone, and we definitely need to work on making the sonifications from each sensor more distinct, as it became a cacophony of confusing noise rather than an informative and beautiful sonic experience. The use of mesh over each end served its purpose, but a more robust solution might again be to include this as part of a 3D printed housing, or perhaps find a local bar with a stash of politically-unusable plastic straws, chop these up into small lengths, and fill the ends of the tube with them. As it stands, we have proof of principle that this DIY sensor approach is viable, but will need to do some more work on correcting the flow direction and sonification integration before we will be happy with it. Fig 4. Turbidity sensor prototype and the Sonic Kayak system - there are 3 kits in this photo, it's the one on the right!

Air quality sensors
This time we are following a hunch rather than pursuing an externally requested direction. Our studio is based on the edge of Falmouth harbour. This is a working harbour, used by small commercial fishing businesses, a large shipyard, houseboats, and recreational water users including yacht enthusiasts, kayakers and swimmers. Many of these users dump waste, sewage and fuel straight into the water - we routinely see slicks of fuel on the water surface and see/smell clouds of pollution in the air, and then see children jumping in the water for a swim or kayakers paddling through. To the best of our knowledge, nobody has mapped air pollution over water, yet we believe it is likely that the local industry and other users are causing air pollution lying low over the water that could be highly damaging to the health of people and other animals that spend time on the water surface (like birds and seals). So we have started looking at integrating air quality sensors onto the Sonic Kayaks.
This process begins with needing to understand the various pollutants. [apologies for the lack of subscript for the molecular formulas, it's a limitation of our web design] Defra (the UK Government Department for Environment, Food & Rural Affairs) says this: "Shipping is a growing sector but one of the least regulated sources of emissions of atmospheric pollutants. Shipping makes significant contributions to emissions of nitrogen oxide (NOx) and sulphur dioxide (SO2) gases, to primary PM2.5 and PM10 (particulate matter, PM with diameter less than 2.5 micrometres and 10 micrometres respectively), which includes emissions of black carbon, and to carbon dioxide. Chemical reactions in the atmosphere involving NOx and SO2, and ammonia (NH3) gas emitted from land sources (principally associated with agriculture), lead to the formation of components of secondary inorganic particulate matter. These primary and secondary pollutants derived from shipping emissions contribute to adverse human health effects in the UK and elsewhere (including cardiovascular and respiratory illness and premature death), as well as environmental damage through acidification and eutrophication."

For a little more information on the NOx and SO2 interactions they also say this: "PM2.5 can also be formed from the chemical reactions of gases such as sulphur dioxide (SO2) and nitrogen oxides (NOx: nitric oxide, NO plus nitrogen dioxide, NO2)"

This is a totally new area for me, so my first thoughts were to go through these different pollutants and dig into what their health impacts are. The clearest information seems to be about particulate matter pollution; for example, I found this from the NHS about particulate matter, saying that 'safe levels' are not actually safe: "As a general rule, the lower the PM, the more dangerous the pollutant is, as very small particles are more likely to bypass the body's defences and potentially cause lung and heart problems." I've also gathered together some exposure guidelines from the World Health Organisation, the Environmental Protection Agency, and other reasonably reputable sources – the units of measurement are often different, and the limits differ depending on where you look, but it's a start and I now feel reasonably confident that these are the pollutants that matter in our context:

Pollutant: Nitrogen dioxide (NO2)
Exposure guidelines: WHO: 40 μg/m3 annual mean, 200 μg/m3 1-hour mean
Health impacts: Causes inflammation of the airways at high levels. Can decrease lung function, increase the risk of respiratory conditions and increase the response to allergens. Defra estimates that the UK death rate is 4% higher due to nitrogen dioxide pollution – around 23,500 extra deaths per year.

Pollutant: Sulphur dioxide (SO2)
Exposure guidelines: AEGL-1 (nondisabling – may be problematic for asthmatics) 0.20ppm, AEGL-2 (disabling) 0.75ppm, AEGL-3 (lethal) 30ppm for 10 mins – 9.6ppm for 8h. WHO: 20 μg/m3 24-hour mean, 500 μg/m3 10-minute mean
Health impacts: Sulfur dioxide irritates the skin and mucous membranes of the eyes, nose, throat, and lungs. High concentrations can cause inflammation and irritation of the respiratory system. The resulting symptoms can include pain when taking a deep breath, coughing, throat irritation, and breathing difficulties. High concentrations can affect lung function, worsen asthma attacks, and worsen existing heart disease in sensitive groups.

Pollutant: Carbon monoxide (CO)
Exposure guidelines: AEGL-1 (nondisabling) – not recommended because susceptible persons may experience more serious effects at concentrations that do not affect the general population. AEGL-2 (disabling) 420ppm for 10 mins – 27ppm for 8h. AEGL-3 (lethal) 1800ppm for 10 mins – 130ppm for 8h
Health impacts: Carbon monoxide enters your bloodstream and mixes with haemoglobin to form carboxyhaemoglobin. When this happens, the blood is no longer able to carry oxygen, and this lack of oxygen causes the body's cells and tissue to fail and die. A tension-type headache is the most common symptom of mild carbon monoxide poisoning. Other symptoms include: dizziness, feeling and being sick, tiredness and confusion, stomach pain, shortness of breath and difficulty breathing. Long-term exposure to low levels of carbon monoxide can lead to neurological symptoms like difficulty thinking or concentrating, and frequent emotional changes.

Pollutant: Ammonia (NH3)
Exposure guidelines: AEGL-1 (nondisabling) 30ppm, AEGL-2 (disabling) 220ppm for 10 mins, 110ppm for 8h, AEGL-3 (lethal) 2700ppm for 10 mins, 390ppm for 8h
Health impacts: Irritation to eyes, nose, throat; dyspnea (breathing difficulty), wheezing, chest pain; pulmonary edema; pink frothy sputum; skin burns, vesiculation.

Pollutant: Fine particulate matter: Primary PM2.5
Exposure guidelines: 'there is understood to be no safe threshold below which no adverse effects would be anticipated'. 7% increase in mortality with each 5 micrograms per cubic metre increase in particulate matter with a diameter of 2.5 micrometres (PM2.5). European annual mean limit of 25μg/m3. World Health Organisation: 10 μg/m3 annual mean, 25 μg/m3 24-hour mean
Health impacts: Particles in the PM2.5 size range are able to travel deeply into the respiratory tract, reaching the lungs. Exposure to fine particles can cause short-term health effects such as eye, nose, throat and lung irritation, coughing, sneezing, runny nose and shortness of breath. Exposure to fine particles can also affect lung function and worsen medical conditions such as asthma and heart disease. Scientific studies have linked increases in daily PM2.5 exposure with increased respiratory and cardiovascular hospital admissions, emergency department visits and deaths. Studies also suggest that long term exposure to fine particulate matter may be associated with increased rates of chronic bronchitis, reduced lung function and increased mortality from lung cancer and heart disease. People with breathing and heart problems, children and the elderly may be particularly sensitive to PM2.5.

Pollutant: Coarse particulate matter: Primary PM10
Exposure guidelines: World Health Organisation: 20 μg/m3 annual mean, 50 μg/m3 24-hour mean.
Health impacts: As for PM2.5, but these coarser particles are of less risk than PM2.5.

The next step is to look at sensors. Via the wonders of Twitter, we were recommended Alphasense for pre-made gas sensors. Apparently this technology is hard to calibrate, and cheaper sensors tend to drift in their calibration, so we might end up only being able to look at relative values if we were to produce a map of air quality over water. This might be OK, but it would be nicer to be able to compare against the 'safe' exposure limits. One option might be to calibrate against a more professional/pricey sensor setup at a fixed location before and after doing the mapping. Since the world of gas sensing is mainly done using nanotechnology, it's probably currently a bit out of scope for in-house DIY approaches. As a compromise, we thought it was worth trying an Enviro+, which is a premade add-on for a Raspberry Pi that measures air quality (pollutant gases), temperature, pressure, humidity, light, and noise level.
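Whichever sensors we end up with, comparing calibrated readings against the quoted limits is the easy part; below is a small sketch of our own (not part of any sensor library), using only the WHO guideline figures listed above and assuming readings already arrive calibrated in μg/m3, which, as discussed, is the hard bit:

    # WHO short-term guideline values quoted above, in micrograms per cubic metre.
    # Note the averaging periods differ per pollutant (1-hour, 10-minute, 24-hour).
    WHO_SHORT_TERM_LIMITS = {
        "NO2": 200.0,   # 1-hour mean
        "SO2": 500.0,   # 10-minute mean
        "PM2.5": 25.0,  # 24-hour mean
        "PM10": 50.0,   # 24-hour mean
    }

    def flag_reading(pollutant, value_ug_m3):
        """Label a calibrated reading as within or above the WHO guideline."""
        limit = WHO_SHORT_TERM_LIMITS[pollutant]
        status = "above" if value_ug_m3 > limit else "within"
        return f"{pollutant}: {value_ug_m3:.1f} ug/m3 ({status} the {limit:.0f} ug/m3 guideline)"

    print(flag_reading("PM2.5", 31.0))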
Fig 5. The Enviro+ that we tried and blew up

We had a go at integrating an Enviro+ into the Sonic Kayak system (no easy job given the number of different sensors we're now trying to run), and got it working alongside the prototype turbidity sensor. The analogue to digital converter on the Enviro+ is higher resolution than the one we already had on the Arduino or ATmega328 chip that we use, which is great because it means the data is more sensitive. The LCD screen was a nice touch and could prove useful for debugging. There's an obvious problem with the design limitations though, as all our kit is sealed inside a waterproof box, with cable glands to pass wiring through the box – an air quality sensor needs to be exposed to the air, so we'd need to think about the design practicalities, including waterproofing. Sadly we blew up our Enviro+ by later trying to power it from the 5V and ground pins rather than plugging it into the GPIO, as we need that free for our GPS and other sensors. Probably we just blew up the voltage regulator and could re-use the sensor components themselves.

Since it seemed technically viable, we looked a bit more into what the Enviro+ is actually measuring. The makers say: "The analog gas sensor can be used to make qualitative measurements of changes in gas concentrations, so you can tell broadly if the three groups of gases are increasing or decreasing in abundance. Without laboratory conditions or calibration, you won't be able to say "the concentration of carbon monoxide is n parts per million", for example. Temperature, air pressure and humidity can all affect particulate levels (and the gas sensor readings) too, so the BME280 sensor on Enviro+ is really important to understanding the other data that Enviro+ outputs."

Looking into these 'three groups of gases', it turns out that they basically have 3 sensors which detect carbon monoxide (CO, reducing), nitrogen dioxide (NO2, oxidising) and ammonia (NH3). But these sensors are also sensitive to other very common gases (like hydrogen!), which means that the output from a sensor doesn't necessarily reflect the amount of the gas you are interested in; it might reflect a mix of gases. Again calibration is an issue, so we'd only ever be likely to be looking at relative values, and also we wouldn't be sure what gases we were actually detecting. It seems like low-cost research-grade gas sensing is still a little way off. The exception seems to be NH3, which might not be worthwhile detecting in its own right, as it only really seems to be an issue because it is a precursor for particulate matter: "As a secondary particulate precursor, NH3 also contributes to the formation of particulate aerosols in the atmosphere. Particulate matter is an important air pollutant due to its adverse impact on human health and NH3 is therefore also indirectly linked to effects on human health"

In the interests of getting something up and running quickly that fits with our open hardware ethos, we may be better off starting by just looking at particulate matter. Our brilliant friend and data visualiser, Miska Knapek, pointed us towards Luftdaten, which he is currently working on. They have designed and published plans for a fine particulate matter (PM2.5) sensor that is open source and Arduino-based. The challenge with this is going to be waterproofing it for use on the boats, as unlike rain, water when kayaking can come from all directions, including all at once if you capsize.
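Whichever particulate sensor we choose, most of the low-cost fan-plus-laser units stream fixed-length binary frames over a serial UART, so logging a reading every second alongside the other Sonic Kayak sensors looks feasible. The sketch below is ours and assumes a Plantower PMS5003-style frame (0x42 0x4D header, 32 bytes, big-endian 16-bit fields) on the Raspberry Pi UART at /dev/serial0; check the datasheet of the actual sensor before relying on any of this:

    import struct
    import serial  # pyserial

    # Sketch: read PMS5003-style frames and return the "atmospheric" PM values.
    # Port name, baud rate and frame layout are assumptions, not a tested setup.
    def read_frame(ser):
        while True:
            # Hunt for the two start characters, then read the rest of the frame.
            if ser.read(1) != b"\x42":
                continue
            if ser.read(1) != b"\x4d":
                continue
            body = ser.read(30)
            if len(body) != 30:
                continue
            fields = struct.unpack(">15H", body)  # checksum verification omitted
            # fields[0] is the payload length; fields[4:7] are PM1.0/PM2.5/PM10
            # under "atmospheric environment" conditions, in micrograms per m3.
            return {"pm1.0": fields[4], "pm2.5": fields[5], "pm10": fields[6]}

    if __name__ == "__main__":
        port = serial.Serial("/dev/serial0", baudrate=9600, timeout=2.0)
        while True:
            print(read_frame(port))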
There are also pre-made cheap (£25) particulate sensors, for example this one which is small enough to use on a kayak at ~5cm and is designed to work with the Enviro+ and Raspberry Pi. These have fans to suck air through them and a laser to detect the number and size of particles in the air, and they work for various sizes of particulate matter (PM1.0, PM2.5 and PM10). This is all a very new area for us (and it’s a big area!), so if we’ve made any mistakes or missed anything obvious we’d love to hear your ideas. It seems very feasible to add turbidity and particulate matter sensors, so if you’re interested in using these then it would also be helpful to get in touch, as we’ll need examples of practical uses if we’re to look for some funding to support adding these. This R&D work has been funded by Smartline (European Regional Development Fund).
0.8261
FineWeb
What an English homework helper can do for you
Homework helpers are all the rage these days. If you are not familiar with the term, these online services help you with your homework. You will find quite a number of agencies that provide homework help if you Google the term. It is the perfect answer for all the busy students burdened with extra work and studies. It is especially useful for students who work after school or have other responsibilities that do not leave enough time to tackle Mt. Homework every day. An English homework helper can help you with:
- 1. Guidance: If all you need is some pointers every now and then, you can ask your English homework helper to customize the help in such a way that you get assistance with the tasks assigned at school.
- 2. Lessons: Online lessons can come in many forms. Your homework helper can provide you with audio/video lessons. Some homework helpers offer live lessons online by expert tutors.
- 3. Notes: Your English homework helper will give you lecture notes and other texts for your use at your leisure. This works best in combination with tutoring and audio/video lessons.
- 4. Writing help: This is where things get interesting. Suppose you are given an essay-writing task, or you are required to write a term paper. Suppose you are not in a position to write it for some reason, or are not very good at English writing. What do you do? You get in touch with a homework helper and either a) get writing tips and guidelines, or b) have it professionally written!
- 5. Doing your homework for you: And this is where it crescendos: you can outsource your homework to a homework helper if you feel that no amount of assistance can solve your English homework problems.
While homework help is legal, you will find some bad-quality online agencies. These will charge you average or lower fees, and will provide you with essays and assignments that are either rehashes of past essays or outright plagiarism. Do your background research before registering with an agency and paying their fees. Ask friends, acquaintances, and classmates for a recommendation. You can also visit online students' forums, blogs, and listings to learn about reliable agencies. Do not fall for great-sounding cheap packages. You will avoid a lot of headache and heartache if you select the right agency to assist you with your homework.
0.7093
FineWeb
When I had that same problem it turned out my fuel distributor went bad. The part I keep asking about and never get an answer to: why, when the EHA reads rich, does the duty cycle read lean, just like in your pictures, and then when the EHA reads lean the duty cycle reads rich? Steve said he was worried about the EHA reading, but all the info refers to X11 for all diagnostics. If you modified your air intake to bring in more air then you need to richen the fuel a little to match the air coming in. More air than fuel chokes the engine, and vice versa. Also, why is it everybody describes the same exact problem but the fix is different?
0.5761
FineWeb
Subject: Analysis - Internal Rate of Return (IRR)
Last-Revised: 25 June 1999
Contributed-By: Christopher Yost (cpy at world.std.com), Rich Carreiro (rlcarr at animato.arlington.ma.us)

If you have an investment that requires and produces a number of cash flows over time, the internal rate of return is defined to be the discount rate that makes the net present value of those cash flows equal to zero. This article discusses computing the internal rate of return on periodic payments, which might be regular payments into a portfolio or other savings program, or payments against a loan. Both scenarios are discussed in some detail.

We'll begin with a savings program. Assume that a sum "P" has been invested into some mutual fund or like account and that additional deposits "p" are made to the account each month for "n" months. Assume further that investments are made at the beginning of each month, implying that interest accrues for a full "n" months on the first payment and for one month on the last payment. Given all this data, how can we compute the future value of the account at any month? Or if we know the final value of the account and the investments made over time, what was the internal rate of return? The relevant formula that will help answer these questions is:

F = -P(1+i)^n - [p(1+i)((1+i)^n - 1)/i]

- "F" is the future value of your investment; i.e., the value after "n" months or "n" weeks or "n" years (whatever the period over which the investments are made)
- "P" is the present value of your investment; i.e., the amount of money you have already invested (a negative value - see below)
- "p" is the payment each period (a negative value - see below)
- "n" is the number of periods you are interested in (number of payments)
- "i" is the interest rate per period.

Note that the symbol '^' is used to denote exponentiation (for example, 2 ^ 3 = 8).

Very important! The values "P" and "p" above should be negative. This formula and the ones below are devised to accord with the standard practice of representing cash paid out as negative and cash received (as in the case of a loan) as positive. This may not be very intuitive, but it is a convention that seems to be employed by most financial programs and spreadsheet functions.

The formula used to compute loan payments is very similar, but as is appropriate for a loan, it assumes that all payments "p" are made at the end of each period:

F = -P(1+i)^n - [p((1+i)^n - 1)/i]

Note that this formula can also be used for investments if you need to assume that they are made at the end of each period. With respect to loans, the formula isn't very useful in this form, but by setting "F" to zero, the future value (one hopes) of the loan, it can be manipulated to yield some more useful information.

To find what size payments are needed to pay off a loan of the amount "P" in "n" periods, the formula becomes this:

p = -Pi(1+i)^n / [(1+i)^n - 1]

If you want to find the number of periods that will be required to pay off a loan, use this formula:

n = [log(-p) - log(-Pi - p)] / log(1+i)

Keep in mind that the "i" in all these formulas is the interest rate per period. If you have been given an annual rate to work with, you can find the monthly rate by adding 1 to the annual rate, taking the 12th root of that number, and then subtracting 1. The formula is:

i = (r + 1) ^ (1/12) - 1

where "r" is the annual rate.
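A small Python sketch of the formulas above, keeping the article's sign convention that cash paid out ("P" and "p") is negative; the function names and example numbers are our own:

    # Future value of a savings plan with payments at the start of each period,
    # and the payment needed to retire a loan, per the formulas above.
    # Sign convention as in the text: cash paid out (P, p) is negative.
    def future_value(P, p, i, n):
        return -P * (1 + i) ** n - p * (1 + i) * ((1 + i) ** n - 1) / i

    def loan_payment(P, i, n):
        return -P * i * (1 + i) ** n / ((1 + i) ** n - 1)

    annual_rate = 0.06
    i = (1 + annual_rate) ** (1 / 12) - 1              # monthly rate from an annual rate
    print(future_value(P=-10000, p=-200, i=i, n=120))  # value after 10 years of saving
    print(loan_payment(P=200000, i=i, n=360))          # monthly payment (negative: cash paid out)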
Conversely, if you are working with a monthly rate--or any periodic rate--you may need to compound it to obtain a number you can compare apples-to-apples with other rates. For example, a 1 year CD paying 12% in simple interest is not as good an investment as an investment paying 1% compounded per month. If you put $1000 into each, you'll have $1120 in the CD at the end of the year but $1000*(1.01)^12 = $1126.82 in the other investment due to compounding. In this way, interest rates of any kind can be converted to a "simple 1-year CD equivalent" for the purposes of comparison. (See the article "Computing Compound Return" for more information.)

You cannot manipulate these formulas to get a formula for "i", but that rate can be found using any financial calculator, spreadsheet, or program capable of calculating Internal Rate of Return or IRR. Technically, IRR is a discount rate: the rate at which the present value of a series of investments is equal to the present value of the returns on those investments. As such, it can be found not only for equal, periodic investments such as those considered here but for any series of investments and returns. For example, if you have made a number of irregular purchases and sales of a particular stock, the IRR on your transactions will give you a picture of your overall rate of return.

For the matter at hand, however, the important thing to remember is that since IRR involves calculations of present value (and therefore the time-value of money), the sequence of investments and returns is significant.

Here's an example. Let's say you buy some shares of Wild Thing Conservative Growth Fund, then buy some more shares, sell some, have some dividends reinvested, even take a cash distribution. Here's how to compute the IRR. You first have to define the sign of the cash flows. Pick positive for flows into the portfolio, and negative for flows out of the portfolio (you could pick the opposite convention, but in this article we'll use positive for flows in, and negative for flows out). Remember that the only thing that counts are flows between your wallet and the portfolio. For example, dividends do NOT result in cash flow unless they are withdrawn from the portfolio. If they remain in the portfolio, be they reinvested or allowed to sit there as free cash, they do NOT represent a flow.

There are also two special flows to define. The first flow is positive and is the value of the portfolio at the start of the period over which IRR is being computed. The last flow is negative and is the value of the portfolio at the end of the period over which IRR is being computed.

The IRR that you compute is the rate of return per whatever time unit you are using. If you use years, you get an annualized rate. If you use (say) months, you get a monthly rate which you'll then have to annualize in the usual way, and so forth.

On to actually calculating it... We first have the net present value or NPV:

NPV(C, t, d) = Sum[i=0..N] C[i]/(1+d)^t[i]

where:
- C[i] is the i-th cash flow (C[0] is the first, C[N] is the last).
- d is the assumed discount rate.
- t[i] is the time between the first cash flow and the i-th. Obviously, t[0]=0 and t[N]=the length of time under consideration. Pick whatever units of time you like, but remember that IRR will end up being the rate of return per chosen time unit.

Given that definition, IRR is defined by the equation: NPV(C, t, IRR) = 0. In other words, the IRR is the discount rate which sets the NPV of the given cash flows made at the given times to zero.
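A direct transcription of that definition into Python (helper names are ours), using the same sign convention of flows into the portfolio as positive:

    # Net present value of cash flows C[i] made at times t[i], discounted at
    # rate d per time unit; a direct transcription of the formula above.
    def npv(cash_flows, times, d):
        return sum(c / (1 + d) ** t for c, t in zip(cash_flows, times))

    # Example: portfolio worth 1000 at the start, 200 added after half a year,
    # and worth 1350 at the end of year one (a negative, outgoing flow).
    print(npv([1000, 200, -1350], [0.0, 0.5, 1.0], 0.10))  # NPV at a 10% discount rate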
In general there is no closed-form solution for IRR. One must find it iteratively. In other words, pick a value for IRR. Plug it into the NPV calculation. See how close to zero the NPV is. Based on that, pick a different IRR value and repeat until the NPV is as close to zero as you care.

Note that in the case of a single initial investment and no further investments made, the calculation collapses into:

(Initial Value) - (Final Value)/(1+IRR)^T = 0

or

(Initial Value)*(1+IRR)^T - (Final Value) = 0
Initial*(1+IRR)^T = Final
(1+IRR)^T = Final/Initial

And finally the quite familiar:

IRR = (Final/Initial)^(1/T) - 1

You can probably calculate IRR in your favorite spreadsheet program. A little command-line program named 'irr' that calculates IRR is also available. See the article Software - Archive of Investment-Related Programs in this FAQ for more information.
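As a concrete version of the iterative search described above, here is a bisection sketch of ours (the bracketing range and tolerance are arbitrary choices, and it assumes the NPV changes sign somewhere inside the bracket, which holds for typical invest-first, withdraw-later flow patterns):

    # Find IRR by bisection: the discount rate at which the NPV of the flows is zero.
    def irr(cash_flows, times, low=-0.99, high=10.0, tol=1e-7, steps=200):
        def f(rate):
            return sum(c / (1 + rate) ** t for c, t in zip(cash_flows, times))
        f_low = f(low)
        mid = low
        for _ in range(steps):
            mid = (low + high) / 2
            f_mid = f(mid)
            if abs(f_mid) < tol:
                break
            # Keep the half of the bracket that still contains the sign change.
            if (f_low < 0) == (f_mid < 0):
                low, f_low = mid, f_mid
            else:
                high = mid
        return mid

    # Single investment of 1000 growing to 1300 over 2 years: the result should
    # match the closed form (1300/1000)**(1/2) - 1, about 0.14 (14% per year).
    print(irr([1000, -1300], [0, 2]))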
0.9039
FineWeb
This section highlights the ways in which new and ongoing National Institute on Aging (NIA)-supported programs, centers, and collaborative efforts are advancing Alzheimer’s research. A key component of the Federal research program for Alzheimer’s disease is to create and sustain an infrastructure that supports and enhances scientific discovery and translation of discoveries into Alzheimer’s disease prevention and treatment. NIA’s coordinating mechanisms and key initiatives are central to this effort. Specifically, important advances are being made by supporting high-quality research, from which data can be pooled and shared widely and efficiently through a well-established Alzheimer’s disease research infrastructure. The infrastructure and initiatives described in this report seek to bring together researchers and Alzheimer’s interests by: - Convening and collaborating in workshops addressing new scientific areas - Working across NIH to vigorously discuss new science and opportunities for new investment - Partnering with other Federal agencies, not-for-profit groups, and industry in the shared goals of improved treatments, new prevention strategies, and better programs for people with Alzheimer’s and their caregivers. The current research infrastructure supported by NIH includes: NIA Intramural Research Program (NIA IRP). In addition to funding a broad portfolio of aging-related and Alzheimer’s research at institutions across the country, NIA supports its own laboratory and clinical research program, based in Baltimore and Bethesda, MD. The NIA IRP focuses on understanding age-related changes in physiology and behavior, the ability to adapt to biological and environmental stresses, and the pathophysiology of age-related diseases such as Alzheimer’s. Laboratory research ranges from studies in basic biology, such as neurogenetics and cellular and molecular neurosciences, to examinations of personality and cognition. The IRP also conducts clinical trials to test possible new interventions for cognitive decline and Alzheimer’s disease. The IRP leads the Baltimore Longitudinal Study of Aging (BLSA), America’s longest-running scientific study of human aging, begun in 1958, which has provided valuable insights into cognitive change with age. The IRP’s Laboratory of Behavioral Neuroscience is identifying brain changes that may predict age-related declines in memory or other cognitive functions. Using brain imaging techniques, such as magnetic resonance imaging, which measures structural changes, and positron emission tomography scans, which measure functional changes, IRP researchers are tracking memory and cognitive performance over time to help identify both risk and protective factors for dementia. For example, an IRP study involving more than 500 BLSA participants uses brain imaging, biomarkers, and cognitive assessments to track changes in cognitive function in people who do not develop Alzheimer’s and in those who develop cognitive impairment and dementia. Additionally, IRP researchers help identify potential drug targets for Alzheimer’s disease, screening candidate drugs for efficacy in cell culture or animal models. The most effective compounds are moved through preclinical studies to clinical trials. IRP researchers also collaborate with academia and industry to develop agents that show promise as an Alzheimer’s intervention. Industry has licensed patents covering a variety of novel compounds from NIA for preclinical and clinical development. 
Alzheimer's Disease Centers (ADCs). NIA-supported research centers form the backbone of the national Alzheimer's disease research effort. These multidisciplinary centers, located at 27 institutions nationwide, promote research, training and education, and technology transfer. Thanks to the participation of people in their communities, the Centers conduct longitudinal, multi-center, collaborative studies of Alzheimer's disease diagnosis and treatment, age-related neurodegenerative diseases, and predictors of change in people without dementia that may indicate the initial stages of disease development. The ADCs also conduct complementary studies, such as imaging studies and autopsy evaluations. All participants enrolled in the Centers receive a standard annual evaluation. Data from these evaluations are collected and stored by the National Alzheimer's Coordinating Center (NACC; see below) as the Uniform Data Set. The ADCs serve as sites for a number of major studies, such as national clinical trials and imaging and biomarker research.

Alzheimer's Disease Translational Research Program: Drug Discovery, Preclinical Drug Development, and Clinical Trials. NIA has a longstanding commitment to translational research for Alzheimer's disease. In 2005, the Institute put this effort into high gear by launching a series of initiatives aimed at supporting all steps of drug discovery through clinical development. The program's goal is to seed preclinical drug discovery and development projects from academia and from small biotechnology companies and, in doing so, to increase the number of investigational new drug candidates that can be tested in humans. This strategic investment has led to the relatively rapid creation of a large, diverse portfolio of projects aimed at discovery and preclinical development of novel candidate therapeutics. To date, NIA has supported more than 60 early drug discovery projects and 18 preclinical drug development projects through this program. Fifteen of the 18 preclinical drug development projects are for compounds against non-amyloid therapeutic targets, such as tau, ApoE4, pathogenic signaling cascades, and neurotransmitter receptors. Four candidate compound projects have advanced to the clinical development stage. This program supports outreach and education activities held at regular investigators' meetings and at an annual drug discovery training course organized by the Alzheimer's Drug Discovery Foundation. These meetings provide much-needed networking opportunities for NIA-funded investigators and industry and regulatory experts, as well as education of a new cadre of academic scientists. Two major program initiatives are:

- Alzheimer's Disease Pilot Clinical Trials Initiative. This ongoing initiative, begun in 1999, seeks to increase the number and quality of preliminary clinical evaluations of interventions for Alzheimer's, mild cognitive impairment, and age-associated cognitive decline. These trials are investigating drug and nondrug prevention and treatment interventions. The goal is not to duplicate or compete with the efforts of pharmaceutical companies but to encourage, complement, and accelerate the process of testing new, innovative, and effective treatments.
The National Institute of Nursing Research, part of NIH, also participates in this initiative. See Testing Therapies to Treat, Delay, or Prevent Alzheimer’s Disease to learn more about the trials and to see a complete list of treatment and prevention trials. - Alzheimer’s Disease Cooperative Study (ADCS). NIA launched the ADCS in 1991 to develop and test new interventions and treatments for Alzheimer’s disease that might not otherwise be developed by industry. Currently operated under a cooperative agreement with the University of California, San Diego, this large clinical trials consortium comprises more than 70 sites throughout the United States and Canada. The ADCS focuses on evaluating interventions that will benefit Alzheimer’s patients across the disease spectrum. This work includes testing agents that lack patent protection, agents that may be useful for Alzheimer’s but are under patent protection and marketed for other indications, and novel compounds developed by individuals, academic institutions, and small biotech companies. The ADCS also develops new evaluation instruments for clinical trials and innovative approaches to clinical trial design. Since its inception, the ADCS has initiated 32 research studies (25 drug and 7 instrument development protocols.) The ADCS also provides infrastructure support to other federally funded clinical efforts, including the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Dominantly Inherited Alzheimer Network (DIAN). (Read more about these studies below.) National Alzheimer’s Coordinating Center (NACC). NIA established the NACC in 1999 with the goal of pooling and sharing data on participants in ADC studies. By 2005, NACC had collected data, including neuropathological data from 10,000 brain autopsies, from some 77,000 ADC study participants. NACC then added clinical evaluations and annual follow-ups to its protocol, enriching the database with detailed longitudinal data from 26,500 participants and 2,100 brain autopsies. The data are available to Alzheimer’s researchers worldwide. NACC data are helping to reveal different symptom patterns in different subsets of people with Alzheimer’s, patterns that would not have become apparent without analyzing a data set of this size. NACC also helps coordinate other NIA efforts, such as the identification and selection of appropriate post mortem material collected at ADCs to send to the National Cell Repository for Alzheimer’s Disease. National Cell Repository for Alzheimer’s Disease (NCRAD). This NIA-funded repository located at Indiana University Medical Center in Indianapolis, provides resources that help researchers identify the genes that contribute to Alzheimer’s and other types of dementia. NCRAD collects and maintains biological specimens and associated data on study volunteers from a variety of sources, primarily people enrolled at the ADCs as well as those in ADNI, the Alzheimer’s Disease Genetics Consortium, and other studies. NCRAD also houses DNA samples and data from more than 900 families with multiple members affected by Alzheimer’s. Qualified research scientists may apply to NCRAD for samples and data to conduct genetic research. Since it was funded 22 years ago, more than 150,000 biological samples have been requested and sent to more than 120 investigators and cores across the world. NIA Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS). Located at the University of Pennsylvania, NIAGADS is a Web-based warehouse for Alzheimer’s disease genetic data. 
All genetic data derived from NIA-funded studies on the genetics of late-onset Alzheimer's are deposited at NIAGADS, another NIA-approved site, or both. NIAGADS currently houses 22 data sets with nearly 44,000 subjects and more than 24 billion genotypes. Data from genome-wide association studies (GWAS) that are stored at NIAGADS are also made available through the database of Genotype and Phenotype (dbGaP) at the National Library of Medicine's National Center for Biotechnology Information, which was established to archive and distribute the results of large-scale GWAS analyses. Through dbGaP, data sets from multiple GWAS done on different platforms can be merged, and data from thousands of study participants can be analyzed together, increasing the probability of gene discovery.

Alzheimer's Disease Education and Referral (ADEAR) Center. Congress created the ADEAR Center in 1990 to compile, archive, and disseminate information concerning Alzheimer's disease for people with Alzheimer's disease, their families, health professionals, and the public. Operated by NIA, the ADEAR Center is a current and comprehensive resource for Alzheimer's disease information and referrals. All of its information about research and materials on causes, diagnosis, treatment, prevention, and caregiving are carefully researched, evidence-based, and reviewed for accuracy and integrity.

NIA supports and participates in several innovative research initiatives that are crucial to the advancement of Alzheimer's research. These include highly collaborative and international efforts to uncover the basic mechanisms of Alzheimer's disease, the biomarkers that signal stages of the disease, and efforts to better understand the aging brain. These research initiatives include:

Alzheimer's Disease Neuroimaging Initiative (ADNI). NIA launched this groundbreaking initiative in 2004. It is the largest public-private partnership to date in Alzheimer's disease research, receiving generous support from private-sector companies and foundations through the Foundation for the National Institutes of Health. ADNI's goal is to find neuroimaging and other biological markers that can detect disease progression and measure the effectiveness of potential therapies. In the first phase of ADNI, researchers recruited 800 participants, a mix of cognitively healthy people and those with Alzheimer's disease or MCI. To speed the pace of analysis and findings, ADNI investigators agreed to make their collected data widely available. Magnetic resonance imaging and positron emission tomography brain images as well as clinical, genetic, and fluid biomarker data are available to qualified researchers worldwide through a Web-based database. Findings from this initiative have generated excitement about using brain and fluid biomarkers to identify people at risk for developing Alzheimer's or to characterize the pace of deterioration. Accomplishments include new findings about how changes in the structure of the hippocampus may help gauge disease progression and the effectiveness of potential treatments, and the establishment of biomarker and imaging measures that predict risk for cognitive decline and conversion to dementia. A follow-on effort, ADNI-GO, was launched with American Recovery and Reinvestment Act funds in 2009, followed by ADNI 2 in 2010.
ADNI 2 builds on the success of earlier ADNI phases to identify the earliest signs of Alzheimer’s disease. It set a 5-year goal to recruit 550 volunteers, age 55 to 90, at 55 sites in the United States and Canada. The volunteers include people with no apparent memory problems, people with early and late MCI, and people with mild Alzheimer’s disease. The volunteers will be followed to help define the changes in brain structure and function that take place when they transition from normal cognitive aging to MCI, and from MCI to Alzheimer’s dementia. The study uses imaging techniques and biomarker measures in blood and cerebrospinal fluid specially developed to track changes in the living brain. Researchers hope to identify who is at risk for Alzheimer’s, track progression of the disease, and devise tests to measure the effectiveness of potential interventions. ADNI2 continues to follow participants recruited for the other ADNI cohorts. ADNI has been remarkably fruitful. To date, more than 430 papers using ADNI data have been published from investigators around the world, and many more will come as more data are collected and analyzed. The success of ADNI has also inspired similar efforts in Europe, Japan, and Australia. Dominantly Inherited Alzheimer’s Disease Network (DIAN). NIA launched this 6-year study in 2008 to better understand the biology of early-onset Alzheimer’s, a rare, inherited form of the disease that can occur in people in their 30s, 40s, and 50s. People born with a certain gene mutation not only develop Alzheimer’s disease before age 60 but have a 50–50 chance of passing the gene on to their children. When Alzheimer’s disease is caused by a genetic mutation, about 50 percent of the people in the family tree get the illness before age 60. Scientists involved in this collaborative, international effort hope to recruit 300 adult children of people with Alzheimer’s disease to help identify the sequence of brain changes that take place before symptoms appear. By understanding this process, researchers hope to gain additional insights into the more common late-onset form of the disease. Until DIAN, the rarity of the condition and geographic distances between affected people and research centers hindered research. Today, volunteers age 18 and older with at least one biological parent with the disease are participating in DIAN at a network of 13 research sites in the United States, England, Germany, and Australia. Each participant receives a range of assessments, including genetic analysis, cognitive testing, and brain scans, and donates blood and cerebrospinal fluid so scientists can test for biomarkers. DIAN researchers are building a shared database of the assessment results, samples, and images to advance knowledge of the brain mechanisms involved in Alzheimer’s, eventually leading to targets for therapies that can delay or even prevent progress of the disease. The study is led by the ADC at Washington University School of Medicine in St. Louis. Alzheimer’s Disease Genetics Initiative (ADGI) and Alzheimer’s Disease Genetics Consortium (ADGC). The study of Alzheimer’s disease genetics is complicated by the likelihood that the risk of late-onset Alzheimer’s is influenced by many genes, each of which probably confers a relatively small risk. Identifying these genes requires analyzing the genomes of large numbers of people. ADGI was launched in 2003 to identify at least 1,000 families with multiple members who have late-onset Alzheimer’s as well as members who do not. 
In 2009, NIA funded the ADGC to support the use of large-scale, high-throughput genetics technologies, which allow the analysis of large volumes of genetic data, needed by researchers studying late-onset Alzheimer’s. These initiatives are achieving important results. The ADGC, for example, was one of the founding partners of a highly collaborative, international group that announced the identification of 11 new Alzheimer’s risk genes in 2013. Combining previously studied and newly collected DNA data from 74,076 older volunteers with Alzheimer’s and those free of the disease from 15 countries, the research offers important new insights into the disease pathways involved in Alzheimer’s disease. Research Partnership on Cognitive Aging. Through the Foundation for the National Institutes of Health, NIA and the McKnight Brain Research Foundation established the Research Partnership on Cognitive Aging in 2007 to advance our understanding of healthy brain aging and function. The partnership is currently supporting grants funded through two research Requests for Applications: “Neural and Behavioral Profiles of Cognitive Aging” and “Interventions to Remediate Age-related Cognitive Decline.” To date, Partnership-supported researchers have published 107 scientific papers. The Partnership, with co-sponsorship from the National Center for Complementary and Alternative Medicine and the NIH Office of Behavioral and Social Sciences Research, released a new Request for Applications in late 2013, “Plasticity and Mechanisms of Cognitive Remediation in Older Adults,” and expects to award grants in summer 2014. This public-private collaboration is expanding its outreach. In 2013, the McKnight Brain Research Foundation, with co-sponsorship from NIA and the National Institute of Neurological Disorders and Stroke, AARP, and the Retirement Research Foundation, contracted with the Institute of Medicine to conduct “Public Health Dimensions of Cognitive Aging.” The study is examining cognitive health and aging with a focus on epidemiology and surveillance, prevention and intervention opportunities, education of health professionals, and new approaches to enhance awareness and disseminate information to the public. The technical report, including commissioned papers, conclusions, and recommendations, will be released in 2015. NIH Toolbox for Assessment of Neurological and Behavioral Function. Supported by the NIH Blueprint for Neuroscience Research and the NIH Office of Behavioral and Social Sciences Research, researchers developed this set of brief tests to assess cognitive, sensory, motor, and emotional function, particularly in studies that enroll many people, such as epidemiological studies and clinical trials. These royalty-free tests, developed under a contract with NIH, were unveiled in September 2012. Available in English and Spanish and applicable for use in people age 3 to 85 years, the measures enable direct comparison of cognitive and other abilities at different ages across the lifespan. Human Connectome Project. The NIH Blueprint for Neuroscience Research, a group of 15 NIH institutes and offices engaged in brain-related research, started the Human Connectome Project in 2010 to develop and share knowledge about the structural and functional connectivity of the healthy human brain. This collaborative effort uses cutting-edge neuroimaging instruments, analysis tools, and informatics technologies to map the neural pathways underlying human brain function. 
Investigators will map these connectomes in 1,200 healthy adults—twin pairs and their siblings—and will study anatomical and functional connections among regions of the brain. The data gathered will be related to behavioral test data collected using another NIH Blueprint research tool, the NIH Toolbox for Assessment of Neurological and Behavioral Function (see above), and to data on participants’ genetic makeup. The goals are to reveal the contributions of genes and environment in shaping brain circuitry and variability in connectivity and to develop faster, more powerful imaging tools. Advancing our understanding of normal brain connectivity may one day inform Alzheimer’s research. BRAIN Initiative. The NIH Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative is part of a new Presidential focus aimed at revolutionizing our understanding of the human brain. The BRAIN Initiative aims to accelerate work on technologies that give a dynamic picture of how individual cells and complex neural circuits interact in real time. The ultimate goal is to enhance understanding of the brain and improve prevention, diagnosis, and treatment of brain diseases such as Alzheimer’s. The National Science Foundation and Defense Advanced Research Projects Agency are partnering with NIH in this initiative.
0.636
FineWeb
From the World Wide Web to the groundbreaking game Wolfenstein 3D, here are some of the most famous software innovations that were built using the NeXT computer operating system called NeXTSTEP. After Steve Jobs was forced out of Apple in 1985, he helped build two other companies, Pixar and NeXT. Pixar, of course, went on to produce a string of animated blockbuster films starting with Toy Story in 1995. But what about NeXT? In the history of software, the company NeXT existed for only a short but very influential period of time. But during that time, the NeXTSTEP computer operating system helped create some of the most famous software innovations in history. The First Web Server And Web Browser: 1990 On March 12, 1989, Tim Berners-Lee submitted a proposal titled “Information Management: A Proposal” detailing the first concept of the World Wide Web. His boss, Mike Sendall, found the idea worthy enough to approve the purchase of one of the first NeXTcube computers in 1990. The retail price for the NeXT computer in 1990 was $10,000 each. Adjusting for inflation, $10,000 in the year 2020 is about $20,000. Tim Berners-Lee then used this NeXT computer to create the first-ever web browser and web server. The web-based Internet as we know it today was created using a NeXTcube computer. To help keep the web online, he had to attach a sticker on the side of the computer warning others not to turn it off. At the time, turning off that computer would have essentially turned off the World Wide Web. The actual NeXTcube computer that Tim Berners-Lee used to create the web is now on display at the Science Museum in London, UK. Wolfenstein 3D, Doom, And Quake: 1992-1996 In the early 1990s, computer programmer John Carmack used the NeXT operating system to build three of the most groundbreaking video game series of the decade: Wolfenstein 3D (1992), Doom (1993), and Quake (1996). Wolfenstein 3D was the first 3D first-person shooter game in history. Its successor, DOOM, was a mega-hit and immediately paved the way for a series of popular 3D shooters including Marathon (1994), Star Wars: Dark Forces (1995), Duke Nukem 3D (1996), GoldenEye 007 (1997), Half-Life (1998), Unreal (1998), and Halo (2001), to name a few. Carmack followed up the success of Doom with Quake in 1996. Quake featured real-time 3D rendering technology, multiplayer deathmatches and a soundtrack by Trent Reznor‘s band Nine Inch Nails. Display PostScript (DPS): 1987 In the late 1980s, developers at Adobe and NeXT collaborated to create a new 2D graphics engine system for the NeXT computer operating system called Display PostScript (DPS). At the time in 1987, no other computer system had object-oriented capabilities able to handle this advanced display technology. The technology was originally developed for computer printing but was useful in everything from graphic design to the user interface in applications and operating systems. CyberSlice: The First Online Food Delivery System: 1995 Decades before DoorDash and Grubhub, NeXT technology helped create the first online food delivery system in history, called CyberSlice. Steve Jobs got the idea after seeing Sandra Bullock‘s character in the 1995 film, The Net, order a pizza online. Jobs decided to make the Hollywood concept a reality and used NeXT computers and GIS-based geolocation technology to place the first online food order in history. What did he order? A pizza with tomato and basil. 
Materials connected to the CyberSlice project were curated into the “Inventions of the 20th Century, Computer Science” collection at the Smithsonian Institution in Washington DC. Apple Operating Systems In the mid-1990s, Apple had a serious problem to solve. They needed to make a major advancement in their operating system and were struggling to find a worthy successor to the classic Mac OS. Both the BeOS and Copland were contenders but weren’t strong enough to move forward with. However, in the decade that Steve Jobs was away from Apple, his company NeXT created a product so advanced that it had a client list that included Dell, Disney, the National Security Agency (NSA), the Central Intelligence Agency (CIA), BBC, and the National Reconnaissance Office, among others. NeXTSTEP was the obvious choice to be the successor to the classic Mac OS. In 1997, Apple acquired NeXT for $429 million. That deal not only gave Apple NeXT’s revolutionary operating system, NeXTSTEP, but it also brought Steve Jobs back to the company. There are countless features and applications from NeXTSTEP that you can still find in the Apple operating system family today, including Mac OS X, macOS, iOS, iPadOS, watchOS, and tvOS. Although there’s a lot going on under the hood, visible interface elements like the dock, spinning beach ball, and column view, as well as applications such as TextEdit and Chess, are descendants of NeXTSTEP applications. Famous Achievements In Software History That Were Built Using The NeXT Operating System The NeXT operating system only existed from 1989 to 1997. But during that short time, it was responsible for several noteworthy achievements in computing and software history. Did you own a NeXT computer? Please tell us about your experiences in the comments or tweet us at @methodshop.
0.5139
FineWeb
Where do you want to go on your next family day out? Space? Why not! The Adler Planetarium is a great place in Chicago to explore what is out there above Earth's atmosphere. It's also America's FIRST planetarium, founded all the way back in 1930, so there is a fascinating history to learn about before you even get to a telescope. If you love all things space, you will love the exhibits that the Adler Planetarium has for you. Have you ever looked up into the sky and wondered what else there is to know about the moon? Mission Moon has the answer! You can take a journey and discover all the dangers and thrills of what it really means for those astronauts to take a trip to the moon. There are exhibits which give kids a chance to learn about the history of cultures of the world too. Astronomy In Culture looks at what other cultures of the past thought about the moon - from South America to Egypt. Even the Middle East! The moon has been around since long before even the dinosaurs, so just think - all those great figures of the past will have seen the very same moon that you are looking at! You're in the wonderful city of Chicago, so it's only fitting there is an exhibit on what the sky above Chicago in 1913 would have looked like. It's true, for all those space buffs, that the stars wouldn't have changed positions. BUT - there was a lot less light pollution, so when you looked up into the sky over 100 years ago, it would have looked like a blanket of stars! Can you imagine looking up now and seeing something so amazing? From planets to the solar system, there are plenty of exhibits covering some pretty amazing topics! The Adler Planetarium also has overnight stays, after-school hangouts, and three theaters where you can catch some pretty cool films about space! Are you excited to explore space?
0.5252
FineWeb
Double vision (diplopia) When a person experiences double vision, or diplopia, they see two images of the same thing at the same time. Double vision may be a long-term problem, or the symptoms may come and go. Double vision may affect a person's ability to drive safely and the DVLA may need to be told about the condition. What causes diplopia or double vision? Opening your eyes and seeing a single clear image is something you probably take for granted. But that seemingly automatic process depends on the orchestration of multiple areas of the vision system. They all need to work together seamlessly: - The cornea is the clear outermost disc covering the eye. It allows in light. - The lens is behind the pupil. It focuses light onto the retina. - Muscles of the eye, called extraocular muscles, perform the eye's precise movements. - Nerves carry visual information from the eyes to the brain. - The brain is where several areas process visual information from the eyes. Problems with any part of the vision system can lead to diplopia. It makes sense to consider the causes of diplopia according to the part of the visual system that has the problem. Cornea problems. Problems with the cornea often cause double vision in one eye only. Covering the affected eye makes the diplopia go away. The damaged surface of the eye distorts incoming light, causing double vision. Damage can happen in several ways: - Infections of the cornea, such as shingles ( herpes zoster), can distort the cornea. - An uncommon complication of LASIK surgery ( laser eye surgery) can leave one cornea altered, creating unequal visual images. Lens problems. Cataracts are the most common problem with the lens that causes double vision. If cataracts are present in both eyes, images from both eyes will be distorted. Cataracts are often correctable with surgery. Muscle problems. If a muscle in one eye is weak, that eye can't move smoothly with the healthy eye. Gazing in directions controlled by the weak muscle causes double vision. Muscle problems can result from several causes: - Myasthenia gravis is an autoimmune illness that blocks the stimulation of muscles by nerves inside the head. The earliest signs are often double vision and drooping eyelids (ptosis). - Graves' disease is a thyroid condition that weakens the muscles of the eyes. Graves' disease commonly causes vertical diplopia. With vertical diplopia, one image is on top of the other. Nerve problems. Several different conditions can damage the nerves and lead to double vision: - Multiple sclerosis can affect nerves anywhere in the brain or spinal cord. If the nerves controlling the eyes are damaged, double vision can result. - Guillain-Barre syndrome is a nerve condition that causes progressive weakness. Sometimes, the first symptoms occur in the eyes and cause double vision. - Uncontrolled diabetes can lead to nerve damage in one of the eyes, causing eye weakness and diplopia. Brain problems. The nerves controlling the eyes connect directly to the brain. Further visual processing takes place inside the brain. Many different causes for diplopia originate in the brain. They include:
0.9988
FineWeb
What people are saying Muhammad Iqbal occupies a unique place in history, not only because of his poetry but also because of his political contribution and universal appeal. He was firmly opposed to slavery and likened it to death. Iqbal held the view that when a person recognises his hidden potential, he becomes capable of apprehending the Creator of the universe and its mystery. Anyone who recognises the existence of God cannot tolerate the rule of any other entity, human or non-human. His Persian poetry is more penetrating and deeply rooted than his Urdu verse. Nicholson devoted ceaseless effort to reaching the depth of Iqbal's message, translating Iqbal's poetry from Persian into standard English. Many other writers have tried to translate Iqbal's verses, but Nicholson stands out. Muhammad Ayub Munir
0.5171
FineWeb
US 7576043 B2 A wellbore fluid comprising a surfactant, the surfactant having the formula (R1—X)nZ, wherein R1 is an aliphatic group—comprising a C18-C22 principal straight chain bonded at a terminal carbon atom thereof to X, and comprising at least one C1-C2 side chain—X is a charged head group, Z is a counterion, and n is an integer which ensures that the surfactant is charge neutral, and wherein the charged head group X is selected to provide that the surfactant is soluble in oil and at least one part of the charged head group is anionic. 1. A wellbore fluid configured for use in hydrocarbon recovery, comprising an aqueous solution of: a surfactant, the surfactant in said solution consisting of a thickening amount of surfactant which is soluble in aqueous solutions and has the formula R1 is an aliphatic group comprising a C16-C24 principal straight chain bonded at a terminal carbon atom thereof to X, and comprising at least one C1 or C2 side chain; and X being a charged head group, Z being a counterion, and n being an integer which ensures that the surfactant is charge neutral; and wherein: the charged head group X is selected to provide that the surfactant is soluble in oil; and at least one part of the charged head group is anionic wherein the wellbore fluid is a viscoelastic gel and wherein said gel undergoes a reduction in viscosity on contact with oil. 2. The wellbore fluid according to 3. The wellbore fluid according to 4. The wellbore fluid according to 5. The wellbore fluid according to This application claims the benefit of and is a continuation of U.S. application Ser. No. 10/343,401 U.S. Pat. No. 7,196,041 filed on Oct. 15, 2003, which is incorporated by reference in its entirety for all purposes. The present invention relates to a surfactant, and in particular to a surfactant thickening agent for use in hydrocarbon recovery. In the recovery of hydrocarbons, such as oil and gas, from natural hydrocarbon reservoirs, extensive use is made of wellbore fluids such as drilling fluids, completion fluids, work over fluids, packer fluids, fracturing fluids, conformance or permeability control fluids and the like. In many cases significant components of wellbore fluids are thickening agents, usually based on polymers or viscoelastic surfactants, which serve to control the viscosity of the fluids. Typical viscoelastic surfactants are N-erucyl-N,N-bis(2-hydroxyethyl)-N-methyl ammonium chloride and potassium oleate, solutions of which form gels when mixed with corresponding activators such as sodium salicylate and potassium chloride. The surfactant molecules are characterized by having one long hydrocarbon chain per surfactant headgroup. In the viscoelastic gelled state these molecules aggregate into worm-like micelles. Gel breakdown occurs rapidly when the fluid contacts hydrocarbons which cause the micelles to change structure or disband. In practical terms the surfactants act as reversible thickening agents so that, on placement in subterranean reservoir formations, the viscosity of a wellbore fluid containing such a surfactant varies significantly between water- or hydrocarbon-bearing zones of the formations. In this way the fluid is able preferentially to penetrate hydrocarbon-bearing zones. The use of viscoelastic surfactants for fracturing subterranean formations is discussed in EP-A-0835983. A problem associated with the use of viscoelastic surfactants is that stable oil-in-water emulsions are often formed between the low viscosity surfactant solution (i.e. 
broken gel) and the reservoir hydrocarbons. As a consequence, a clean separation of the two phases can be difficult to achieve, complicating clean up of wellbore fluids. Such emulsions are believed to form because conventional wellbore fluid viscoelastic surfactants have little or no solubility in organic solvents. A few anionic surfactants exhibit high solubility in hydrocarbons but low solubility in aqueous solutions. A well known example is sodium bis(2-ethylhexyl) sulphosuccinate, commonly termed aerosol OT or AOT (see K. M. Manoj et al., Langmuir, 12, 4068-4072, (1996)). However, AOT does not form viscoelastic solutions in aqueous media, e.g. the addition of salt causes precipitation. A number of cationic surfactants, based on quaternary ammonium and phosphonium salts, are known to exhibit solubility in water and hydrocarbons and as such are frequently used as phase-transfer catalysts (see C. M. Starks et al., Phase-Transfer Catalysis, pp. 125-153, Chapman and Hall, New York (1994)). However, those cationic surfactants which form viscoelastic solutions in aqueous media are poorly soluble in hydrocarbons, and are characterized by values of Kow very close to zero, Kow being the partition coefficient for a surfactant in oil and water (Kow=Co/Cw, where Co and Cw are respectively the surfactant concentrations in oil and water). Kow may be determined by various analytical techniques, see e.g. M. A. Sharaf, D. L. Illman and B. R. Kowalski, Chemometrics, Wiley Interscience, (1986), ISBN 0471-83106-9. Typically, high solubility of the cationic surfactant in hydrocarbon solvents is promoted by multiple long-chain alkyl groups attached to the head group, as found e.g. in hexadecyltributylphosphonium and trioctylmethylammonium ions. In contrast, cationic surfactants which form viscoelastic solutions generally have only one long unbranched hydrocarbon chain per surfactant headgroup. The conflict between the structural requirements for achieving solubility in hydrocarbons and for the formation of viscoelastic solutions generally results in only one of these properties being achieved. An object of the present invention is to provide a surfactant which is suitable for reversibly thickening water-based wellbore fluids and is also soluble in both organic and aqueous fluids. A first aspect of the present invention provides a surfactant having the formula (R1—X)nZ. R1 is an aliphatic group comprising a principal straight chain bonded at a terminal carbon atom thereof to X, the straight chain having a length such that a viscoelastic gel is formable by the surfactant in aqueous media; and further comprising at least one side chain (the carbon atoms of the side chain not being counted with the carbon atoms of the principal straight chain) which is shorter than said principal straight chain, said side chain enhancing the solubility of the surfactant in hydrocarbons, and being sufficiently close to said head group and sufficiently short such that the surfactant forms micelles in said viscoelastic gel. X is a charged head group, Z is a counterion, and n is an integer which ensures that the surfactant is charge neutral. Preferably the principal straight chain is a C16-C24 straight chain. Preferably the side chain is a C1-C2 side chain. 
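Since the partition coefficient is defined above as Kow = Co/Cw, the bookkeeping is trivial; the short Python sketch below simply restates that definition, using hypothetical concentration values rather than measurements from the patent:

def partition_coefficient(conc_oil, conc_water):
    """Kow = Co / Cw: ratio of the surfactant concentration in the oil phase
    to its concentration in the aqueous phase after the phases separate."""
    if conc_water <= 0:
        raise ValueError("aqueous-phase concentration must be positive")
    return conc_oil / conc_water

# Hypothetical example: 0.9 g/L measured in heptane versus 8.0 g/L in water
print(round(partition_coefficient(0.9, 8.0), 2))  # 0.11, the order of magnitude quoted later for the branched oleate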
X may be a carboxylate (—COO−), quaternary ammonium (—NR2R3R4 +), sulphate (—OSO3 −), or sulphonate (—SO3 −) charged group; N being a nitrogen atom, and R2, R3 and R4 being C1-C6 aliphatic groups, or one of R2, R3 and R4 being a C1-C6 aliphatic group and the others of R2, R3 and R4 forming a five-or six-member heterocylic ring with the nitrogen atom. When X is a carboxylate, sulphate, or sulphonate group, Z may be an alkali metal cation (in which case n is one) or an alkaline earth metal cation (in which case n is two). Preferably Z is Na+ or K+. When X is a quaternary ammonium group, Z may be a halide anion, such as Cl− or Br−, or a small organic anion, such as a salicylate. In both these cases n is one. Preferably the principal straight chain is a C16-C24 chain. More preferably it is a C18 or a C22 chain. We have found that surfactants of this type are suitable for use as wellbore thickening agents, being soluble in both water and hydrocarbon-based solvents but retaining the ability to form aqueous viscoelastic solutions via micellar aggregation. This combination of properties is believed to be caused by the branching off from the principal straight chain of the C1-C6 side chain. The side chain apparently improves the solubility in hydrocarbon solvents by increasing the hydrophobicity of the R1 aliphatic group. By “viscoelastic”, we mean that the elastic (or storage) modulus G′ of the fluid is greater than the loss modulus G″ as measured using an oscillatory shear rheometer (such as a Bohlin CVO 50) at a frequency of 1 Hz. The measurement of these moduli is described in An Introduction to Rheology, by H. A. Barnes, J. F. Hutton, and K. Walters, Elsevier, Amsterdam (1997). In use, the enhanced solubility of the surfactant in hydrocarbon-based solvents can reduce the tendency for an emulsion to form between reservoir hydrocarbons and a broken surfactant gel based on the surfactant. It may also inhibit the formation of emulsions by natural surfactants in crude oil, such as naphthenic acids and asphaltenes. Additionally, dissolution of at least some of the surfactant molecules into the reservoir hydrocarbons can speed up breakdown of the gel. Preferably, the side chain is a C1-C2 chain. We have found that, surprisingly, the solubility of the surfactant in hydrocarbon tends to increase as the size of the side chain decreases. We believe this is because smaller side chains cause less disruption to the formation of inverse micelles by the surfactant in the hydrocarbon, such inverse micelles promoting solubility in the hydrocarbon. By altering the degree and type of branching from the principal straight chain, the surfactant can be tailored to be more or less soluble in a particular hydrocarbon. However, preferably the side chain is bonded to said terminal (α), neighbouring (β) or next-neighbouring (γ) carbon atom of the principal chain. More preferably it is bonded to the α carbon atom. We believe that locating the side chain close to the charged head group promotes the most favourable combinations of viscoelastic and solute properties. Preferably the side chain is a methyl or ethyl group. There may be two side groups, e.g. a methyl and an ethyl group bonded to the α carbon atom. The principal straight chain may be unsaturated. Preferably the surfactant is an alkali metal salt of 2-methyl oleic acid or 2-ethyl oleic acid. 
A second aspect of the invention provides a viscoelastic surfactant having a partition coefficient, Kow, of at least 0.05, Kow being measured at room temperature with respect to heptane and water. More desirably Kow is in the range from 0.05 to 1 and most desirably it is in the range 0.05 to 0.5. The surfactant may be a surfactant of the first aspect of the invention. A third aspect of the invention provides an acid surfactant precursor to the surfactant of the first aspect of the invention, the acid surfactant precursor having the formula R1—Y. R1 is an aliphatic group comprising a C10-C25 principal straight chain bonded at a terminal carbon atom thereof to Y, and comprising at least one C1-C2 side chain. Y is a carboxylate (—COOH), sulphate (—OSO3H), or sulphonate (—SO3H) group. In solution, acid surfactant precursors can be converted to the salt form, e.g. by neutralisation with the appropriate alkali or by the addition of the appropriate salt, to form surfactants of the first aspect of the invention. A fourth aspect of the present invention provides a wellbore fluid comprising: (b) a thickening amount of the surfactant of the first or second aspect of the invention, and (c) an effective amount of a water-soluble, inorganic salt thickening activator. Preferably the thickening activator is an alkali metal salt, such as KCl. The surfactant is typically present in the fluid in a concentration of from 0.5 to 10 wt % (and more typically 0.5 to 5 wt %) and the thickening activator is typically present in the fluid in a concentration of from 1 to 10 wt %. Desirably the wellbore fluid has a gel strength in the range 3 to 5 at room temperature, the gel strength falling to a value of 1 on contact with hydrocarbons such as heptane. Desirably the wellbore fluid has a viscosity in the range 20 to 1000 (preferably 100 to 1000) centipoise in the shear rate range 0.1-100 (preferably 0.1-1000) s−1 at 60° C., the viscosity falling to a value in the range 1 to 200 (preferably 1 to 50) centipoise on contact with hydrocarbons such as heptane, the viscosity being measured in accordance with German DIN standard 53019. A fifth aspect of the present invention provides for use of the wellbore fluid of the fourth aspect of the invention as a fracturing fluid, a lubricant or an emulsion breaker. Specific embodiments of the present invention will now be described with reference to the following drawings in which: Synthetic routes to α-, β- and γ-branched derivatives of various fatty acids are shown schematically in A first step in a preparation of an α-branched derivative of a C10-C25 straight chain acid is the formation of an α-branch on the methyl ester of the acid. The α-branched ester can then be saponified with metal hydroxide to generate the acid salt (and thence the acid, if required). The following examples describes in more detail the preparation and characterisation of 2-methyl oleic acid. 1. Preparation of 2-Methyl Methyl Oleate Sodium hydride (60% dispersion, 8 g, 0.2 mol) was washed with heptane (2×15 ml) and then suspended in tetrahydrofuran (THF) (300 ml). 1,3-dimethyl-3,4,5,6-tetrahydro-2(1H)-pyrimidinone (DMPU) (26 g, 0.2 mol) was added and the mixture was stirred under an atmosphere of nitrogen. Methyl oleate (67.46 ml, 0.2 mol) was added dropwise over a period of two hours and the resulting mixture was heated to reflux for 12 hours and then cooled to 0° C. Methyl iodide (0.2 mol) was then added dropwise and the reaction mixture was again heated to reflux for a further two hours. 
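The numeric preferences in the second and fourth aspects (a Kow of at least 0.05 and desirably 0.05 to 1, an elastic modulus G′ that exceeds the loss modulus G″ at 1 Hz, and a viscosity window that collapses on contact with oil) can be restated as a simple screen. The following Python sketch is only a hedged illustration of those stated thresholds, not part of the patent itself, and the function and parameter names are invented:

def is_viscoelastic(g_storage, g_loss):
    """'Viscoelastic' in the sense used above: storage modulus G' greater than
    loss modulus G'' when measured with an oscillatory rheometer at 1 Hz."""
    return g_storage > g_loss

def meets_stated_preferences(kow, g_storage, g_loss, viscosity_cp):
    """Combine the quoted preferences: Kow >= 0.05 (desirably 0.05-1) and a
    viscoelastic fluid with viscosity of roughly 20-1000 cP at 60 deg C."""
    return kow >= 0.05 and is_viscoelastic(g_storage, g_loss) and 20 <= viscosity_cp <= 1000

# Hypothetical numbers, for illustration only
print(meets_stated_preferences(kow=0.11, g_storage=5.0, g_loss=2.0, viscosity_cp=300))  # True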
Next the reaction mixture was cooled to 0° C. and quenched with water (15 ml), concentrated in vacuo and purified by column chromatography (SiO2, 1:9, diethyl ether:petroleum ether) to give 2-methyl methyl oleate as a yellow oil (50 g, 0.16 mol, 81%). 2. Preparation of 2-Methyl Oleic Acid The 2-methyl methyl oleate from the above reaction (40 g, 0.13 mol) was dissolved in a (3:2:1) methanol, THF and water mixture (300 ml), and potassium hydroxide (14.4 g, 0.26 mol) was added and the reaction heated to reflux for 15 hours. The reaction mixture was then cooled and neutralised using dilute hydrochloric acid. The organic layer was separated and concentrated in vacuo, and was then purified by column chromatography (SiO2, (2:8) ethyl acetate:petroleum ether) to give 2-methyl oleic acid as an oil. A rigid gel was formed when a 10% solution of potassium 2-methyl oleate (the potassium salt of the 2-methyl oleic acid prepared above) was mixed with an equal volume of a brine containing 16% KCl. Contacting this gel with a representative hydrocarbon, such as heptane, resulted in a dramatic loss of viscosity and the formation of two low viscosity clear solutions: an upper oil phase and a lower aqueous phase. The formation of an emulsion was not observed. Thin-layer chromatography and infrared spectroscopy showed the presence of the branched oleate in both phases. The gel is apparently broken by a combination of micellar rearrangement and dissolution of the branched oleate in the oil phase. Consequently the breaking rate of the branched oleate is faster than that of the equivalent unbranched oleate. This is demonstrated in Gel strength is a semi-quantitative measure of the flowability of surfactant-based gel relative to the flowability of the precursor fluid before addition of the surfactant. There are four gel strength codings ranging from 1 (flowability of the original precursor fluid) to 4 (deformable, non-flowing gel). A particular gel is given a coding by matching the gel to one of the illustrations shown in Using infra-red spectroscopy, the value of Kow for the potassium 2-methyl oleate of the broken branched gel was measured as 0.11. In contrast the value of Kow for the potassium oleate of the broken unbranched gel was measured as effectively zero. The rapid breakdown of the branched oleate surfactant gels, with little or no subsequent emulsion, leads to the expectation that these gels will be particularly suitable for use as wellbore fluids, such as fluids for hydraulic fracturing of oil-bearing zones. Excellent clean up of the fluids and reduced impairment of zone matrix permeability can also be expected because emulsion formation can be avoided. While the invention has been described in conjunction with the exemplary embodiments described above, many equivalent modifications and variations will be apparent to those skilled in the art when given this disclosure. Accordingly, the exemplary embodiments of the invention set forth above are considered to be illustrative and not limiting. Various changes to the described embodiments may be made without departing from the spirit and scope of the invention.
0.6266
FineWeb
June 4, 2009 Apes Help Scientists Discover Origins Of Laughter When researchers set out to study the origins of human laughter, some gorillas and chimps were literally tickled to assist. The scientists tickled 22 young orangutans, chimpanzees, gorillas, and bonobos, as well as three human infants, then acoustically analyzed the laughing sounds they produced.The results led researchers to conclude that people and great apes inherited laughter from a common ancestor that lived more than 10 million years ago. Although the vocalizations varied, the researchers found that the patterns of changes fit with evolutionary splits in the human and ape family tree. "This study is the first phylogenetic test of the evolutionary continuity of a human emotional expression," said Marina Davila Ross of the University of Portsmouth in the United Kingdom. "It supports the idea that there is laughter in apes." A quantitative phylogenetic analysis of the acoustic data produced by the tickled infants and apes revealed that the best "tree" to represent the evolutionary relationships among those sounds matched the known evolutionary relationships among the five species based on genetics. The researchers said that the findings support a common evolutionary origin for the human and ape tickle-induced expressions. They also provide evidence that laughter evolved slowly over the last 10 to 16 million years of primate evolutionary history. Nevertheless, human laughter is acoustically distinct from that of great apes and reached that state through an evident exaggeration of pre-existing acoustic features after the hominin separation from ancestors shared with bonobos and chimps, about 4.5 to 6 million years ago, Ross said. For example, humans make laughter sounds on the exhale. Although chimps do that as well, they can also laugh with an alternating flow of air, both in and out. Humans also use more regular voicing in comparison to apes, meaning that the vocal cords regularly vibrate. Ross said the researchers were surprised to find that gorillas and bonobos can sustain exhalations during vocalization that are three to four times longer than a normal breath cycle -- an ability that had been thought to be a uniquely human adaptation, important to our capacity to speak. "Taken together," the researchers wrote, "the acoustic and phylogenetic results provide clear evidence of a common evolutionary origin for tickling-induced laughter in humans and tickling-induced vocalizations in great apes. While most pronounced acoustic differences were found between humans and great apes, interspecific differences in vocal acoustics nonetheless supported a quantitatively derived phylogenetic tree that coincides with the well established, genetically based relationship among these species. At a minimum, one can conclude that it is appropriate to consider 'laughter' to be a cross-species phenomenon, and that it is therefore not anthropomorphic to use this term for tickling-induced vocalizations produced by the great apes." The research was reported online on June 4th in Current Biology, a Cell Press publication. On the Net:
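The article describes the tree-building step only in general terms. Purely to illustrate the underlying idea of grouping species by the similarity of acoustic measurements and then comparing that grouping against the known family tree, here is a generic hierarchical-clustering sketch in Python; the feature values are invented and this is not the authors' actual phylogenetic method:

import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Invented acoustic feature vectors (e.g. voicing regularity, exhale-only airflow, call duration)
species = ["human", "bonobo", "chimpanzee", "gorilla", "orangutan"]
features = np.array([
    [0.9, 0.95, 0.30],
    [0.6, 0.60, 0.55],
    [0.6, 0.55, 0.60],
    [0.4, 0.40, 0.75],
    [0.3, 0.35, 0.85],
])

# Cluster species by the similarity of their (hypothetical) laugh acoustics
tree = linkage(pdist(features), method="average")
print(dendrogram(tree, labels=species, no_plot=True)["ivl"])  # leaf order to compare against the known phylogeny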
0.7468
FineWeb
Inaugural Lecture and Reception: Professor Rebecca Sweetman, School of Classics Professor Rebecca Sweetman of the School of Classics will give her Inaugural Lecture 'Sailing the Wine-Dark Sea: the Archaeology of Roman Crete and the Cyclades'. Commonly perceived as pawns in wider imperial machinations, Crete and the Cyclades have often been side-lined as peripheral due to their assumed seclusion. However, even a brief analysis of the archaeological evidence indicates that these islands not only played significant roles within the wider Roman Empire, but in some cases, they flourished as a result. Furthermore, these islands experienced the monumentalized manifestation of Christianity much earlier than their mainland counterparts to the west. This unexpected success can be seen in terms of resilience. To establish why this is the case, it is necessary to shed the bias of preconceived notions of insularity. In doing so, this allows the significant variety of communication networks the islands had to be identified. Following a brief introduction to the methodologies, topography and fieldwork, in this talk I will focus on how island resilience helped shaped the success stories of Crete and the Cyclades in the Roman and Late Antique periods.
0.8689
FineWeb
Marketing plays a vital role in the product or service development process, helping to determine: - Who else is making similar widgets; - How to differentiate the planned widget from all other widgets; - What the potential market is for the planned widget; and - What the price should be. These last two bullets are vital to ensure that the company can recover the widget development costs and then make a profit. As to differentiation from other widgets, Marketing should be part of all design reviews throughout the life of the widget to ensure that the development roadmap keeps at least one step ahead of the competition. The alternative is a Dilbert-like organization where engineering develops a new product and then tosses it over the fence to marketing. Marketing and Sales are then supposed to sell something that is basically unsellable.
0.9455
FineWeb
In an attempt to prevent shark attacks, the Australian government has proposed a plan that can only be described as horrific. Shark attacks are scary. Despite what “Jaws” taught us, however, they’re also extremely rare. Global statistics show that wasps, toasters, chairs, domestic dogs and even falling coconuts kill far more people every year than sharks. But that didn’t stop Western Australia’s government from buying into the hysteria by proposing a plan that is both barbaric and ecologically devastating. There have been six fatal shark attacks in Australian waters over the past two years. In response, officials in Western Australia have proposed a highly controversial “shark management” plan that calls for the slaughter of any shark longer than 3 meters (9.8 feet) found swimming anywhere near popular beaches. According to the Guardian, sharks unlucky enough to get hooked on baited drum lines will be ‘humanely destroyed’ with a firearm. The shark corpses will then be tagged, taken further out to sea and dumped. For just a moment, let’s set aside the glaring fact that sharks have called the ocean home for over 400 million years, and that Australians are encroaching on their habitat, and not the other way around. Instead, let’s focus on the huge impact this plan will have on the ocean ecosystem, and the very slim chance it will actually reduce attacks. “As predators, [sharks] shift their prey’s spatial habitat, which alters the feeding strategy and diets of other species,” explains Oceana. “Through the spatial controls and abundance, sharks indirectly maintain the seagrass and corals reef habitats. The loss of sharks has led to the decline in coral reefs, seagrass beds and the loss of commercial fisheries.” Around the globe, growing awareness about the sharp decline of shark populations has led to a surge in conservation efforts. Shark finning, spurred by the demand for shark fin soup, has been banned in several significant regions, and there’s been a successful push to establish shark sanctuaries. “While the rest of the world is turning to shark conservation, our government is sticking his head in the sand, ignoring all the experts and employing an archaic strategy,” Ross Weir, founder of Western Australians for Shark Conservation, told TIME magazine. “What they are doing is illegal and violates 15 different United Nations conventions and treaties.” There’s also nothing to suggest that killing sharks will actually stop shark attacks. “…what will the killing of this one shark achieve? There is absolutely no evidence to support the “rogue shark” theory, sharks are no more or less likely to bite a human if they have bitten before. It will not act as a deterrent for other sharks,” blogged Dr. Rachel Robbins, chief scientist of the Fox Shark Research Foundation. “The way to reduce attacks is not to kill anything that poses a threat to us. It is to educate people on how to minimize their risk, the times of day and conditions under which attacks are most likely to occur, put warnings at beaches that these areas are known to be frequented by white sharks.”
0.6673
FineWeb
Factorization: 2 × 17. Divisors: 1, 2, 17, 34. 34 is the ninth distinct semiprime and has four divisors including one and itself. Its neighbors, 33 and 35, also are distinct semiprimes, having four divisors each, and 34 is the smallest number to be surrounded by numbers with the same number of divisors as it has. It is also in the first cluster of three distinct semiprimes, being within 33, 34, 35; the next such cluster of semiprimes is 85, 86, 87. It is the ninth Fibonacci number and a companion Pell number. Since it is an odd-indexed Fibonacci number, 34 is a Markov number, appearing in solutions with other Fibonacci numbers, such as (1, 13, 34), (1, 34, 89), etc. Thirty-four is a heptagonal number. - The atomic number of selenium - One of the magic numbers in physics. - Messier object M34, a magnitude 6.0 open cluster in the constellation Perseus - The New General Catalogue object NGC 34, a peculiar galaxy in the constellation Cetus - The Saros number of the solar eclipse series which began on 1917 BC August and ended on 384 BC February. The duration of Saros series 34 was 1532.5 years, and it contained 86 solar eclipses. - The Saros number of the lunar eclipse series which began on 1633 BC May and ended on 335 BC June. The duration of Saros series 34 was 1298.1 years, and it contained 73 lunar eclipses. - The jersey number 34 has been retired by several North American sports teams in honor of past playing greats or other key figures: - In Major League Baseball: - The Houston Astros and Texas Rangers, both for Hall of Famer Nolan Ryan. - The Minnesota Twins, for Hall of Famer Kirby Puckett. - The Oakland Athletics and Milwaukee Brewers, both for Hall of Famer Rollie Fingers. - Additionally, the Los Angeles Dodgers have not issued the number since the departure of Fernando Valenzuela following the 1990 season. Under current team policy, Valenzuela's number is not eligible for retirement because he is not in the Hall of Fame. - In the NBA: - In the NFL: - In the NCAA: - 34th Street (Manhattan), a major cross-town street in New York City - 34th Street (New York City Subway), multiple New York City subway stations In other fields 34 is also: - The traffic code of Istanbul, Turkey - "#34", a song by the Dave Matthews Band - The number of the French department Hérault - +34 is the code for international direct-dial phone calls to Spain
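Several of the number-theoretic facts above are easy to check directly; the short Python sketch below uses only the standard definitions (divisor counting, the Fibonacci recurrence, the heptagonal-number formula n(5n - 3)/2 and the Markov equation x^2 + y^2 + z^2 = 3xyz):

def divisor_count(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

print([divisor_count(n) for n in (33, 34, 35)])  # [4, 4, 4]: 34 is flanked by numbers with the same divisor count

fib = [1, 1]
while len(fib) < 9:
    fib.append(fib[-1] + fib[-2])
print(fib[8])  # 34, the ninth Fibonacci number

print(4 * (5 * 4 - 3) // 2)  # 34, the fourth heptagonal number

x, y, z = 1, 13, 34
print(x**2 + y**2 + z**2 == 3 * x * y * z)  # True: (1, 13, 34) is a Markov triple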
0.9163
FineWeb
The old wastewater treatment plant in Prague-Bubenec The old wastewater treatment plant in Prague-Bubeneč is an important witness to the history of architecture, technology and water management. Built in 1901-1906, it was used for the treatment of most of the sewage water in the city of Prague until 1967. In the steam engine room one can view the still functioning machines from the early 20th century. The design of the sewer system with the proposed technical parameters of the treatment plant was prepared by a construction engineer of British origin, Sir William Heerlein Lindley. In 2010 his work was declared a cultural monument. The old plant is one of the most important industrial heritage sites in Europe. The well preserved building of the old wastewater treatment plant in Bubeneč is the oldest preserved facility of its kind in Europe, a unique piece of industrial architecture and an eco-monument of world importance, which is interesting from both architectural and technological points of view. Already in 1884, a competition was announced for the design of a new sewerage system and wastewater treatment plant; several projects were drafted, but only the project of the famous English engineer William Heerlein Lindley was implemented – he had a great deal of practical experience from other big European cities and used some positive elements of previous projects by Czech designers in his own. His layout of the Prague sewerage network used the natural gradients of the catchment so that sewage pumping was not necessary. The sewerage network discharged into the new wastewater treatment plant in Bubeneč. At that time Prague’s sewerage system measured about 90 km. The wastewater treatment plant built to Lindley’s project went up in 1900 – 1906 as a part of the new Prague sewerage system, which was designed for 700 000 inhabitants. The sedimentation treatment plant in Bubeneč was the first major water treatment building in Bohemia. It consists of a main operation building with two chimneys, a smoke chimney and a ventilation chimney. Under the ground there are a six-foot-deep sand trap, ten underground septic tanks, two wells and sewage sludge pump shafts. The sludge from the sedimentation tanks was pumped to two sludge tanks on the Emperor’s Island or to ships, which transported it to other sludge tanks, from where it was sold (after drying) as a highly demanded fertilizer. The railway branch led to the sludge tanks on the Emperor’s Island. The three-stage cleaning efficiency at that time was about 40%. The capacity of the wastewater plant began to be insufficient from the 1920s, and consequently only an extension was built before World War II. A brand new wastewater treatment plant was built much later, in 1967. Today’s sewerage system is about 2,400 km long, whereas a part of the sewage conduits is man-sized, i.e. greater than 80 cm; other sewage conduits are lower, i.e. less than man-sized. It has about 55,000 manholes and only 19 pumping stations. Today’s wastewater treatment plants reach an efficiency of 90 to 95%. The original wastewater treatment plant area was still in good condition, and so it has been maintained next to the new one. Thus it was possible to establish a foundation in 1992, the mission of which was to operate the Eco-museum in this precious building. 
Visitors to the museum enter through the inlet crypt, where a water wheel driven by the incoming sewage used to be fitted, and continue into the largest underground structure – the sand trap, where the three main municipal sewers discharged. From there they pass to the discharge sluices and mechanical rack catchers and then go down to the ten sedimentation tanks, where the primary sludge used as a fertilizer settled. The highlight of the tour is the two-storey engine room with two reconstructed steam engines installed in 1904, both still functional, below which there are flood pumps. The steam boiler room with its two coal boilers is also still functional.
0.5434
FineWeb
Welcome to the 7 Day Challenge. For 7 days, we are testing our Emergency Preparedness and Food Storage Plans. Each day will bring a NEW mock emergency, or situation that will test at least one of the reasons “WHY” we strive to be prepared! REMEMBER: No going to a store, or spending any money for the entire 7 days! And please feel free to adapt the scenarios to fit your own family and situation. You just discovered that you have some kind of allergy to an unknown preservative. Since you aren’t able to isolate what it is exactly, you now need to avoid ALL preservatives and start cooking all of your food from scratch. This includes making a loaf of bread. Remember, no going to the store. **A little rule of thumb: you have to know where the ingredient comes from, and be able to pronounce the ingredients on any canned item you use (meaning a can of tomatoes is ok, but not a can of spaghetti sauce)** - Cook breakfast from scratch - Cook lunch from scratch - Cook dinner from scratch - Bake a loaf of homemade bread - Print out some of your favorite recipes to use in case the internet is down during an emergency - For this day, and ALL days of the challenge: no spending money, no going to stores, and no restaurants. - Do not use ANY pre-packaged or convenience-type foods. No mixes, boxed cereals, canned soups or sauces, etc. If you can’t pronounce all the ingredients and say where they came from, it’s probably a NO go. - Do not buy or borrow ingredients. Use only what you have stored. - Make a delicious dessert from scratch. - Plan an entire week’s worth of meals you could make out of your current food supplies. - Do some research on the health benefits of eating fewer preservatives. REMEMBER, TOMORROW’S CHALLENGE WILL BE DIFFERENT. How long would you have lasted under these conditions? Make sure you fill out today’s Report Card to see how well you did, to keep track of areas you can improve, to remember things you need to do, and things you need to buy. Use the data to make a game plan to take you to the next level of preparedness, whatever that may be.
0.5049
FineWeb
Containing large floor discs and smaller handheld matching discs, this material challenges childrens’ sense of touch on both hands and feet. At the same time, it develops the ability to describe sense impressions verbally. Games can be adjusted to fit any child’s Great for those who might have a slight fear of dark places. Children can feel less confined than in a solid wall structure where they can’t see what is happening outside. These floor tiles will create a fun and exciting environment as children see cause and effect of the internal liquids moving. They are excellent for creating sensory play spaces, quiet reading areas and for encouraging exploratory play. Children are encouraged to A versatile toy shaped like a turtle shell, with numerous uses. Sit and rock/spin in it, fill it with sand, use it in water play, upturn and stand on it. Good for balancing skills. A multi-purpose board to assist with exercise, balance and creative play. A multi-purpose board to help with exercise, balance and creative play An inflatable ‘ball’ in the shape of a star. Good for hand/eye coordination when throwing and catching. Can be used for general fun and team building exercises allowing children to work together in a group play situation. Can be played with friends or by oneself against a wall – indoors or outside. It helps children with hand-eye coordination, timing and encourages active play. This is an indoor training tool to develop gross motor skills, balance and posture; it can also help release children’s anxiety and anger in a safe environment. Hang the target mat onto a sturdy and stable place such as the wall
0.6534
FineWeb
Addressing Issues of Diversity in Curriculum Materials and Teacher Education David McLaughlin (MSU), James Gallagher (MSU), Mary Heitzman (UM), Shawn Stevens (UM), and Su Swarat (NU) Aikenhead, G. (2001). Integrating Western and Aboriginal sciences: Cross-cultural science teaching. Research in Science Education, 31, 337-355. The article addresses issues of social power and privilege experienced by Aboriginal students in science classrooms. A rationale for a cross-cultural science education dedicated to all students making personal meaning out of their science classrooms is presented. The author then describes a research and development project for years 6-11 that illustrates cross-cultural science teaching in which Western and Aboriginal sciences are integrated. Ball, D. L., & Cohen, D. K. (1996). Reform by the book: What is – or might be – the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25(9), 6-8, 14. The authors describe the uneven role of curriculum materials in practice and adopt the perspective that curriculum materials could contribute to professional practice if they were created with closer attention to processes of curriculum enactment. “Educative curriculum materials” place teachers in the center of curriculum construction and make teachers’ learning central to efforts to improve education. Curriculum use and construction are framed as activities that draw on teachers’ understanding and students’ thinking. Barab, S. A., & Luehmann, A. L. (2003). Building sustainable science curriculum: Acknowledging and accommodating local adaptation. Science Education, 87(4), 454-567. Developing and supporting the implementation of project-based, technology-rich science curriculum that is consistent with international calls for a new approach to science education while at the same time meeting the everyday needs of classroom teachers is a core...
0.8633
FineWeb
When sizing a motor for any application there are a lot of factors to consider. Requirements such as speed, torque, frame size, ramp-up and load all need to be carefully considered. But the first consideration when choosing a motor is understandably how much work can be performed by said motor. The amount of “work” an electric motor can perform is measured in horsepower. When assisting a customer with sizing a motor we often get asked how to determine horsepower, because some motor data plates do not clearly state this value. Luckily, by using a simple bit of math you can quickly determine horsepower using minimal information; specifically, the amperage and voltage rating of a motor.

Step One: Determine Your Motor’s Wattage
The first step toward determining horsepower is determining another value by which the rate of work is measured, called a watt. Named after the famous Scottish inventor James Watt, the watt is a unit of measure that is used to quantify energy transfer in a system. To determine wattage in a motor you must multiply the amperage rating by the voltage rating.
V X A = W
Example: 460V X 30A = 13,800 Watts

Step Two: Factor in Efficiency Rating
At its core, an electric motor’s job is converting electrical energy into mechanical energy a machine can use to perform work. Unfortunately, no motor is 100% efficient and there are inherent losses to work potential that must be factored in. When listed on a motor data plate this value is most often represented as a percentage. When you see a motor efficiency rating you must convert from a percentage to a decimal for the purposes of this equation. For instance, 85% efficiency would be .85 efficiency. Add it to your wattage calculation like so:
V X A X E = W
Example: 460V X 30A X .85 = 11,730

Step Three: Converting Wattage into Horsepower
Lastly, we need to convert wattage into horsepower. Roughly 746 watts equal one horsepower. Taking the example above, we divide our calculated wattage of 11,730 by 746. What we end up with is about 15.7, or right around 15 horsepower rounded down. That would mean if we had an example motor rated at 460 volts and 30 amps with an efficiency rating of 85 percent, this motor would be a 15 horsepower motor.

If you need help with your motor, whether with sizing or if you’re in need of having it repaired, the professionals at Global Electronic Services are here to help! Be sure to visit us online at www.gesrepair.com or call us at 1-877-249-1701 to learn more about our services. We’re proud to offer Surplus, Complete Repair and Maintenance on all types of Industrial Electronics, Servo Motors, AC and DC Motors, Hydraulics and Pneumatics. Please subscribe to our YouTube page and Like Us on Facebook! Thank you!
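For readers who want to script the same three steps, here is a minimal sketch in Kotlin. It is only an illustration of the arithmetic above; the function and constant names are made up for this example and are not part of any particular tool.

```kotlin
// Rough horsepower estimate from nameplate data, following the three steps above.
// Assumes volts * amps approximates input watts and that the nameplate efficiency
// already captures the motor's losses.
const val WATTS_PER_HP = 746.0  // roughly 746 watts per mechanical horsepower

fun estimateHorsepower(volts: Double, amps: Double, efficiency: Double): Double {
    val inputWatts = volts * amps              // Step one: V x A
    val outputWatts = inputWatts * efficiency  // Step two: apply efficiency, e.g. 0.85
    return outputWatts / WATTS_PER_HP          // Step three: convert watts to horsepower
}

fun main() {
    val hp = estimateHorsepower(volts = 460.0, amps = 30.0, efficiency = 0.85)
    println("Estimated horsepower: %.1f".format(hp))  // prints about 15.7
}
```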
0.6721
FineWeb
We have spent our whole lives hearing how good vitamin C was for preventing the common cold. Thanks to the research of Dr. Linus Pauling in 1970, the popularity of vitamin C became unstoppable. The dose said to be needed to avoid the common cold was 1000 milligrams a day. Numerous studies have since been published dismantling that myth, and now Cochrane has published a conclusive review: Douglas RM, Hemilä H, Chalker E, Treacy B. Vitamin C for preventing and treating the common cold. Cochrane Database of Systematic Reviews 2007, Issue 3. Here is the summary in English:

This Cochrane review found that taking vitamin C regularly has no effect on common cold incidence in the ordinary population. It reduced the duration and severity of common cold symptoms slightly, although the size of the effect was so small its clinical usefulness is uncertain. The authors investigated whether oral doses of 0.2 g or more daily of vitamin C reduce the incidence, duration or severity of the common cold when used either as continuous prophylaxis or after the onset of symptoms. The review included studies using a vitamin C dose of greater than 0.2 g per day and those with a placebo comparison.
• For the prophylaxis of colds, the authors carried out a meta-analysis of 30 trial comparisons involving 11,350 study participants. The pooled relative risk (RR) of developing a cold whilst taking prophylactic vitamin C was 0.96 (95% confidence intervals (CI) 0.92 to 1.00). However, a subgroup of six trials involving a total of 642 marathon runners, skiers, and soldiers on sub-arctic exercises reported a pooled RR of 0.50 (95% CI 0.38 to 0.66), i.e. a 50% reduction in the risk of a cold for this group of people.
• For the duration of the common cold during prophylaxis, the authors carried out a meta-analysis using 30 comparisons involving 9676 respiratory episodes. They found a consistent benefit, with a reduction in cold duration of 8% (95% CI 3% to 13%) for adults and 13.6% (95% CI 5% to 22%) for children.
• For the duration of cold during therapy with vitamin C started after symptom onset, the authors carried out a meta-analysis of 7 trials involving 3294 respiratory episodes. No significant differences from placebo were seen.
• No significant differences were seen in a meta-analysis of 4 trial comparisons involving 2753 respiratory episodes in cold severity during therapy with vitamin C.
The authors conclude, “The failure of vitamin C supplementation to reduce the incidence of colds in the normal population indicates that routine mega-dose prophylaxis is not rationally justified for community use. But evidence suggests that it could be justified in people exposed to brief periods of severe physical exercise or cold environments.”
0.9772
FineWeb
Micro PET-CT Camera
After successful operation of the prototype MDAPET camera, we developed a low-cost, high-sensitivity and high-resolution dedicated animal PET camera (RRPET). In 2006, we successfully completed the construction of the RRPET camera, and it was then commercialized as the world’s first animal PET-CT (XPET) scanner. This camera is based on two approaches: the photomultiplier-quadrant-sharing (PQS) detector design that was first used in the construction of the MDAPET camera, and the slab-sandwich-slice (SSS) production technique that was first introduced in the construction of the HOTPET human camera for building the detector blocks more efficiently. The RRPET camera consists of 180 BGO (Bismuth Germanate) blocks arranged in 48 rings. See RRPET specifications, RRPET images and RRPET performance.
0.7474
FineWeb
The trade-off between pleiotropy and redundancy in telecommunications networks is analyzed in this paper. The networks are optimized to reduce installation costs and propagation delays. Pleiotropy of a server in a telecommunications network is defined as the number of clients and servers that it can service, whilst redundancy is defined as the number of servers servicing a client. Telecommunications networks containing many servers with large pleiotropy are cost-effective but vulnerable to network failures and attacks. Conversely, those networks containing many servers with high redundancy are reliable but costly. Several key issues regarding the choice of cost functions and techniques in evolutionary computation (such as the modeling of Darwinian evolution, and mutualism and commensalism) are discussed, and a future research agenda is outlined. Experimental results indicate that the pleiotropy of servers in the optimum network does improve, whilst the redundancy of clients does not vary significantly, as expected, with evolving networks. This is due to the controlled evolution of networks that is modeled by the steady-state genetic algorithm; changes in telecommunications networks that occur drastically over a very short period of time are rare.
0.9991
FineWeb
The vegetarian recipe for Easy Eggless Cream Cheese Cupcakes: servings – 24 eggless cupcakes - 1 block cream cheese (suitable for vegetarians) - 1 tbsp butter - 2 cups fresh milk - 1/2 cup sugar - 1 tsp vanilla essence - 1 tsp baking soda (sieve) - 4 cups self-rising flour (sieve) - Mix the cream cheese with butter, fresh milk, sugar, vanilla essence and baking soda. - Add self-rising flour and stir until a smooth batter is formed. - Place the paper cupcake cases in the metal cupcake molds and pour the batter into the cases. - Bake in a preheated oven at 170°C for about 20 minutes. - Remove and let cool. - 2 blocks cream cheese (suitable for vegetarians) - 1/2 block butter - 3/4 cup icing sugar - 4 drops red liquid food coloring (optional) - Mix the cream cheese with the butter, icing sugar and red liquid food coloring. - Stir until a smooth frosting is formed. - Pipe the frosting onto the cooled cream cheese cupcakes.
0.9251
FineWeb
Latest photos on AncientFaces
No one from the Quiles-cruz community has shared photos. Here are new photos on AncientFaces:

Quiles-cruz Surname History
The family history of the Quiles-cruz last name is maintained by the AncientFaces community. Join the community by adding to this genealogy of the Quiles-cruz:
- Quiles-cruz family history
- Quiles-cruz country of origin, nationality, & ethnicity
- Quiles-cruz last name meaning & etymology
- Quiles-cruz spelling & pronunciation
- genealogy and family tree

Quiles-cruz Country of Origin, Nationality, & Ethnicity
No one has submitted information on Quiles-cruz country of origin, nationality, or ethnicity. The following is speculative information about Quiles-cruz. The nationality of Quiles-cruz may be very difficult to determine in cases in which country boundaries change over time, leaving the original nationality a mystery. The original ethnicity of Quiles-cruz may be in dispute based on whether the name came into being organically and independently in different locales; for example, in the case of names that come from a professional trade, which can come into being in multiple countries independently (such as the family name "Brewster" which refers to a female brewer).

Quiles-cruz Meaning & Etymology
No one has submitted information on Quiles-cruz meaning and etymology. The following is speculative information about Quiles-cruz. The meaning of Quiles-cruz may come from a profession, such as the name "Archer" which was given to people who were bowmen. Some of these profession-based last names may be a profession in another language. This is why it is important to research the ethnicity of a name, and the languages spoken by its early ancestors. Many modern names like Quiles-cruz come from religious texts like the Bhagavadgītā, the Quran, the Bible, and so on. Often these surnames are shortened versions of a religious expression such as "Favored of God".

Quiles-cruz Pronunciation & Spelling Variations
No one has added information on Quiles-cruz spellings or pronunciations. The following is speculative information about Quiles-cruz. In early history, when few people could write, names such as Quiles-cruz were written down based on their pronunciation when people's names were recorded in court, church, and government records. This could have given rise to misspellings of Quiles-cruz. Understanding spelling variations and alternate spellings of the Quiles-cruz name is important to understanding the history of the name. Last names like Quiles-cruz vary in how they're said and written as they travel across tribes, family branches, and countries across time.

Last names similar to Quiles-cruz
Quilesdmontalvo Quilesfalicea Quilesfcruz Quilesfgandulla Quilesfgonzalez Quilesfmartinez Quilesfmontalvo Quilesfnieves Quilesfperez Quilesframos Quilesfrankie Quilesfreyes Quilesfrivera Quilesfrobles Quilesfruiz Quilesfsoto Quilesfvazquez Quilesgonz Quilesgonzal Quilesgonzalez

Quiles-cruz Family Tree
Here are a few of the Quiles-cruz genealogies shared by AncientFaces users.
0.5907
FineWeb
Re: Dynamic Userform Design
hmmmmmm, I REALLY don't like some of what you're doing, and it's hard to tell if that's just "not the way I'd do it" or actually wrong. Part of your problem could be the unload userform2 command in the cmbClass module. I THINK the instance of this object is part of the userform2 object, so when you unload userform2 you are attempting to unload something that is currently executing. When I try it, if I move the msgbox unloading to ABOVE the unload command, I SEE the msg before excel dies; I don't see the message after. So the unload is dying, and I SUSPECT it's dying because you are unloading something that is currently running.
0.6671
FineWeb
Historical trauma, or intergenerational trauma, refers to the cumulative emotional and psychological wounding of a person or generation caused by traumatic experiences or events. Historical trauma can be experienced by any group of people that experience a trauma. Examples include genocide, enslavement, or ethnic cleansing. It can affect many generations of a family or an entire community. Historical trauma can lead to substance abuse, depression, anxiety, anger, violence, suicide, and alcoholism within the afflicted communities. If you are feeling the effects of historical or intergenerational trauma, reach out to one of TherapyDen’s experts today.
0.9761
FineWeb
And he dreamed, and behold! a ladder set up on the ground and its top reached to heaven; and behold, angels of God were ascending and descending upon it.

Last night I dreamed of an atom with a ladder wedged in the nucleus of the atom, with electrons jumping up and down the ladder.

For those readers unencumbered by the knowledge of atomic theory, a brief historical introduction may be in order. When the planetary theory of the atom was first proposed by Ernest Rutherford in 1911, it depicted an atom as a solar system wherein a nucleus was positioned at the center of the atom, with electrons orbiting around the nucleus as planets orbit the Sun. However, there was a problem. According to Maxwell’s theory of electromagnetism, accelerating electrons emit electromagnetic waves, thereby losing their energy. In Rutherford’s model, all electrons were doomed to fall on the nucleus, which, of course, did not happen. In 1913, Niels Bohr solved this problem by postulating that electrons were only allowed to occupy certain orbits with discrete energy levels. An electron can jump to a higher or lower orbit (by absorbing or emitting a photon) but otherwise orbits the nucleus without losing energy.

I don’t know if Niels Bohr read Torah, but if he did, this week’s portion may have inspired his insight. In Jacob’s dream, he saw a ladder wedged in the earth with angels moving up and down the ladder. One may ask, why would angels need a ladder to move up or down? In Ezekiel’s vision of Ma’ase Merkava, angels used their wings to fly to and fro, without the need of a ladder. So why did angels need a ladder in the dream of Jacob (Yaakov)? Perhaps it provides the symbolism for Bohr’s model of the atom. In my dream, the earth was the nucleus, angels were electrons, and rungs of the ladder were energy levels corresponding to orbits that electrons are allowed to occupy. In Jacob’s vision, angels didn’t fly (change their energy level continuously) but stepped up or down the ladder—one rung at a time. It seems to me, this is symbolic of electrons not being allowed to change their energy continuously but only being able to jump up or down one orbit, which is symbolized by the rungs of the ladder.

To take this metaphor a bit further, let us notice that when an electron jumps to a higher orbit, it absorbs a photon. When the electron jumps to a lower orbit, it emits a photon. According to the Zohar, Jacob’s ladder was the ladder of prayer. Angels going up the ladder brought up the prayers to heaven. Angels going down the ladder brought back the blessings. If photons—quanta of light—are symbolic of prayers and blessings, angels carrying the prayers up the ladder are symbolic of electrons going up the orbit as a result of being irradiated by photons (prayers). Likewise, just as angels going down the ladder carry down blessings, electrons, jumping to lower orbits, irradiate photons of light—blessings.

Philo of Alexandria (a.k.a. Philo Judaeus) offered another mystical symbolism of Jacob’s ladder—angels carrying up souls of departed people ascending to heaven or carrying down to earth souls destined to be born. This interpretation also fits well with our atomic metaphor. Indeed, a photon (symbolic of a person) absorbed by an electron dies, as it were, and only its energy (soul) is carried up by the electron to a higher orbit. Conversely, when an electron jumps down to a lower orbit, the extra energy (soul) causes the electron to emit a photon—symbolic of giving birth to a person in whom the soul incarnates.
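For readers who want the physics behind the metaphor spelled out, the standard Bohr result for hydrogen (textbook material, not specific to this essay) ties the electron's energy to the rung number n, and a jump between rungs absorbs or emits a photon whose energy is the gap between the two levels:

$$E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad E_{\text{photon}} = h\nu = E_{n_i} - E_{n_f} = 13.6\ \text{eV}\left(\frac{1}{n_f^{2}} - \frac{1}{n_i^{2}}\right)$$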
0.9352
FineWeb
In Kotlin, the concept of nullable types plays a crucial role in enhancing the safety and expressiveness of the language. This article aims to provide a comprehensive understanding of nullable types in Kotlin and how they contribute to writing more robust and reliable code.

What Are Nullable Types?
Nullable types in Kotlin allow variables to hold null values, providing a clear distinction between nullable and non-nullable types. This feature helps prevent null pointer exceptions, a common source of bugs in many programming languages.

Declaring Nullable Types
In Kotlin, to declare a variable as nullable, you append a question mark (?) to its type. For example, var name: String? declares a nullable string variable.

Safe Calls and the Elvis Operator
One of the key features of nullable types is the safe call operator (?.). It allows you to safely perform operations on a nullable variable without the risk of a null pointer exception. Additionally, the Elvis operator (?:) provides a concise way to handle null values by specifying a default value if the variable is null.

Type Checks and Smart Casts
Kotlin introduces smart casts, a mechanism that automatically casts a nullable type to a non-nullable type within a certain code block if a null check has been performed. This eliminates the need for explicit casting and enhances code readability.

The !! Operator and its Risks
While nullable types offer safety, the double exclamation mark (!!) operator allows you to forcefully assert that a nullable variable is non-null. However, this should be used cautiously, as it may lead to null pointer exceptions if the assertion is incorrect.

Working with Nullable Types in Collections
Kotlin’s standard library provides powerful tools for working with collections of nullable types. Functions like mapNotNull make it convenient to handle nullable elements within collections.

Nullable Types in Function Parameters
When defining functions in Kotlin, you can explicitly specify whether parameters accept nullable types. This helps in creating functions that are more flexible and adaptable to different use cases.

Migrating Existing Code to Use Nullable Types
For developers transitioning to Kotlin or updating existing code, understanding nullable types is crucial. This section explores best practices and strategies for migrating code to leverage the benefits of nullable types.
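A short, self-contained Kotlin sketch pulling these pieces together; the User class and findUser function are invented for illustration and are not from any specific codebase.

```kotlin
// Illustrates nullable declarations, safe calls, the Elvis operator,
// smart casts, mapNotNull, and (in a comment) the risky !! assertion.
data class User(val name: String, val email: String?)   // email may be null

fun findUser(id: Int): User? =                           // nullable return type
    if (id == 1) User("Ada", null) else null

fun main() {
    val user: User? = findUser(1)

    // Safe call plus Elvis operator: fall back to a default when anything is null.
    val email = user?.email ?: "no email on file"
    println(email)

    // Smart cast: after the null check, `user` is treated as non-null in this block.
    if (user != null) {
        println("Hello, ${user.name}")
    }

    // mapNotNull drops nulls while transforming a collection.
    val parsed = listOf("1", "two", "3").mapNotNull { it.toIntOrNull() }
    println(parsed)  // [1, 3]

    // The !! operator asserts non-null and throws if it is wrong -- use sparingly:
    // val risky = findUser(99)!!.name   // would throw a NullPointerException
}
```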
0.9637
FineWeb
In today’s fast-paced world, productivity is a key aspect that drives success in various domains. Technology continues to evolve, introducing innovative solutions to enhance efficiency and streamline workflows. One such groundbreaking advancement is Microsoft AI Copilot, an intelligent tool that revolutionizes productivity in the digital era. In this article, we will explore the features, installation process, and benefits of Microsoft AI Copilot while shedding light on how it empowers users to accomplish tasks more efficiently than ever before. The Evolution of Productivity Tools Over the years, productivity tools have undergone significant transformations. From the early days of basic word processors to modern-day collaboration platforms, the aim has always been to enhance efficiency and simplify work processes. Microsoft AI Copilot takes productivity to a whole new level by leveraging the power of AI to provide intelligent assistance and automate various tasks, reducing manual effort and boosting productivity. Key Features of Microsoft AI Copilot 1. Real-time Assistance Microsoft AI Copilot offers real-time suggestions and recommendations as you work, helping you complete tasks more efficiently. It analyzes your actions, understands context, and provides relevant suggestions based on best practices and user patterns. 2. Code Completion and Generation For software developers, AI Copilot proves to be an invaluable companion. It assists in code completion, automatically generating code snippets, and offering intelligent suggestions to speed up the development process. This feature significantly reduces the time spent on writing repetitive code and enhances the overall coding experience. 3. Contextual Documentation AI Copilot provides contextual documentation, offering relevant code examples, explanations, and references within the development environment. This feature eliminates the need for constant switching between different resources, enabling developers to access necessary information seamlessly. 4. Task Automation Repetitive and mundane tasks can hinder productivity and creativity. AI Copilot automates such tasks, freeing up valuable time for users to focus on more critical aspects of their work. It can automate tasks like formatting, refactoring, and debugging, enabling users to complete them swiftly and accurately. 5. Natural Language Support AI Copilot understands natural language queries and instructions, making it easier to interact with the tool. Users can simply describe the task or ask for specific assistance, and AI Copilot will provide relevant suggestions or perform the requested action. Installation and Access To install and access Microsoft AI Copilot, follow these simple steps: With these easy steps, you can quickly install and start using Microsoft AI Copilot to boost your productivity. Benefits of Microsoft AI Copilot Enhancing Collaboration and Efficiency Microsoft AI Copilot promotes collaboration by providing suggestions and insights that align with best practices and coding standards. It assists in creating consistent and high-quality code, even when working in teams. By streamlining collaboration and reducing errors, AI Copilot enables developers to work together seamlessly, resulting in increased efficiency and better code quality. Simplifying Complex Tasks Complex tasks often require extensive research and expertise. With AI Copilot, users can simplify such tasks by leveraging its intelligent assistance. 
Whether understanding complex code structures, navigating detailed documentation, or implementing advanced algorithms, AI Copilot offers the necessary support and guidance to tackle complex challenges easily. Customizing AI Copilot for Personalized Workflows Microsoft AI Copilot understands that every user has unique preferences and work patterns. It provides customization options, allowing users to tailor the tool according to their specific needs. Users can adjust the level of suggestions, enable/disable certain features, and personalize the tool’s behaviour, ensuring it aligns perfectly with their individual workflows. Addressing Privacy and Security Concerns With the increasing reliance on AI technologies, privacy and security are of paramount importance. Microsoft AI Copilot prioritizes user privacy and data protection. It operates within strict security measures, ensuring that sensitive information remains confidential and secure. Microsoft is committed to maintaining the highest privacy and data protection standards across all its products and services. The Future of Microsoft AI Copilot Microsoft AI Copilot represents the future of productivity tools. As technology advances, AI Copilot will evolve and adapt to meet the ever-changing needs of users. We can expect further enhancements, additional integrations with popular software, and improved support for different industries and domains. Microsoft’s dedication to innovation ensures that AI Copilot will remain at the forefront of revolutionizing productivity tools for the future. Microsoft AI Copilot is a game-changer in the realm of productivity tools. Its intelligent assistance, code generation capabilities, and task automation features empower users to accomplish more with greater efficiency. By streamlining workflows, enhancing collaboration, and simplifying complex tasks, AI Copilot proves to be an invaluable companion for professionals across various domains. Embrace the future of productivity with Microsoft AI Copilot and unlock your full potential. Frequently Asked Questions (FAQs) Q: How to install and access Microsoft AI Copilot? A: To install Microsoft AI Copilot, visit the official Microsoft website or the Microsoft Store. Download the AI Copilot plugin or extension and follow the installation instructions provided by Microsoft. Launch the application or software after installation to access AI Copilot features. Q: Does Microsoft AI Copilot support multiple programming languages? A: Yes, Microsoft AI Copilot supports multiple programming languages. It provides code completion and generation features for popular programming languages and development environments. Q: Can AI Copilot be customized to suit individual workflows? A: Yes, AI Copilot offers customization options. Users can adjust the level of suggestions, enable/disable specific features, and personalize the tool’s behaviour to align with their unique workflows. Q: Is Microsoft AI Copilot compatible with popular productivity software? A: Yes, Microsoft AI Copilot is designed to integrate seamlessly with popular productivity software and development environments, enhancing their functionality and productivity. Q: How does Microsoft prioritize privacy and data security with AI Copilot? A: Microsoft prioritizes user privacy and data security. AI Copilot operates within strict security measures, ensuring that sensitive information remains confidential and secure.
0.7597
FineWeb
Normatec Boot Attachment The Normatec Boot Attachments feature five overlapping zones for gapless compression. Composed of premium materials, this leg sleeve is compatible with the Normatec 3, Pulse and Pulse Pro 2.0 air compression devices (sold separately). Three size options; standard for 5’4” to 6’3” individuals, tall for 6’4” and over, and short for 5’3” and under individual. When connected to the Normatec 3 device or one of the Pulse devices, The Normatec Boot Attachment inflates and squeezes sore muscles in the legs and feet to increase circulation, enhance blood flow and reduce soreness. The full-length leg sleeves feature five overlapping zones for effective and gapless air compression technology. The custom foot design applies compression to the bottoms of the feet without uncomfortably squeezing the toes. The three lengths, Short, Standard and Tall are sold both individually and in pairs. - Overlapping Zones: Five overlapping zone sections which allow for a custom and gapless compression. - Compatibility: Compatible with the Normatec 3, Pulse 2.0 and Pulse Pro 2.0 series devices. - High-Quality: Composed of premium grade materials, these attachments are built to last and can be easily wiped down clean. Why You Need It: The Normatec Boot Attachment provides muscle relief in the quads, hamstrings and foot muscles. Once attached to a Normatec device, the air compression helps to reduce pain from sports, fitness or every day activities. The leg sleeve can be used to enhance blood flow, reduce soreness and improve athlete performance! How It Helps: Attach the NormaTec Boot Attachment to your Pulse 2.0 or Pulse Pro 2.0 device to control the time, pressure and zone settings of your leg and foot muscle recovery. These boot attachments include premium grade locking zippers and a specialized foot design to apply compression to the bottom of your feet without squeezing your toes uncomfortably. What You Can Do With It: The Boot Attachment fills with air, zone by zone, to create a lower body muscle massage. With five overlapping zones, and premium grade locking zippers, this high-quality sleeve allows for a gapless compression. Power control device not included. Standard, Short, Tall, Standard Pair, Short Pair, Tall Pair
0.8586
FineWeb
Sujatha Muralidharan is an immunologist with an interest in studying mechanisms of immune suppression (tolerance) induced in hosts in response to stresses such as bacterial infection or alcohol consumption. The goal of her research is to identify key molecules that play a regulatory role in immune tolerance so these could be targeted for development of novel and effective immunotherapies. She is currently a post-doctoral researcher in the lab of Dr. Linden Hu in the Microbiology department of Tufts University. Her research focuses on inflammatory responses to Lyme disease bacteria Borrelia burgdorferi in innate immune cells. She received her PhD in Immunology from Baylor College of Medicine where she studied the role of Wnt signaling in peripheral T cell activation and maturation. Outside of lab, she enjoys reading and watching science-fiction movies.
0.5867
FineWeb
Some fun games for adults include Who Am I and Mail Call. Another fun game is an orange race that uses oranges and pantyhose. Adults can play the orange race as a tournament, with players racing in elimination rounds, ending with a championship round.Continue Reading To play Who Am I, assign each person a name and attach it to his forehead or back. Each person can then ask 20 "yes or no" questions to figure out who he is. To play Mail Call, the group creates a closed circle with one person in the center. People can stand in the circle or use chairs. The person in the center then makes a statement, such as "mail call for everyone who is wearing blue." At that point, everyone in the circle who is wearing blue must switch places in the circle with another person who is wearing blue. Players are not allowed to move to the spot directly next to them. The object of the game is for the person in the middle to find a spot in the circle before someone else does. The next person in the middle continues the game with more "mail calls," choosing identifying factors to make players move. To play the orange relay race game, each racer must have two oranges and a pair of pantyhose. To set up, the racers place one orange on the floor and the other in a leg of the pantyhose, and tie the other leg around their waists. The leg with the orange should hang to the ground, and swing between the racer's legs. The racers then swing the hanging orange to push the second orange across the floor to the finish line. The first person across the finish line wins.Learn more about Group Games
0.6081
FineWeb
This tool converts genome coordinates and annotation files between assemblies. The input data can be entered into the text box or uploaded as a file. For files over 500Mb, use the command-line tool described in our LiftOver documentation. If a pair of assemblies cannot be selected from the pull-down menus, a sequential lift may still be possible (e.g., mm9 to mm10 to mm39). If your desired conversion is still not available, please contact us.
0.7202
FineWeb
ANALYSIS OF THE RELATIONSHIP BETWEEN EMPATHY AND FAMILY FUNCTIONING IN DENTISTRY STUDENTS OF THE LATIN AMERICAN UNIVERSITY OF SCIENCE AND TECHNOLOGY (ULACIT), SAN JOSE, COSTA RICA
1 Paniamor (COSTA RICA)
2 ULACIT (COSTA RICA)
3 Universidad Santo Tomás, Concepción, Chile. (CHILE)
4 Hospital Félix Bulnes, Departamento de Psiquiatría Infantil y del Adolescente (CHILE)
5 Universidad San Sebastián, Facultad de Odontología (CHILE)
About this paper:
Conference name: 9th International Conference on Education and New Learning Technologies
Dates: 3-5 July, 2017
Location: Barcelona, Spain
Abstract:
Empathy is a fundamental attribute for health science professionals, which has both affective and cognitive components, as well as a complex family influence during its development. This work seeks to establish the relationship between empathy and family functioning, as well as the possible relation with gender, in active students of Dentistry of the Latin American University of Science and Technology (ULACIT). A previous study in this same institution showed that gender is an influential factor in the levels of empathy, favoring women (Sánchez et al, 2013), but by increasing practical and community-related experiences, those differences between men and women decreased (Utsman et al, 2017). This paper analyzes the effect of family functioning on empathy levels using two instruments: the Family Functional Questionnaire (FACES) and the Jefferson Empathy Scale (JSE-S). A total of 159 dental students of ULACIT (Costa Rica), active in 2016, equivalent to 53.7% of the population, participated. The statistical analysis used ANOVA, the Kolmogorov-Smirnov test and Levene's test of equality of variances. The bifactorial analysis of variance, model III, shows that for general empathy there are no significant differences (p > 0.05), but if the dimensions are analyzed individually, “Compassion with Care” was superior in females (effect size of 0.032, test power 0.614). The other dimensions did not show gender differences. Regarding family function, the scale considered three different styles: balanced, intermediate, and extreme. Interestingly, most of the participants belonged to the third style. Family functionality has been described as responsible for generating sensitization and understanding behaviors towards the patient (Madera et al, 2015). Extreme families may be chaotically attached, chaotically detached, rigidly attached or rigidly detached, and these conditions could generate a strong need of fidelity and loyalty (Olson et al, 1983). The students with extreme styles of family functioning showed higher levels of empathy, which can be explained by the development of a personality structure and dynamics that incorporates both resilience and comprehension and acceptance of the differences between people. A person can overcome adversity, and learn to communicate effectively and recognize values and conditions of others. Another explanation for the result could be associated with the possible cultural bias in the measurement of family functioning that the FACES scale offers. Although there is literature reporting positive results of the adaptation of the scale to Spanish and its corresponding application in Latin American contexts (Costa et al, 2013), it is possible that the scale does not consider the particularities of the Costa Rican families.
The results of this study indicated that there is a relationship between the type of family functioning (according to the actual use of the scale) and empathy, where extreme families have higher values of empathy. The development of this communication skill is key for a health science professional, so the recognition of the influence of the student’s background is important to design a learning experience to develop empathy.
Keywords: Empathy, Family Function.
0.9313
FineWeb
This job posting is no longer active. Some call it a career, for us it’s a calling. National Jewish Health is currently seeking Clinical Laboratory Scientists to join our motivated and fast-paced COVID testing team. Positions are temporary and will have flexible schedules on days, evenings, and weekends with 10 hour shifts. Base rate of pay is $26.00/hour plus potential for additional shift differential based on schedule. What you’ll do: - Perform high complexity tests that are authorized by the laboratory supervisor, manager or director and reviews testing performance as applicable. - Follow the laboratory and NJH established policies and procedure manuals. - Participate and maintains records that demonstrate that proficiency testing samples are tested in the same manner as patient specimens. - Adhere to and understands the quality control policies of the laboratory documenting all quality control activities, instrument and procedural calibrations and maintenance performed. - Document all corrective actions taken when test systems deviate from the laboratory’s established performance specifications. - Follow GxP (e.g., GLP, GCLP, GCP, etc.) standards as defined by different national and international organizations (e.g., ISO, FDA, OECD, etc.) when appropriate for clinical or preclinical trials. - Performs competencies (including age-specific competencies and/or non-human species) as identified through the departmental competency program. - Monitors and reports on stocks of supplies and equipment, as directed. Makes reagents as necessary. - Performs error correction, photocopy and data entry and compilation as required. - Follow set guidelines to troubleshoot/correct assay problems or instrument malfunctions. Perform maintenance and works with supervisor/manager in troubleshooting QC or instrument problems. - Follow specific biosafety standards for the laboratory and protocols for handling potentially infectious material. What you’ll need: - Bachelor’s degree in Biology, Chemistry or a related scientific field. - 1 year of related laboratory work experience preferred. As the leading respiratory hospital in the nation, National Jewish Health is pioneering a new era of preventive and personalized medicine. By combining our efforts in comprehensive care, academic education and ground-breaking research, we're able to develop treatments that help our patients live more productive lives. If you believe in Breathing Science is Life, we invite you to join our team.
0.6887
FineWeb
What is Character Education? When we think about our students and wonder how we can better prepare them to be good, valuable citizens in the future, the idea of character education comes to my mind. Of course, we want our students to be proficient in math and reading, but we also want them to be proficient in being a productive and beneficial member of society. What better way to do that than introducing character education in the classroom! Character education is the act of instilling the values of kindness, generosity, and integrity in students. It consists of teaching the key components of moral excellence through one’s actions. What is moral excellence? Moral excellence is centered on one thing, and that is doing the right thing. It includes having integrity or doing what is right when no one is looking. It is showing care for others or having empathy when our friends are going through a hard time. Moral excellence is demonstrating kindness to those around you. It is being responsible and taking ownership of one’s actions. As we enter the holiday season, we can find several ways to easily integrate character education into the classroom. The holidays are an excellent time to teach students the value of kindness, charity, empathy, and putting the needs of others above their own. Below are some ways to help develop those invaluable characteristics in your students during the most wonderful time of the year! Character Education Activities for the Holidays Organize a Food, Toy, or Clothing Drive The holidays present a lot of fun, but they also present a lot of needs. There are always needs within every community, but it is especially important to reach out to those less fortunate during the holidays. Many are without family or lack the means necessary to attain items on their own due to financial circumstances or other personal situations. Students can organize food, toy, and/or clothing drives to help families continue to celebrate the holidays despite those unfortunate circumstances. Any drive of this nature requires community involvement and a large amount of responsibility from students in order to be successful. Students must learn to communicate with those in their communities to get the word out and better help those in need. Students learn to be responsible for collected materials and understand their importance. Fundraisers for an Important Cause During the holidays, students can raise money for important causes either locally or nationally. For instance, students may be encouraged to raise funds for cancer research, a local homeless shelter, or animal shelter. As a class, students can learn about the intended recipient of the funds before beginning the fundraising process. In doing so, students gain a better understanding of why it is important to raise money for their chosen organization. This understanding also helps to create a bigger desire in students to make a difference, too! Since students will be collecting money, students will learn to do the right thing even when no one is looking. They must collect money and show integrity to ensure that the money goes to its intended recipient only. One way to extend this idea within your classroom is to research two or three different organizations. Then, students can vote on which organization they would like to raise money for and why. Embracing Charity and Giving In continuing with the idea of drives and fundraisers, another excellent activity for character education is to embrace charity and giving. 
The central ideas of the holidays that echo all throughout the season are thankfulness and giving. Charity is the act of giving to others in need. Charity helps to develop empathy in students. In school, students could place themselves in another person’s shoes. For example, students could volunteer in the cafeteria or help clean the school building in order to better grasp all that cafeteria workers and custodians do on a daily basis. Outside of school, students could imagine what it must be like to be homeless or without basic needs and decide to do something about it. This may inspire them to volunteer at a local soup kitchen or shelter. Regardless of the location, acts of charity teach students to be sensitive to those around them, and they also remind students to be thankful for all they have. Random Acts of Kindness This is probably my favorite way to instill the values of character in students! It is fun and rewarding. It’s simple. Ask students to participate in random acts of kindness. These “acts” can be performed anonymously or not, but they are sure to put a smile on someone’s face. There are several ways to give acts of kindness while in school. Students could write thank you notes to public service workers, be directed to help a friend when they are having a bad day, clean up a mess that’s not their own, share words of encouragement with one another, or even make gifts for school staff members. Students can even spread kindness outside of school by delivering treats to local businesses, buying someone else’s meal, picking up trash, or surprising a neighbor with a meal. Clearly, providing others with an act of kindness can be as simple or complex as you desire. The main idea is to teach students to be kind to others and realize how it makes them feel in the process! Creatively Encourage Others One of the best aspects of the holiday season is how joyful it is! Students can spread cheer to others in a large number of ways, and in the process, they reinforce the need to care about others and their feelings. Students could go caroling, make holiday cards to share within the school or local nursing home, decorate holiday scenes to share with those in the hospital, etc. All of these activities are both fun and exciting for students, but when they realize the activity serves an additional purpose of providing joy to someone else, it makes it even more rewarding and enjoyable.
0.9529
FineWeb
To achieve this prestigious award a Venturer Scout must be able to set a goal; plan progress towards that goal; organise themselves and others; and maintain the determination to overcome difficulties and complete the task. They must also have achieved the Venturing Skills Award and complete the requirements in four award areas:
- Adventurous Activities – demonstrates that the Venturer Scout is challenged in initiative, expeditions and outdoor adventures.
- Community Involvement – activities centred on citizenship, community service and caring for the environment.
- Leadership Development – involvement in Unit management and leadership courses and studying different vocations.
- Personal Growth – self development through expressions, ideals, mental pursuits and personal lifestyle.
Each year only a few Venturer Scouts achieve this prestigious award, which is presented by the Governor and Chief Scout of New South Wales, as a representative of the Queen, at Government House.
0.6429
FineWeb
Molteni, D., Vitanza, E., & Battaglia, O. R. (2016). Smoothed Particles Hydrodynamics numerical simulations of droplets walking on viscous vibrating fluid. arXiv preprint arXiv:1601.05017. “We study the phenomenon of the “walking droplet”, by means of numerical fluid dynamics simulations using a standard version of the Smoothed Particle Hydrodynamics method. The phenomenon occurs when a millimetric drop is released on the surface of an oil of the same composition contained in a container subjected to vertical oscillations of frequency and amplitude close to the Faraday instability threshold. At appropriate values of the parameters of the system under study, the liquid drop jumps permanently on the surface of the vibrating fluid forming a localized wave-particle system, reminding the behavior of a wave particle quantum system as suggested by de Broglie. In the simulations, the drop and the wave travel at nearly constant speed, as observed in experiments. In our study we made relevant simplifying assumptions, however we observe that the wave-drop coupling is easily obtained. This fact suggests that the phenomenon may occur in many contexts and opens the possibility to study the phenomenon in an extremely wide range of physical configurations.”
0.8548
FineWeb
Dynamics of Infected Snails and Mated Schistosoma Worms within the Human Host G. Besigye-Bafaki and L. S. Luboobi DOI : 10.3844/jmssp.2005.146.152 Journal of Mathematics and Statistics Volume 1, Issue 2 Male and female worms are independently distributed within a human host each with a Poisson probability distribution mass function. Mating takes place immediately when partners are available. It was found that the mated worm function is non-linear near the origin and becomes almost linear as the worms increase. They increase with increase in the worm load due to aggregation of worms. This also increases the infection of snails which are secondary hosts. On the analysis of the model, three equilibrium states were found, two of which were stable and one unstable. A stable endemic equilibrium within a community is very much undesirable. So the main objective of the model was to have the point O(0,0) as the only equilibrium point. This is a situation where there are no worms within the human host and the environment is free of infected snails. A critical point, above which the disease would be chronic and below which the disease would be eradicated, was found and analyzed. The parameters indicated that to achieve a disease free environment, the death rate of worms within the human host should be much greater than the cercariae that penetrate the human. Also the death rate of infected snails should be much higher than the contact rate between the miracidia and the snails. It was concluded that de-worming and killing of snails should be emphasized for disease control and educating the masses on the modes of disease transmission is quite necessary for prevention of the disease. © 2005 G. Besigye-Bafaki and L. S. Luboobi. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
0.8087
FineWeb
Does he call you just to hear your voice, tell you he's glad he made that choice to keep you as his one and only, so even when you're alone you don't feel lonely? Does he hold you close just because he can, making you glad that he's your man? Give kisses in random places, Just to see your random faces? Does he ever cater to your needs, breakfast in bed, and it's you he feeds? Roses just to see you smile, sweet nothings every once in a while? Does he rub you down after a long day, take you out to eat, willing to pay? First bite off his plate is yours, not just the entree, but every course? Does he fill his phone with pics of you, proudly proclaiming, "Yeah, that's my Boo!"? Candid shots of candid times, the first 3 of his "Fave 5"? Does he treat you the way you'd like? Because if not, Daddy will do you right.
0.9924
FineWeb
The mining technique of mountain top removal, and subsequent valley filling, a practice employed in the Appalachian Coal Belt Region of eastern Kentucky, is detrimental to headwater stream systems. The watershed values (i.e. water storage, carbon sequestration, nutrient cycling, habitat, etc.) provided by headwater stream systems are essentially lost once the valley is filled. The development of practical stream restoration and creation techniques for post-mined lands is needed to regain lost headwater stream system value. Important to note is that these techniques must be 1) all encompassing of the valuable functions of headwater stream systems and 2) economically feasible for the mining companies to implement for both currently constructed fills and for future fills. Fortunately, an opportunity to develop head-of-hollow fill stream restoration techniques is present at the University of Kentucky's Robinson Forest. Robinson Forest is an approximately 15,000-acre teaching, research and extension forest administered by the Department of Forestry at the University of Kentucky. Located in the rugged eastern portion of the Cumberland Plateau and largely isolated from human activities, Robinson Forest is unique in its diversity. During the 1990s, a section of Robinson Forest, including the proposed restoration site at Guy Cove, was mined for coal. As part of the mining process, a valley fill was created in Guy Cove, which impacted the headwater stream system in that valley. While there was significant environmental loss, a unique research and demonstration opportunity was created. Currently, the University of Kentucky has received funding from the Kentucky Department of Fish and Wildlife Resources’ In-Lieu-Free Program to conduct a restoration project at Guy Cove. The objectives of the Guy Cove Restoration Project are to: - Recreate headwater stream functions in an economically feasible manner. - Attenuate runoff events to reduce peak discharges and increase base flows. - Promote surface expression of water and enhance wetland treatment efficiency to improve water quality. - Improve habitat through the development of vernal ponds and a hardwood forest. - Establish an outdoor classroom for demonstrating design principles, construction techniques, and measurement of system performance. - Educate a myriad of stakeholders including consulting and mining engineers, land reclamation design professionals, the regulatory community, environmental advocacy groups, and students. The major components of the design included: - Modifications to the head-of-hollow fill geometry, - Compaction of the crown to control infiltration, - Creation of a channel, with a clay underliner, across the crown of the fill, - Use of loose dumped spoil to promote tree growth, - Development and/or enhancement of a variety of ephemeral channels utilizing different materials such as rock from the head-of-hollow fill, rock from natural channels, and woody debris, - Creation of vernal ponds for energy dissipation and habitat enhancement, and - Implementation of a treatment system along with modifications to an existing wetland to improve water quality.
0.9856
FineWeb
To identify, analyze, and prioritize business continuity requirements is crucial to initiate the business continuity management (BCM) program. Which of the following should be conducted first? A. Determining the scope of the BCM program B. Understanding the organization and its context C. Understanding the needs and expectations of stakeholders D. Develop project plans Kindly be reminded that the suggested answer is for your reference only. It doesn’t matter whether you have the right or wrong answer. What really matters is your reasoning process and justifications. My suggested answer is B. Understanding the organization and its context. Stakeholders are identified after the context is determined and analyzed. Their needs and expectations are solicited, collected, analyzed, and managed as requirements, and become the basis of the scope. Alternatives are then proposed to meet stakeholders’ requirements. A business case evaluates the alternatives, selects one as the solution, and supports a program or project to be sponsored and initiated. That said, a program or project is initiated with a charter supported by a business case that evaluates alternatives and determines the solution to meet stakeholders’ needs and expectations identified from the organization and its context, typically through internal and external analysis or environment scanning. The scope of the BCM program is approved, baselined, and documented in the program plan after the program is initiated. PMI OPM and Project Management A BLUEPRINT FOR YOUR SUCCESS IN CISSP My new book, The Effective CISSP: Security and Risk Management, helps CISSP aspirants build a solid conceptual security model. It is not only a tutorial for information security but also a study guide for the CISSP exam and an informative reference for security professionals.
0.7429
FineWeb
Every year during the fall, winter and early spring, we restrict visitation to our Children’s Hospital and our Neonatal Intensive Care Unit. The reason: to protect our patients from viruses like RSV (respiratory syncytial virus). You may have never heard of RSV, but there is a very good chance that you HAVE had it. For most people, RSV acts just like a common cold, but for the very young or immunocompromised, RSV can cause serious problems and may even require mechanical ventilation. Hospitals are places to get well, and it is our job to try to prevent additional illness while patients are in our care. Just a little cold or sniffle for you or an otherwise healthy sibling can turn into a very bad illness for a young, hospitalized child. This is why during the respiratory viral season we ask that:
- Visitors be 12 years of age or older to enter our Children’s Hospital and NICU
- You always wash your hands with soap and water or use alcohol hand gel upon entering and leaving a child’s room
- You refrain from visiting a child or infant in the hospital if you have fever, a cough or a runny nose
Things you may not know about RSV:
- It often presents like the common cold in otherwise healthy (older) children and adults
- Premature infants and very young children are at greater risk of getting a serious case of RSV
- People infected with RSV are contagious for 3 to 8 days
- There are shots high-risk babies can get to help prevent RSV, but they are not a vaccine
- Once you have RSV, doctors cannot cure the disease; they can only treat the symptoms
- RSV spreads rapidly among young children
- If a case of RSV is serious enough in a young child, it can even continue to cause respiratory issues as the child ages.
Now that you know RSV, help us protect young patients from getting the virus and the potentially serious complications.
0.9499
FineWeb
License for MariaDB 10.4.27 Release Notes This page is licensed under both of the following two licenses: - The Creative Commons Attribution/ShareAlike 3.0 Unported license (CC-BY-SA). - The Gnu FDL license (GFDL or FDL). Please seek proper legal advice if you are in any doubt about what you are and are not allowed to do with material released under these licenses.
0.7219
FineWeb
A freely available database for major league professional hockey. Covers the following leagues: NHA, NHL, PCHA, WCHL (known as the WHL in its final year), and WHA.
- Jan 3, 2007
- This is a public group.
- Attachments are permitted.
- Members cannot hide email address.
- Listed in Yahoo Groups directory.
- Membership does not require approval.
- Messages require approval.
- All members can post messages.
0.8502
FineWeb
The future of solar panel efficiency is expected to continue to improve as research and development in the field progress. Current research is focused on increasing the efficiency of solar cells, developing new materials for use in solar panels, and finding ways to reduce the cost of manufacturing solar panels. Some experts predict that solar panel efficiency could reach as high as 50% in the future, which would be a significant increase from current levels. Additionally, the use of concentrated solar power (CSP) and hybrid solar panels (bifacial, tracking, etc) is expected to become more common in the future, further increasing the overall efficiency of solar power systems. Latest Research In Solar Energy There are many ongoing research efforts in the field of solar energy, with new developments and discoveries being made regularly. Some of the latest research in solar energy includes: - Perovskite solar cells: Perovskite solar cells are a newer type of solar cell that has the potential to be more efficient and less expensive than traditional silicon solar cells. - Dye-sensitized solar cells: Dye-sensitized solar cells use a dye to absorb sunlight and convert it into electricity. They are less efficient than traditional solar cells but are less expensive to produce. - Organic solar cells: Organic solar cells are made from organic materials and they are flexible, lightweight, and can be produced at a lower cost than traditional solar cells. - Tandem solar cells: Tandem solar cells use multiple layers of solar cells to increase efficiency. They have the potential to convert more than 30% of the sunlight into electricity, which is significantly higher than traditional silicon solar cells. - Hybrid solar panels: Hybrid solar panels are a combination of different types of solar cells, such as silicon and perovskite cells, to increase the overall efficiency of the panel. - Concentrated Solar Power (CSP): Concentrated Solar Power (CSP) systems use mirrors to focus sunlight onto a receiver, which converts the heat into electricity. CSP systems are less efficient than traditional solar cells but have the potential to generate electricity during times when the sun is not shining. These are some of the current research that is taking place in the solar energy field. The knowledge on the subject is constantly evolving and new developments may have happened after the cut-off date of the model. Solar Energy Solutions For Residential Homes There are several solutions for using solar energy in residential homes, including: - Solar panels: The most common and well-known solution for residential solar energy is the installation of solar panels on the roof of a home. These panels convert sunlight into electricity, which can be used to power the home or sent back to the grid for a credit on the homeowner's utility bill. - Solar water heaters: These systems use solar energy to heat water for household use, such as for showers and laundry. They can be used in combination with traditional water heating systems for added efficiency. - Solar battery storage: As the cost of batteries continues to decrease, more homeowners are installing battery storage systems to store the electricity generated by their solar panels for use during non-sunlight hours. - Solar Attic Fans: Solar attic fans use solar energy to ventilate the attic and reduce heat build-up in the home. This can reduce the load on air conditioning systems and lower energy costs. 
- Solar pool heating: Solar pool heating systems use solar energy to heat swimming pools. This can extend the swimming season and reduce the need for electricity or gas to heat the pool. - Hybrid solar systems: Hybrid solar systems are a combination of different types of solar energy solutions, such as solar panels and a backup generator. This provides a reliable source of electricity even when sunlight is not available. Each of these solutions has its own set of benefits and drawbacks, and the best option for a particular home will depend on the homeowner's specific energy needs, budget, and location. Future Prospects Of Solar-Powered Cities The future prospects of solar-powered cities are very promising as more and more cities around the world are turning to solar energy as a way to reduce their dependence on fossil fuels and decrease their carbon footprint. - Increased solar panel installations: In the future, it is likely that we will see more and more solar panel installations in cities, both in residential homes and commercial buildings. This will help to increase the overall amount of electricity generated by solar energy in cities. - Development of smart cities: Smart cities are urban areas that use technology to improve the quality of life for residents and reduce their environmental impact. In a smart solar-powered city, the energy demand and supply will be monitored, and distributed in a more efficient way. Microgrids are local energy systems that can function independently from the traditional power grid. They are becoming more common in cities as a way to increase energy security and reduce dependence on fossil fuels. In a solar-powered city, the microgrid would be powered primarily by solar energy. - Electric vehicles: Electric vehicles are becoming more popular in cities, and as the number of electric vehicles on the road increases, the demand for solar-generated electricity will also increase. - Building integrated photovoltaics (BIPV): Building integrated photovoltaics (BIPV) is a type of solar panel that is integrated into the building, rather than being added on as an afterthought. BIPV has the potential to greatly increase the amount of solar energy generated in cities. - Concentrated solar power (CSP): CSP is a technology that uses mirrors to reflect and concentrate sunlight onto a receiver, which converts the heat into electricity. This technology is more appropriate for large-scale power generation and can be a great solution for solar-powered cities. However, it's worth noting that creating a solar-powered city requires a significant investment in infrastructure and technology, as well as a change in the mindset of the citizens. The implementation of these solutions and the level of success varies from one city to another depending on the factors such as government policies, investment, and public awareness. Solar Energy Market Growth And Trends The solar energy market has experienced significant growth in recent years and is expected to continue to grow in the future. Some of the key trends and drivers of this growth include: - Declining costs: The cost of solar energy has been decreasing in recent years due to advances in technology and economies of scale. As the cost of solar energy continues to decrease, it is becoming more competitive with other forms of energy, making it a more attractive option for both residential and commercial customers. 
- Government policies: Government policies, such as tax incentives and renewable energy mandates, have played a significant role in driving the growth of the solar energy market. These policies have helped to create a more favorable environment for solar energy development and deployment. - Increasing demand: As concerns about climate change and energy security continue to grow, the demand for solar energy is also increasing. This is especially true in developing countries, where the need for access to electricity is increasing as the population grows. - Innovations in technology: Research and development in solar energy technology have led to new developments, such as perovskite solar cells, which have the potential to be more efficient and less expensive than traditional silicon solar cells. - Battery storage: The decrease in battery storage costs has made it more viable to store the electricity generated by solar panels, this allows for more efficient use of solar energy and increases the overall capacity factor of the solar installation. - Grid Parity: Many countries are reaching grid parity, meaning that the cost of solar energy is becoming comparable to the cost of electricity from the grid. This is making solar energy a more attractive option for many customers. - Utility-scale solar: Utility-scale solar projects are becoming increasingly popular, as they are able to generate large amounts of electricity at a lower cost than smaller, distributed projects. The solar energy market is expected to continue to grow in the future as more countries adopt policies to promote renewable energy and as the cost of solar energy continues to decrease. However, the growth of the market can vary depending on the factors such as government policies and regulations, economic conditions, and technological advancements. Advancements In Solar Energy Systems For Urban Areas There have been several advancements in solar energy systems for urban areas in recent years, including: - Building Integrated Photovoltaics (BIPV): BIPV is a type of solar panel that is integrated into the building, rather than being added on as an afterthought. This type of system can increase the amount of solar energy generated in urban areas, as it allows for more surface area to be used for solar panel installations. - Urban rooftop solar: Rooftop solar panels have become a popular choice for urban areas, as they make use of the limited space available in buildings. Advances in technology have made it possible to install solar panels on a variety of roof types, including flat roofs and metal roofs. - Solar canopy and shading systems: Solar canopies and shading systems are a great solution for urban areas as they provide shade for pedestrians and vehicles while also generating electricity. These systems can be used in parking lots, bus stops, and other outdoor spaces. - Solar-powered street lights: Many cities are replacing traditional street lights with solar-powered street lights. This is a cost-effective solution as it eliminates the need for trenching and underground wiring, and reduces energy consumption. - Solar walls: Solar walls are a type of solar panel that is installed on the walls of buildings, rather than on the roof. They can be used to generate electricity and also provide shading and insulation. - Floating solar: Floating solar systems are installed on bodies of water, such as lakes, reservoirs or canals. 
These systems can be a great solution for urban areas as they make use of otherwise unused space and can also help to reduce water evaporation. - Community solar: Community solar projects allow multiple customers to share a single solar installation. This can be a great solution for urban areas, as it allows residents who may not have the ability to install solar panels on their own property to still benefit from solar energy. These are some of the advancements in solar energy systems that are being used in urban areas. As the technology continues to evolve, new solutions may be developed and implemented in the future, to make solar energy more accessible, efficient, and cost-effective for urban areas. The future of solar energy is very promising as the technology continues to improve, costs continue to decrease, and demand for clean energy increases. Advances in solar cell technology, such as perovskite solar cells, have the potential to increase efficiency and decrease costs. Additionally, the integration of other technologies like battery storage, microgrids, and smart cities can further improve the overall performance and reliability of solar energy systems. The government's policies and incentives, along with the growth of the electric vehicles market and building integrated photovoltaics (BIPV) will also play a crucial role in the growth of the solar energy market. Furthermore, the development of solar-powered cities, floating solar systems, and community solar projects are some of the advancements that can make solar energy more accessible, efficient, and cost-effective for urban areas. However, the implementation of these solutions and the level of success varies from one place to another depending on the factors such as government policies, investment, and public awareness.
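To put the efficiency figures above in perspective, here is a minimal back-of-the-envelope sketch. The panel area, annual irradiation, and performance ratio used below are assumed example values rather than figures from this article; only the efficiency levels echo the percentages discussed earlier.

```python
# Back-of-the-envelope comparison of panel efficiencies. All inputs are
# assumed example values (not figures from this article): a 1.7 m^2 panel,
# 1,700 kWh/m^2 of sunlight per year, and a 0.75 performance ratio to
# account for real-world losses (temperature, wiring, inverter).

PANEL_AREA_M2 = 1.7
ANNUAL_IRRADIATION_KWH_PER_M2 = 1700
PERFORMANCE_RATIO = 0.75

def annual_output_kwh(efficiency):
    """Estimated yearly energy from one panel at a given cell efficiency."""
    return PANEL_AREA_M2 * ANNUAL_IRRADIATION_KWH_PER_M2 * efficiency * PERFORMANCE_RATIO

for eff in (0.20, 0.30, 0.50):  # roughly today's panels, tandem-cell targets, the speculative 50% ceiling
    print(f"{eff:.0%} efficiency: about {annual_output_kwh(eff):.0f} kWh per panel per year")
```

Under these assumptions, a 50% efficient panel would deliver roughly two and a half times the annual energy of a typical ~20% panel from the same roof area, which is why efficiency research attracts so much attention.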
0.9931
FineWeb
The stunning and highly controversial find made by marine treasure hunters using side-scanning sonar to detect shipwrecks in the Baltic Sea has finally been identified as a submerged monumental construction from the Paleolithic era. The giant circular seafloor promontory measuring ~60 m in diameter is actually a terraced monument built by the highly advanced Atlantean civilization over 14,000 years ago. Co-discoverer and Ocean X team leader Dennis Aasberg describes just a few of the geometric features presented by the gargantuan disc-shaped temple rising above the sea floor, likening it to concrete. Prohibitive conditions severely limit filming of the ancient monumental structure, especially rough seas and the very poor visibility of <1 m near the bottom. Animated digital terrain models allow a clearer perspective of the massive proportions and complex geometric configuration of the submerged Atlantean monument (above). The greatest hindrance to seafloor site investigation is an intense electromagnetic vortex that perpetually interferes with all types of electrical equipment situated on or above the ancient structure – in the vertical water column, onboard ships at the sea surface, and even affecting low-flying airplanes.
0.6698
FineWeb
Ellen earned a BA in Engineering Science and BE in Biomedical Engineering from Dartmouth College in 2014. She is a PhD student in Dr. George Truskey's Lab. - Email Address: [email protected] Investigating the Effects of Oxidative Stress on the Circulatory System Using TEBVs The vascular system's response to stress, like oxidation or deformation, mitigates numerous vascular pathologies, atherosclerosis primary among them. A typical blood vessel consists of a layer of endothelial cells, called the endothelium, surrounded by a layer of smooth muscle cells. The endothelium regulates the transport of molecules and fluids into the tissue, while the smooth muscle layer regulates the diameter of the blood vessel. Oxidative stress can arise when dysfunctional proteins or immune cells release reactive oxygen into the blood stream or vessel wall. This primarily affects the endothelial cell layer, causing it to adopt a senescent, or aged, phenotype. Endothelial senescence leads to abnormal smooth muscle cell proliferation, reduced vasoreactivity in the presence of chemical regulators, and correlates with higher atherosclerosis risk. The Truskey lab has recently developed tissue-engineered blood vessels (TEBVs), tubular collagen constructs that, when seeded with human endothelial cells and fibroblasts, recreate the three-dimensional structure and properties of an arteriole in vitro. Most platforms for studying the vascular system rely on two-dimensional co-cultures or animal models. TEBVs show greater fidelity to native tissue than two-dimensional systems, and can be tested with the same functional assays used clinically to evaluate vascular health. The central hypothesis of this research is that TEBVs exposed to oxidative stress will have impaired function and increased risk of disease development. The effects of oxidative stress on the vascular system will be explored by characterizing stress-induced changes in (1) vasoreactivity, (2) vascular wound healing, and (3) atherosclerosis risk. Oxidative stress can be modeled in vitro by chronic exposure to hydrogen peroxide. 1: Changes in vasoreactivity will be characterized by evaluating changes in vessel diameter in the presence of vasoconstrictors and vasodilators. qRT-PCR will be used to quantify the changes in endothelial cell gene expression that cause the observed changes in vessel function. 2: To simulate vascular injury, TEBVs will either be exposed to the toxin theophylline or subjected to a scratch injury. Recovery from vascular injury will be evaluated by examining endothelial cell migration into the wound site and recovery of vasoreactivity post-injury. 3: To probe atherosclerosis risk after oxidative stress, TEBVs will be exposed to three atherogenic stimuli: oxidized low-density lipoprotein (oxLDL), activated monocytes, and the soluble protein TNFα. AWARDS/HONORS/FELLOWSHIPS: Dean's Graduate Research Fellowship 2014-2016 NSF GRFP Honorable Mention 2015 Center for Biomolecular Tissue Engineering (CBTE) Fellow 2015-2017
0.8998
FineWeb
A Station Eight Fan Web Site 1.You've said that there was someone prior to the archmage who joined together the grimorum, phoenix gate and eye of odin so when did this happen? 2.How did he integrate the grimorum into it? I know he didn't swallow it. 3.What did he do with the power? 1. Not saying. 2. That was only necessary for our Archmage because he was entering Avalon. 3. Entertaining stuff, I tell you. How did the Archmage get the grimorum? How did he become Malcolm's advisor? What did he do before becoming Malcolm's advisor? Why wasn't the Archmage burnt for witchcraft? 1-3. Not saying now. All part of the Dark Ages tapestry. 4. He was too useful for too long. How did the archmage acquire the grimorum arcanorum? In a very entertaining story. Today I watched the "Avalon" episodes and somehow it got me thinking about what we saw in "Long Way Till Morning" and "Shadows of the Past". *cracks knuckles* Okay here goes... In "Long Way Till Morning" during one of the flashback scenes; Demona, Goliath, and Hudson enter the Archmage's cave and pass this wall with a bunch of carvings on it. From what I could tell, these carvings looked ancient and I began to wonder the following questions: 1: Did the Archmage somehow create these? 2: If no, who did then? Now, there is this one carving we get a close up of that shows what looks like the Archmage standing over some gargoyles. So heres my next question: 3: What was the significance of that carving? Now in "Shadows of the Past" we see this huge structure underneath the Archmage's cave. It looks like it has runes etched into it or some strange ancient writing. We know that it has a magical property because Hakon tells us when he is explaining how their ghostly forms could exist. At first I thought of the Archmage some how building it, but then again that leaves me with a bunch of questions. Anyway here is a question pertaining to that: 4: If not the Archmage, which is obvious by now, then who built that structure? Also, I read the Lost Race archive and you stated that there were some artificats of the Lost Race left behind. So.... 5: Is question (1) a Lost Race artifact? 6: Is question (4) a Lost Race artifact? 2. I'm not telling right now. 3. That's subject to interpretation. 4. See answer to question 2. 5. I'm neither confirming or denying this. i just watched "Awakening 1 and 2" and wow, i love these eps, particularly "Awakening 1" its beautiful! anyway, i was wondering some things about the sleep spell: 1. would the sleep spell work on a human? could a human be put to sleep for a thousand years? 2. what is the spell's peculiar attachment with the castle? the spell says "until the castle rises above the clouds", but what if the Magus tried this spell on some rogue gargoyles? would they still sleep til Castle Wyvern rose above the clouds? what if the gargs live in another sort of structure, like the Mayan Pyramid? would the spell still work? i just don't quite understand the spells need for a link to the castle. could Magus have changed the spell to say "sleep until the sky burns or whatever"? 3. was the sleep spell in the Grimorum when Magus first acquiered it? was that spell in the Grimorum there when the Archmage first acquiered it? i guess the Grimorum being transported through time by the Phoinex gate over 900 years really helped it to be presearved, eh? 1. The spell would work on humans. But we age while we sleep. So we'd die long before the castle rose above the clouds. 2. Open ended spells require more power, more energy. 
Setting a limit (no matter how unreasonable the limit may seem) makes casting the spell easier. Certain spells were written or adapted to certain limits. The Magus may also have adapted the spell to his needs. But basically, it was the equivalent of "til Kingdom come". 3. I imagine so. Was the archemage influenced by someone, apart from his future self, to take the three talismans and his plans to take over avalon and the world? sorry if this is a weird question, but I was wondering about that for a long time He had wanted the three items of power for some time because he had read about their joint use before. The Avalon take-over seemed the idea of his future self. But of course that future self only learned it from his future self. So one might ask, who came up with the actual idea. Was it born of the time stream, whole? 1a.In some cultures knowing the name of a person gives you power over it so is this true in the gargoyles universe? 1b.If so is that why the Archmage and Magus are called by their titles because of their names? 1. It can be true. It certainly can't hurt. b. Perhaps... ;) is the Magus related to the Archmage? Not by blood. What did the Archmage do to get charged with attempted treason? 1) Why did the Weird Sisters spend so much time and effort making Demona and MacBeth their pawns, and keeping them alive for nine-hundred and something years? 2) Why did the Archmage want those two in particular? They seam pretty powerfull but there have got to be people of equal power in the 20th century (even people who would be willing o go the Avalon) 3) If they had over 900 years, why didn't the Weird Sisters get afew more pawns (would have been a good idea, considering ththier attack on Avalon failed) 1. Partially, because the Archmage asked them to. And for other reasons, I'm not yet revealing. 2. I don't think the Archmage fully knew the answer (or thought to care). Demona, he thought he was punishing for an earlier ("Vows") betrayal. But even that argument is specious. And he didn't know Macbeth from Adam. 3. The Archmage didn't ask for any others. That restricted them, vis-a-vis Oberon's Law.
0.8892
FineWeb
At the colloquium, the speakers will be: Manuel Cruz, philosopher and writer; Xavier Pedrol, professor of Philosophy of Law; Alicia García Ruiz, philosopher and translator; Francisco Fernández Buey, professor of Philosophy; and César de Vicente Hernando, historian. (gamoia) Dr. Martin Luther King, Jr. was the son of Reverend Martin Luther King; the father (Rev. King) was the author of Daddy King: An Autobiography (1980). Please preserve the distinction between these authors. Martin Luther King is currently considered a "single author." If one or more works are by distinct, homonymous authors, go ahead and split the author. Martin Luther King is composed of 1 name.
0.7759
FineWeb
Students and teachers face problems in the teaching-learning processes of matrix algebra, due to the level of abstraction required, the difficulty of calculation and the way in which the contents are presented. Problem-based Learning (PBL) arises as a solution to this problem, as it contextualizes the contents in everyday life, allows students to actively build that knowledge and contributes to the development of skills. The proposal describes a didactic sequence based on PBL, which uses cooperative techniques and MATLAB, as instruments that facilitate the resolution of problems close to the student experience. The features of the Moodle platform are used to support the face-to-face educational process. The perception of students, in relation to the activity shows that 83% believe that it contributed to the understanding of the topics covered and 79% think that it allowed them to develop their creativity and capacity for expression. |Number of pages||6| |Journal||International Journal of Advanced Computer Science and Applications| |State||Published - 2020| Bibliographical noteFunding Information: We want to express the authors' our sincere gratitude to the National University of San Agust?n de Arequipa for the support received for the realization of the proposal and we hope that the results will benefit the institution. © 2020, Science and Information Organization. - Cooperative techniques - Matrix algebra - Problem-based learning
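As an illustration of the style of task the sequence promotes, the sketch below poses a small everyday pricing problem as a linear system. It is an invented example written in Python/NumPy for convenience, not an activity or MATLAB code taken from the study.

```python
# Invented example in Python/NumPy (the study itself uses MATLAB): a small
# "everyday life" pricing problem expressed as a linear system, the kind of
# contextualized task that problem-based learning favors.
import numpy as np

# Three cafeteria orders with known totals; unknowns are the prices of a
# sandwich, a juice, and a fruit cup.
A = np.array([
    [2, 1, 1],   # order 1: 2 sandwiches, 1 juice, 1 fruit cup
    [1, 3, 2],   # order 2: 1 sandwich, 3 juices, 2 fruit cups
    [3, 2, 1],   # order 3: 3 sandwiches, 2 juices, 1 fruit cup
], dtype=float)
b = np.array([7.5, 9.0, 11.5])   # cost of each order

prices = np.linalg.solve(A, b)   # solve A x = b
print("sandwich, juice, fruit cup prices:", prices)   # -> [2.5, 1.5, 1.0]
print("check:", A @ prices)      # reproduces the order totals
```

Working backwards from the totals to the prices gives students a concrete reason to set up and solve A x = b, which is the pedagogical point of contextualizing matrix algebra this way.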
0.8809
FineWeb
The Environmental Radiation Pollution in Urban Buildings The British University in Dubai (BUiD) Indoor environmental quality is essential in urban buildings. Health of occupants depends greatly on factors related to air quality and space maintenance. However, a new form of pollution has emerged in developed nations due to the excessive use of electric and electronic products. This pollution is caused by the unavoidable emanation of electromagnetic fields in the free air and space penetrating the living organisms and causing adverse health effects over the long term. The design and construction studies do not sufficiently account for the radiation-intoxication issues within the indoor environments due to the lack of awareness on the importance of the electromagnetic radiation field. This dissertation reviews the latest literature on the indoor environmental radiation and its health implications. The research experimental study gives an overview on simple procedures to measure, identify and discuss the RF radiation in two residential apartments. Results of this study are compared to the international standards on exposure limits and addressed accordingly in the analysis. The findings discussion and research outcomes should urge project planners to initiate guidelines and recommendations in order to mitigate radio-frequency fields in indoor environments throughout the design phases. The end-users should also be capable of using measurement methodologies to perform a preliminary assessment of the radiation exposure level at their own premises. environmental radiation pollution, urban buildings, indoor environmental quality
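As a rough illustration of the measurement comparison described above, the sketch below converts an electric-field reading into a far-field power density and expresses it as a fraction of a reference exposure level. The room readings and the reference level are assumed placeholders, not measurements or limits reported in the dissertation.

```python
# Illustrative only: converts a measured RF electric-field strength (V/m)
# into a far-field power density and compares it to a reference exposure
# level. The room readings and the 10 W/m^2 reference value below are
# assumed placeholders, not measurements or limits from the dissertation.

FREE_SPACE_IMPEDANCE_OHM = 377.0      # far-field impedance of free space
REFERENCE_LEVEL_W_PER_M2 = 10.0       # assumed example general-public reference level

def power_density_w_per_m2(e_field_v_per_m):
    """Far-field power density S = E^2 / Z0."""
    return e_field_v_per_m ** 2 / FREE_SPACE_IMPEDANCE_OHM

readings_v_per_m = {"living room": 1.2, "bedroom": 0.4}   # assumed example readings

for room, e_field in readings_v_per_m.items():
    s = power_density_w_per_m2(e_field)
    share = s / REFERENCE_LEVEL_W_PER_M2
    print(f"{room}: E = {e_field} V/m -> S = {s * 1000:.2f} mW/m^2 ({share:.3%} of reference)")
```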
0.9743
FineWeb
At Gastrointestinal & Liver Specialists of Tidewater, we know that patients and families want to know as much as they can about the GI system and disorders that affect their daily lives. Refer to the list below to find the information that is most helpful to you. If you still have questions, please contact us through our website. GERD (Gastroesophageal Reflux Disease) What is GERD (Gastroesophageal Reflux Disease)? To understand GERD, we need to understand how our digestive system works. Normally when we eat, our food is chewed into small pieces that are easy to swallow. As we swallow the food it travels down our esophagus (tube between our mouth and stomach) to the stomach. Near the top of our esophagus is an area of muscle called the upper esophageal sphincter (UES). When we swallow, the UES relaxes and allows food to pass into the esophagus. The food then travels down the esophagus to another area of specialized muscle tissue called the lower esophageal sphincter (LES). The LES is located at the junction of the esophagus and the stomach. The job of the LES is to act as a one-way valve, allowing food to enter the stomach and prevent it from coming back up into the esophagus. GERD occurs when the LES is too relaxed and does not prevent stomach fluids (stomach acid) and food from backing up into the esophagus. The lining of our esophagus is not protected from stomach acid, unlike the stomach’s lining. The acid contact with the esophagus causes inflammation, and may cause irritation of the esophageal tissue. This leads to the symptoms of GERD. What are GERD symptoms? - Heartburn–this is a burning sensation felt under the breastbone. The frequency of this varies from person to person. It can be 1-2 times a month or even a daily occurrence. - Acid regurgitation–this is the sensation that acid and food contents within the stomach is backing up into the esophagus and at times, even into the mouth, causing a bitter or sour taste. - Hoarse or scratchy voice, coughing, some types of asthma, sinus problems, and dental erosions–these symptoms can be caused by the stomach acid backing up into the esophagus and traveling into the breathing tube causing irritation of the voice box (larynx) or vocal cords which can lead to changes in voice. The acid can also travel further down our airway (trachea) and cause spasms of the airways, which can cause asthma symptoms such as wheezing or coughing. The acid can even travel up into the sinuses, which can lead to sinus problems. The stomach acid can also break down tooth enamel. - Difficulty swallowing or food becoming stuck in the esophagus–this is often caused by stomach acid backing up into the esophagus. If this occurs frequently and over long periods of time, it can cause irritation and inflammation of the tissue and lead to a narrowing of the esophagus, making the passage of food difficult. How common is GERD? Over 60 million Americans have GERD. About one fourth of these individuals have symptoms every day. Factors contributing to the incidence of GERD include pregnancy, being overweight and older age. How is GERD diagnosed? There are a number of tests that are used to diagnose GERD. They include the following: - Upper endoscopy or EGD (esophagogastroduodenoscopy): is a procedure where a small lighted tube is passed through your mouth into the esophagus, stomach and first portion of the small intestine. This test allows the doctor to see the lining of your upper GI tract. Sometimes biopsies (tissue samples) may be taken. 
- Barium swallow, and or an Upper GI x-ray: involves drinking barium, which coats the esophagus, stomach and first portion of the small intestine. X-rays are then taken to show the lining and structures of these areas. Sometimes the test will involve special x-ray video that records the actions of swallowing. - A small measuring device (Bravo capsule) can be placed (via EGD or through the mouth) in the lower esophagus to record acid events over a 48 hour period. The device has special sensors that measure how often you have acid backing up into your esophagus and how long it stays there. Alternatively, a 24-hour pH monitoring test is also available. During the 24-hour test, a thin tube is placed through the nose and into the esophagus. The tube remains in place for 24 hours and the information is recorded on a small computer monitor. - Manometry: is a test where a thin tube is placed through the nose into the esophagus. This tube has special sensors to measure pressure in the esophagus. This test is used to evaluate your swallows. It can show the strength and the coordination of your swallows. The tube is left in place for a short period of time while you are instructed to swallow, drink, and/or cough. How is GERD treated? Initial treatment of GERD involves lifestyle changes. Other treatments include: medications, endoscopic treatment of the esophagus, and surgery. - Avoid foods that cause symptoms, that cause the lower esophageal sphincter to relax or that are irritating to the GI tract and may cause an increase in acid production. These foods include: caffeinated drinks (coffee, some teas, colas, and other sodas high in caffeine) chocolate, tomato based products (spaghetti, lasagna, pizza, and chili), spicy foods, citrus, garlic, onions, peppers, fatty foods, and mint/peppermint. - Avoid using tobacco products: this means no smoking or chewing tobacco. - If you are over weight, weight loss is encouraged. - Avoid eating for 3 hours before you go to bed or lie down. - Avoid wearing clothing that is tight around your abdomen. - Eat smaller meals. - Avoid vigorous exercise within 2 hours after eating. - Avoid alcohol. - Avoid the use of aspirin and other non-steroidal anti-inflammatory medicines like ibuprofen. - Elevate the head of your bed by using 4-6 inch blocks to help prevent acid from rising up at night. - Don’t bend over after eating if you are prone to regurgitation after meals. - Antacids: There are liquid and tablet types of antacids used to neutralize the acid in your stomach. They offer relief relatively quickly, but don’t last long. Some antacids can cause diarrhea while others may cause constipation. Let your doctor know if this becomes a problem. If you are using these medications frequently, you should speak to your doctor about alternative therapies. - H2 Blockers: These include medications like cimetidine (Tagamet), ranitidine (Zantac), famotidine (Pepcid), and nizatidine (Axid). These medications come in both over the counter and prescription strengths. These drugs work by blocking some of the acid production in our stomach. It is recommended that if you need these medications for longer then a few weeks you should see your doctor. - Proton Pump Inhibitors: These include omeprazole (Prilosec), lansoprazole (Prevacid), dexlansoprazole (Kapidex), pantoprazole (Protonix), rabeprazole (Aciphex) and sodium bicarbonate (Zegerid) and esomeprazole (Nexium). These medications work by stopping the production of acid by different types of cells in the stomach that make acid. 
If medications fail to resolve symptoms, endoscopic or surgical interventions may be necessary. New endoscopic treatment options are available to control acid reflux as an alternative to chronic medications or to avoid surgery. These options can be discussed with your gastroenterologist. This option is reserved for when the above measures aren’t working. It can also be used as an alternative to chronic medication therapy. The surgery is called “Nissen Fundoplication”. This is a surgical procedure where the upper portion of the stomach is wrapped around the lower esophageal sphincter area to prevent reflux. Are there any complications from GERD? The main gastrointestinal complications related to GERD include Barrett’s esophagus and swallowing problems. - Barrett’s esophagus is a precancerous condition where the lining of the esophagus changes. These changes can lead to cancer. It is recommended that individuals with Barrett’s esophagus be followed with regular endoscopies for screening. Barrett’s esophagus is found in only 10% of individuals with GERD, of those that have Barrett’s esophagus, only 1% will develop esophageal cancer. - Swallowing problems are often described as the sensation that food is getting stuck or is slow to pass through the esophagus. This may occur with liquids or solid foods. Symptoms can also include painful swallowing. Trouble swallowing usually happens as the result of acid backing up into the esophagus and causing irritation. Over time, a narrowing of the esophagus can make it difficult for food to pass. Let your doctor know if this is happening to you as correcting the problem can be done with the use of an Upper Endoscopy (EGD). What kind of follow up will I need? If you have GERD with no other problems, you can follow with your primary physician. Most, if not all, primary physicians are comfortable treating individuals who have GERD once the diagnosis has been made. If you wish to follow with our clinic, we will want to see you once a year to evaluate how you are doing, discuss any new treatment options as well as review lifestyle modifications. It is at this time that you will be given an updated prescription. If you have Barrett’s esophagus, we will want to see you for follow up on a yearly basis and will want to do an upper endoscopy every 3 years to evaluate for the risk of cancer. If you are having swallowing problems, we ask that you call our office and speak with the patient coordinator that works with your gastroenterologist. If you have any questions, please call our office at 612-871-1145.
0.9352
FineWeb
Good morning all, As I noted on Tuesday, we will not meet for lecture today. We have only one textbook chapter remaining, which we will save for our first meeting after the Thanksgiving break. Instead of lecture today, I'll offer you a reading instead, one that encompasses several of our recent topics. In recent classes, we have considered the degree of cooperation and conflict between reproductive partners, as well as the signaling that occurs to influence each other. When sexual investment is strongly different between the sexes, we expect that sexual selection can drive exaggerated displays, enhance female 'choosiness' of mates, and promote unequal reproductive tactics. But, curiously, sexual displays also are common within pair-bonded species, in which males and females have equal (or nearly equal) roles and should be in cooperative agreement over parental investment, rather than in conflict. An explanation for this paradox has been lacking. A very recent paper sheds some light on this problem, and present a mathematical model which supports the idea that inter-sexual signaling displays which originate to exploit a sensory bias in the signal receiver can evolve into a cooperative exchange, suggesting that sexual conflict can morph into sexual cooperation. This has significant implications for parental investment and care, as we've noted that the degree of sexual conflict is one of the primary drivers of sexual dimorphism in parental investment. This paper was published in the Proceedings of the National Academy of Science (PNAS), our national body of 'science experts'. Election to the Academy is reserved for the top thinkers in one's field, and is a prestigious badge of honor. Their Proceedings journal publishes papers submitted by Academy members, as well as those that Academy members recommend for publication. If you access this link from an IUP campus computer, you can obtain access to the full article and its associated material, through IUP library subscription. If you try to access the article from off-campus, you will be blocked. I've attached the PDF of the article, just in case. The math of the authors' model is well beyond us. If we accept their model as being sound, it suggests that, instead of females being 'lured' into over-investment in their offspring by male displays, females instead evolve to require (or at least benefit from) the male display in terms of stimulating female condition/motivation to a level of investment which is optimal for the female (but less than that which is maximally optimal for the male). This causes males to remain invested in the pair-bond and their role in parental investment, and reinforces the pair-bond between mating partners. In a sense, the females are now requiring the males to remain present, remain attentive, and to offer displays, in order to ensure that their female partner is providing enough investment of her own. As do many science journal, PNAS occasionally offers peer commentary on papers which are especially important, or especially difficult (this one is perhaps both). The associated commentary on this paper (link below, PDF attached), describes this result in the context of dove mating pairs, for which male stimulation of female reproductive condition is a well-understood and very necessary component to the reproductive cycle. 
Interestingly, as the commentary notes, the capitulation of this male-female exchange may ultimately be female self-stimulation of reproductive condition, a result which has been suggested to occur in doves. That may be the current evolutionary end-point to this exchange, but it also has the potential to serve as a type of "escape clause", which males may now be selected to exploit. It would be interesting to see how much variation exists in this end-point, and whether males can benefit from females which perform more of their own reproductive stimulation. I hope that you find this article interesting - it represents a nice theoretical treatment of a difficult (= interesting) problem, and should set the stage for experimental work to come. I hope that you all have an excellent Thanksgiving break - please be safe, rest, relax, eat, and enjoy. See you early in December for our last chapter. As I mentioned in lecture on Tuesday, we are caught-up with our lecture material and will not meet for lecture on Thursday. Instead, I am offering a reading (attached) that I had described earlier, along with some explanation of one of the more important points described in this study. Last weekend, I sent to our class a description (below) of a recent publication examining behavioral-genetic associations in domestic dogs. I hadn't yet seen the original research when I wrote to you last weekend, but forwarded a news report about it that came from the home institution of the senior author on the study. I described in my message to you that some of the behavioral-genetic associations the authors reported were as high as 0.7, near to the limit of those ever reported for narrow-sense heritabilities of behavior. Over the weekend, I requested a copy of the actual research paper from its senior author, and, upon seeing it, wanted to offer some interpretation. Early in the term, in chapter 03, we discussed trait variation within species, and we noted (using the canine example) that artificial selection has created an abnormally high amount of trait variation within the single species of domestic dog Canis lupus familiaris. In our next lecture (Chapter 04), we discussed behavioral genetics and narrow-sense estimates of heritability, describing the upper limit of such associations as around 0.7. We saw in that same chapter (as well as in later chapters, including Chapter 10) some narrow-sense behavioral-genetic heritability estimates that all were < 0.3, which is typical. In this new report, the authors report behavioral-genetic associations that are much higher than those typically reported. How can this be? It stems from the artificial (and unusually large) degree of trait variation within this domestic species. Typically, when one examines associations between traits within a natural (e.g., not artificially-selected) species, we expect some small, defined range of trait values, with correlation (association) between traits of some relatively low magnitude. Here in my Figure 1, I show the trait values and the within-species bivariate trait association for two (hypothetical) different species, such as a fox and a wolf. Within either species, there is some defined range of values for trait X (such as body length) and some defined range of values for trait Y (such as body mass). In my hypothetical example, these traits are correlated somewhat weakly within species A, and uncorrelated within species B.
Notice that the two species do not overlap in trait values - a small wolf is always larger than a large fox. If domestic dogs were a natural species, chances are good that their trait values would fall somewhere in between these two species, perhaps closer to the 'wolf' end of the spectrum. Nonetheless, they would be expected to occupy only a small portion of the overall trait ranges. Now, consider what the authors have done in their analysis. They have considered all domestic dog breeds to be of the same species, a fact that is technically true but which ignores the other fact that their range of trait values is anything but normal. They have analyzed behavioral-genetic associations across breeds within this single species, but here the individual breeds represent much more trait variation than natural species might, as size variation across domestic dog breeds is much greater than size variation across canine species in the wild. When associations are evaluated across multiple species (such as in my hypothetical Figure 2), the associations are often of higher magnitude. In the current study, analysis across the very artificially-distributed dog breeds behaves in the same way, resulting in behavioral-genetic associations much higher than those reported within single, natural species. 13/14 of their within-breed estimates (their Figure 1) are <0.3, just as one would expect. This study is quite interesting, and represents the application of some very modern techniques (canine SNP chip, anyone?) to this interesting question of the heritability of behaviors. It also serves as a very useful reminder of - the power of artificial selection - modern dog breeds are estimated to have been developed only over the last 300-500 years. For a natural species to evolve as much trait variation in this short time is unheard of. - the danger of reliance upon secondary news sources - the original news story that I sent to you accurately describes the gist of this research study, and highlights the very strong associations found. But, it also leaves out enough detail that it is not possible to immediately assess why the associations are of such magnitude. - the importance of proper modeling of evolutionary constraint - as shown in my hypothetical example Figure 2, trait associations across species can be artificially inflated if simple, linear techniques are used instead of methods that account for shared evolutionary history, such as independent contrasts analysis or nested ANOVA. The authors do have a phylogenetic model for their dog breeds; I am not schooled well-enough in the jargon of their analytical models to know if they have fully controlled for relatedness. Whether they have, or have not, these types of broad comparisons should always be examined with an eye for that type of concern. - the imperfection of any one study - this is a research report describing one body of work on this topic, and I'm certain we could find other, similar/related studies. Is this study perfect? Certainly not. Is it still interesting, and useful? Absolutely. Any one research study can only advance our understanding incrementally. It's too easy, and too common, to dismiss work outright for containing flaws - it's more important to ask, given such flaws, is there anything that we can learn? The latter approach is more fruitful, and provides a much better return on one's investment of time and effort.
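If you would like to see this aggregation effect for yourselves, the short simulation sketch below uses made-up numbers (not values from the dog study) to show how two traits that are only weakly correlated within each breed can appear strongly correlated once breeds with different averages are pooled:

```python
# Toy simulation with made-up numbers (not data from the study): within each
# "breed", trait X and trait Y are only weakly correlated, but pooling breeds
# whose averages differ produces a much larger across-group correlation.
import numpy as np

rng = np.random.default_rng(0)

breed_means = [(20, 5), (40, 12), (60, 20), (80, 30)]  # (mean X, mean Y) per breed
within_r, pooled_x, pooled_y = [], [], []

for mean_x, mean_y in breed_means:
    x = rng.normal(mean_x, 3.0, size=200)
    # Y tracks X only weakly within a breed (true within-breed r is ~0.2).
    y = mean_y + 0.2 * (x - mean_x) + rng.normal(0, 3.0, size=200)
    within_r.append(np.corrcoef(x, y)[0, 1])
    pooled_x.append(x)
    pooled_y.append(y)

pooled_r = np.corrcoef(np.concatenate(pooled_x), np.concatenate(pooled_y))[0, 1]
print("within-breed correlations:", np.round(within_r, 2))   # each ~0.2
print("pooled across breeds:     ", round(pooled_r, 2))      # much closer to 1
```

Running it a few times with different breed means makes the point quickly: the pooled correlation tracks the spread of the breed averages far more than it tracks the within-breed relationship.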
In the study itself, the traits with the highest across-breed heritabilities are trainability, aggression, and attachment - exactly those traits we might expect to have been key in the artificial selection/shaping of the human-dog relationship. It's a nice confirmation that these are strongly heritable, in ways that have translated into very powerful differences among breeds. I've spent perhaps too much time dissecting some of these points, but I do so because they put some of our lecture material into sharp relief. Textbook examples are often too carefully culled to represent cutting-edge investigation; it's both fun as well as useful to see where current researchers in these areas actually are working. Have a great rest of the week - see you on Tuesday for Chapter 11. Good morning all, At several points this term, we have discussed the genetics of behavior, including both the ability of single genes to influence behavior, as well as the heritability of individual behaviors and how traits can potentially be mapped onto phylogenetic histories. In the recent behavioral news is a report of a study that used large databases on dog behavior and genetics to look for behavioral traits that were associated with consistent genetic features. The researchers found >100 potential sites in the genome that were strongly associated with dog breed characteristics, including trainability, aggression, excitability, and others. One of the strengths of the method used here was that the researchers restricted themselves to a subset of the data pertaining to purebred dogs. This has the advantage of eliminating cross-breed variation which could dilute the strength of the genetic signals they were trying to detect. Dogs also are an advantageous species for a study like this, because they are popular, have long been bred in relatively pure lines, and have been artificially selected for a range of behavioral characteristics. Some of the associations reported are quite strong, with heritability estimates as high as 60-70%. Those are very high values, near the limit reported for animal behavior-genetic comparisons. It's also surprising, in that, while this study has several strengths in its design, it also has one specific weakness: the researchers did not have genetic and behavioral information from the same individual animals, but instead were relying on databases (and breed averages) assessed across different individuals. That suggests that some of the associations, if tested within individual subjects, could be even stronger. The human-dog relationship is a long one, and our artificial selection of dogs has been enormously powerful - when you think about all of the different dog breeds in the world, from Danes to dachshunds, Newfoundlands to chihuahuas, they all are the same species. That is testament to an enormous phenotypic plasticity (reaction norm) within their development. I'm going to request a copy of the original research article that this news report references, if anyone would like to see it - I'll bet it is interesting reading. Perhaps it will shed some light on my dog's (a rescue Rottweiler) behavior... Have a great rest of the weekend - see you on Tuesday. We've considered recently the concept of aposematism, the display of warning coloration to indicate to potential predators that one is unpalatable or otherwise unsuitable as a prey item.
As we have seen, there are many implications to this type of signaling, including the costs involved, the degree to which it is effective, and its potential to be mimicked (and thus rendered potentially less effective) by palatable species. The issue of aposematic costs is one that has been considered for some time, particularly the metabolic costs of producing warning coloration as well as the predation cost of being conspicuous. In addition to these are the metabolic costs of actually being unpalatable, and in no system has this been better explored than in monarch butterflies, conspicuous in both larval and adult forms, as well as highly unpalatable in each for the glycosidic compounds they acquire and sequester from milkweed plants (their near-exclusive forage). These compounds are highly toxic disruptors of Na+ channels, and being able to ingest and store them has required some evolutionary tinkering. In the recent science news is consideration of this phenomenon, with some genetic work that explains the evolution of caterpillar resistance to these glycosides. The plant defenses have evolved to deter caterpillar feeding, but the caterpillars were able to evolve resistance with as few as three genetic mutations. These researchers were able to induce these same mutations in fruit flies, rendering them resistant to the glycosides as well - a very powerful experimental demonstration. The researchers also demonstrate some of the costs associated with the evolution of resistance to glycosides, including reduced ability to withstand physical shock. No evolutionary benefit is free, and beneficial changes to genes often are paired with deleterious side-effects. Here, the benefit (unpalatability) appears to outweigh the costs (reduced ability to withstand physical rotation). Many of the plants and animals around us are conspicuous, while many others are cryptic. Those that are colorful and eye-catching may be silently playing potentially-deadly games of chemical warfare. Nature has been described as 'red in tooth and claw' (William Congreve); we might expand that to '... tooth, and claw, and toxin', for many toxins (including these glycosides) are quite deadly. What is remarkable to me is the role of simple sugars in glycosides, forming one side of the glycosidic bond. This is why some dangerous chemicals (such as automotive antifreeze, ethylene glycol) taste sweet and thus are dangerously attractive to the uninitiated. It makes me wonder whether glycosides have ever been used in nature as deadly bait, to lure, and then poison, potential prey. I'm willing to bet that it has... Have a great weekend- Good morning everyone, In the recent science news are articles related to several of the topics we have considered recently - this is a nice confirmation that our course topics are 'up-to-date'! Early in the term we considered the behavior of parasitic wasps, that stun prey and then oviposit eggs within them so that their larvae have a ready food supply during early growth. In the news this week is description of a different kind of parasitic wasp, one which parasitizes other wasps. Here, the form of parasitism is less direct, in that the parasite deposits its eggs into the same plant gall that its host occupies. The parasite larvae then can attack the host, and in doing so, they accomplish a form of behavioral and physiological 'hypermanipulation'. 
Not only do they use the host tissues for their own nourishment, but they actually trigger a malformed version of the host's normal escape behavior, which ensures that the host itself doesn't escape the gall but which provides the parasite an escape route. The degree to which parasites manipulate their hosts can be extraordinary. We are used to thinking that parasites can make use of host tissues, but examples like this reveal more complicated interactions, with some parasites hijacking host behavior as well. There are plenty of examples, such as these: All are good reminders that host behavior, as well as host tissues, can be exploited by parasites. Even more recently, I sent you some information about humans who have developed some ability to perform echolocation. Just this week came a report on this topic, suggesting real, functional remapping of the brain's visual cortex to support this new capability: At some level, neural plasticity is responsible for all that we can learn, but to have wholesale re-functioning of a part of the brain from one sense to another is very impressive. Have a good weekend - Good morning all, As our term comes to a close, I'll use my last news message to send along the latest news from two ongoing news stories in genetics: The first bit of news is about a newly reported fossil find, from a branch of ancestral hominins known as the Denisovans. While scientists and anthropologists have been studying our Neanderthal relatives for decades, Denisovans were only recently discovered. They are thought to have represented a 3rd lineage of ancestral hominin that co-existed with and likely inter-bred with both Neanderthals as well as early humans. Until very recently, all information on Denisovans came from fossils collected from a single location, the Denisova cave in modern Siberia (Russia). This new report describes a Denisova fossil from much farther south, in modern Tibet, which suggests that Denisovans were more broadly distributed, expanding the ranges of times and locations over which they may have interacted with modern humans. We know so little about Denisovans that this new information has been described as 'game changing'. If you recall the patterns of early human migration we considered, the first humans may well have had opportunity to interact with the last Denisovans. We all likely have some 'Neanderthal DNA' in us; we may come to realize that we all have a little 'Denisovan DNA', too. The second news story I will send here relates to the promise, and difficulty, of genome editing. We've discussed a number of times the concept of genes and alleles, and we've considered both gene therapy as well as some of the news related to human genome editing. Recently, a group of prominent scientists has argued that, given our current state of knowledge, the use of gene editing to produce 'designer babies' is more fiction than fact. Even apart from the difficulty of successfully editing the human genome, they suggest that the likelihood of finding individual genes with pronounced effects is very, very low. If you remember, genome-wide association studies (GWAS) can be used to identify genes associated with particular aspects of our physiology and health, but the strength of these associations normally is very low (e.g., often <1%). As such, we may not yet have good, individual targets for gene manipulation.
That said, it is very likely that both our gene-editing as well as our genome evaluation skills are going to improve over time, so perhaps the current limitations on the likelihood of 'designer genome editing' are just that: current, but not permanent. It seems impossible that this topic, or interest in it, is going away any time soon. I'm signing off for the term now. I hope that these weekly news messages have been useful to you. This is the first semester that I have used them to this extent, and it has been a learning experience for me. In particular, In the end, though, I remain very optimistic. Science is "mankind's organized quest for knowledge" (Floyd Bloom), and we already know that "knowledge is power" (Francis Bacon). It is science that offers us the best hope to deeper understanding, new therapies and treatments, new cures, and new adventures. We will encounter many speed-bumps along the way, to be sure. I hope that our course has inspired you to be a part of this quest, and to make the best use of the knowledge that you gain while on it. Have a great weekend, and best of luck with all of your exams next week.
0.5097
FineWeb
Submitted to: Soil & Tillage Research Publication Type: Peer Reviewed Journal Publication Acceptance Date: October 13, 1997 Publication Date: N/A Interpretive Summary: Stubble mulch tillage and no-tillage are suited for dryland crops in the Great Plains, but there is concern that using no-tillage for a long time will harm the soil and reduce crop yields. We measured soil bulk density, penetration resistance, and water content in 1994 in plots of a study started in 1982 on Pullman soil at Bushland, Texas, where stubble mulch and no-tillage were used for growing dryland winter wheat and grain sorghum. Soil density and penetration resistance always increased and water content often increased with depth. Soil density and penetration resistance were lower in the tillage layer in stubble mulch plots that were loosened by tillage than in no-tillage plots that were not loosened. No definite trends for soil density were found below 10 cm (4 inches). Soil penetration resistance often was greater with no-tillage than with stubble mulch tillage below 10 cm, even though water content was greater with no-tillage. Penetration resistance was related to bulk density and water content of the entire soil profile and for most depth increments with stubble mulch tillage. With no-tillage, density and water content of the profile were related to penetration resistance, but it was related only to water content for the different soil depths. We believe this is due to the stable soil pores that develop in no-tillage soils due to root channels and soil organisms. Even though penetration resistance was greater with no-tillage, crop yields have been as good or better with no-tillage than with stubble mulch tillage. Based on our measurements and crop yields for the study, we believe long-term use of no-tillage will not reduce crop yields or harm this and similar soils under dryland cropping conditions. Technical Abstract: Stubble mulch tillage (SMT) and no-tillage (NT) are suitable for dryland crops in the Great Plains, but there is concern if long-term use of NT will affect crop yields, soil quality, and production sustainability. We determined effects of using SMT and NT in several dryland winter wheat and grain sorghum cropping systems on soil bulk density (BD), penetration resistance (PR), and water content (WC) in 1994 in plots of a study started in 1982 on Pullman (Torrertic Paleustoll) soil at Bushland, TX. Data were analyzed to compare tillage method, cropping system, rotation phase, land condition (level or nonlevel), and crop effects on BD, PR, and WC. Soil BD and PR always increased with depth and WC often increased. The tillage X depth interaction effect also was significant. Soil BD and PR were lower in the tillage layer in SMT than in NT plots, but no definite trends occurred for BD below 10 cm. The PR often was greater with NT than with SMT below 10 cm, even though WC was greater with NT. Soil BD, PR, and WC differed also for some comparisons other than those involving tillage. Regression analyses showed PR was related to profile BD and WC and most depth increments with SMT, but to profile BD and WC and only to WC for depth increments with NT. This indicates that a strength factor largely independent of BD and affected by WC influences the PR of NT soil. We concluded that stable biopores reduced effects of BD differences among NT plots and that NT soils developed a rigid structure independent of BD. Although PR was greater with NT, it has not resulted in lower crop yields in plots used for the study.
Results of this study and associated crop yields, therefore, suggest long-term use of NT will not impair the quality and production sustainability of this and similar soils under dryland cropping conditions.
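The regression relationships summarized above (penetration resistance modeled as a function of bulk density and water content) can be illustrated with a short multiple-regression sketch. The data values, variable ranges and resulting coefficients below are invented placeholders for illustration only, not measurements or results from the Bushland study.

```python
# Illustrative multiple linear regression of the form used in the abstract:
# PR ~ b0 + b1*BD + b2*WC. The data below are synthetic placeholders,
# not measurements from the Bushland plots.
import numpy as np

bd = np.array([1.25, 1.30, 1.35, 1.40, 1.45, 1.50])   # bulk density, g/cm^3
wc = np.array([0.22, 0.20, 0.18, 0.17, 0.15, 0.14])   # gravimetric water content
pr = np.array([1.1, 1.4, 1.8, 2.1, 2.6, 3.0])         # penetration resistance, MPa

X = np.column_stack([np.ones_like(bd), bd, wc])        # design matrix with intercept
b0, b1, b2 = np.linalg.lstsq(X, pr, rcond=None)[0]     # ordinary least squares fit

pred = X @ np.array([b0, b1, b2])
r2 = 1 - np.sum((pr - pred) ** 2) / np.sum((pr - pr.mean()) ** 2)
print(f"PR = {b0:.2f} + {b1:.2f}*BD + {b2:.2f}*WC   (R^2 = {r2:.3f})")
```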
0.7742
FineWeb
This New Scientist essay by Debora MacKenzie criticizes agricultural practices by painting a scenario of a great die off of humanity during the 21st century due to agricultural collapse on a global scale. The scenario is presented as an address to the Edinburgh Science Festival on the first day of the 22nd century. By the 1990s it was apparent that population growth had slowed, and in 1994 demographers predicted that numbers would stabilise at 9 billion by 2050. Many people stopped worrying about a population crisis. As we all now know, the demographers were half right. The population did reach 9 billion. But it didn't stay there long. By the 2050s, food production was declining sharply, and in many places, high-yield agriculture collapsed completely. This led to the great famines. Meanwhile, population density triggered two other agents of decline: the great migrations and the plagues. World population plummeted. For many people these sorts of pseudo-scientific screeds are as close as they ever get to actual scientific knowledge or agronomic practice. Such essays just seem silly to those who have a smattering of knowledge but the majority of people can't refute them and tend to give them credence since they are published in decent quality popular periodicals which also publish sound popularizations of recent scientific papers. This is an example of political subversion of science of the kind posted about in Creative Darwinism. MacKenzie is not an environmentalist, she's a politician using environmental doom scenarios as a wedge issue to advance a political agenda. Fomenting an atmosphere of crisis and impending disaster to persuade people to empower authoritarian governments that promise to save them from calamity is a practice as old as civilization but still sometimes successful. As MacKenzie demonstrates it's really quite easy to do. All that is required is an assumption of worst case outcomes for all current issues and a careful selection of cited data points to omit any contrary information. In doing so the scenario can be defended as being plausible, though it's not, and immunize the creator of the scenario from criticism. She will not be held accountable for her public assertions though she has done harm to society by releasing a mental plague virus. We do not consistently punish this crime and have no good methods to do so that would not stifle social speech. It is interesting to note that this wanton act of destruction could well be more dangerous to humanity in the coming decades than any biological replicator. A few extra data points may help slow the spread of this virus. The most important one is that though all of the threats to agriculture noted by MacKenzie are real they are all well known and techniques to alleviate them are already in use. Agriculture is not static or monolithic. New methods emerge from universities as well as other public and private research centers faster than anyone can consume them. Some ideas are at the theoretical stage, some are in laboratory test, some are in field trial, some are in tentative commercial use and some are current state of practice on a regional basis. Each region has its own current evolutionary state and changes at different rates. This broad diversity of practice and continuous state of change reveals the naivety of scenarios like MacKenzie's. I won't do a full frontal fisking of the essay but a few specific data points to refute MacKenzie's overwrought assertions may be useful. 
Chemical fertilisers could replace the mineral nutrients taken by the plants, but couldn't restore the soil's fine microstructure. True, but they help quite a lot when used as part of an integrated system. Agricultural soils are exhausted only a few years after first being put into cultivation. For thousands of years farmers coped with this by serial use of land, first by slash and burn migration of field use and later by rotation and fallowing. It wasn't until the fifteenth century in Europe that western farmers began to consciously amend their soils by importing fertility. They added chemicals such as lime and gypsum to increase calcium and sulfur, rock dust to increase phosphorous, and grew nitrogen fixing legumes as a cover crop, inter crop or sub crop. They consciously managed their fields by rotating from crop back to pasture and so further improved their fields by the beneficial addition of dung and urine from grazers. These farmers grew quite skilled in the chemical analysis of soil using the most sensitive instruments available - their noses and tongues. An experienced farmer can tell you the PH of soil and it's calcium content to a high degree of accuracy by taste of the soil and the plants growing there. Scientists and naive observers often misunderstand this due to conflicting terminology and ways of knowing. When a farmer says that soil is sweet or sour he is commenting on the PH. Not surprisingly, some of the best farmers using these new methods were accused of witchcraft, especially if there was an existing grievance such as religious conflict. These good farmers were driven off and their lands seized. One especially competent group, the various sects of Anabaptists, fled Europe for the new world and can still be found in many parts of north and south America, often with prosperous and healthy farms. Recent advances in the development of soil analysis equipment have made sophisticated chemical testing widely and cheaply available, and continuing progress with remote sensing and communication capabilities allow continuous real time monitoring. When used as part of integrated systems specifically designed to optimize soil fertility and structure it becomes possible to maintain fields in good tilth while in continuous use. By 2000 we had pushed the plants to their limits. Thirsty farm animals increased the demand for water. But their major impact was on grain reserves: it takes 3 kilograms of grain to produce 1 kilogram of meat. Nonsense. A full grown cow drinks 30 gallons of water on a hot, dry day. The average urban American uses over ten times that much, often 20 times as much, just for domestic purposes. The inflated claims for livestock water use are based on the water used to grow the grain fed to them. But, they are only fed grain because it is abundant, cheap and high in carbohydrates that make them fat. Slow maturing animals, such as cattle, only get grain for a short period while they are finished (fattened) for market. Where grain is not abundant or cheap livestock eat natural diets. Ruminants such as cattle, sheep and goats - animals with cloven hooves that chew their cud - evolved to eat grasses and thrive on them. Birds, such as chickens, do eat seeds but specialize in bugs and worms which are sources of high quality protein, as well as getting a significant percentage of their diet from grasses. Pigs are omnivores, like people, and can be pastured just like cows. 
Supplementing these animals with grains is a marketing decision, a way to fatten them faster and increase productivity that makes sense when the cost of grain is low. In many parts of the world animals get little or no supplements. They are grazers all their lives. The fields they graze are among the most healthy and fertile agricultural lands on the planet. New Zealand, Australia and S. America are masters at pastoral agriculture and produce some of the finest meat, dairy and fiber products. In many parts of N. America and Europe the post WWII practice of massive grain supplementation is being curtailed or abandoned. Animals are being let out of confinement and returned to pastures for both economic, agronomic and nutritional reasons. These animals are healthier, more flavorful and produce more healthful meat and milk. As this practice increases less land is cropped. Returning tired crop land to pasture restores it and increases total biomass produced as well as biodiversity. What makes this transition viable is the development of a suite of techniques and technologies. The techniques involve close coordination of pasture growth and consumption to maximize nutritional content and volume. Pastures and grazers coevolved, adapted to one another, and both thrive in the presence of the other. It may seem contradictory that grass thrives when grazed, that it needs its predators, but grass evolved in the (anthropomorphizing) expectation of being grazed, trampled, shat upon and then abandoned for a time to recover while the grazing herd moved on to greener pastures. Herd migration is not possible where there are cropped fields, roads and other human land uses. Land is fenced and livestock have less room to roam. Migration can be simulated by subdividing fields into small paddocks and moving animals from one to the next in rotations that allow the herd effect - intensive use followed by periods of rest. But, fences are expensive and the required size of a paddock varies with the season as grasses grow at different rates depending on day length, temperature and moisture. Recent advances in portable electric fence technologies have solved this problem. Grazing managers can quickly and cheaply set up light weight temporary fences and vary paddock size as conditions require. This is a management intensive activity that requires skill, knowledge and attention but it requires comparatively few resources. Some managers simulate natural conditions to an even greater extent by managing multiple species on the same land. For example cattle, goats and chickens can be rotated through the same paddocks and each species finds their own type of preferred foods. They benefit one another because they don't share diseases or parasites. Gut worms that infect cattle spend part of their lives in the grass and depend on being consumed along with grass to get into a host. When a goat eats them they are foiled. The opposite is true for goat parasites. Chickens eat every bug and worm they can catch. They'll pick apart dung pats to get at any larvae excreted by cattle or deposited in the pats by flying insects such as flies. This further reduces the parasite load. Multi-species grazing increases the productivity of the land and the health of all species. At the end of the article MacKenzie finally gets to the real subject; world domination. Sometimes I wonder whether it would have been different if, when industry globalised at the start of the millennium, political power had globalised too. 
I know the idea of global government is a heresy. But so many of our crises were outside the realm of corporate concern, and beyond the power of national and regional governments. A global authority might have been able to monitor and perhaps stem the spread of human, animal and crop diseases. This is pure rubbish. People, corporations and governments are all aware of the issues and are actively working to address them. Their diversity and personal interest in the issues is the perfect match for the types of problems they face. They are quicker to identify problems and quicker to respond. Diversity in analysis and response is a discovery machine that allows parallel development of varied methods and selection of superior solutions which can be shared about. The absolute worst thing we could do is to impede this form of organization which so perfectly matches the natural systems in question which are themselves developing new attacks in a distributed fashion. The natural threats of globalization arise from increased communication and transportation. Pests have more contacts in more places with more various life forms. This presents them with more challenges and more opportunities from which they develop more varied methods to thrive. The same factors operate for farmers. They get more information at lower cost about varied techniques and technologies they can use to pursue their interests. The competition between life forms is eternal. MacKenzie's worst case exaggerations for political purposes are foolish and despicable but so would be an opposite scenario of some fantasy future where all problems are solved by some combination of technology and organizational wisdom. It won't happen either way, the future will be like the present and the past. People will continue to have problems and continue to develop solutions. There will be material progress as there has been in the past, the trend to do more with less will continue, the trend of substituting knowledge for material resources will continue but things will never be easy. We are our only worthy opponents. The only truly dangerous threat is people like MacKenzie. If we come to ruin it will be for social reasons not natural or technological reasons. We have the choice. We can learn to live well together as we must to be so numerous and powerful or we can shrink from the task and retreat into seductive utopian illusions that will end in grief. posted by back40 | 8/30/2003 10:08:00 PM Post a Comment Links to this post:
0.6231
FineWeb
London – Worldwatch Institute today released its report “State of the World 2011: Innovations that Nourish the Planet”, which spotlights successful agricultural innovations and unearths major successes in preventing food waste, building resilience to climate change, and strengthening farming in cities. The report provides a roadmap for increased agricultural investment and more-efficient ways to alleviate global hunger and poverty. Drawing from the world’s leading agricultural experts and from hundreds of innovations that are already working on the ground, the report outlines 15 proven, environmentally-sustainable prescriptions. “The progress showcased through this report will inform governments, policy-makers, NGOs, and donors that seek to curb hunger and poverty, providing a clear roadmap for expanding or replicating these successes elsewhere,” said Worldwatch Institute President Christopher Flavin. “We need the world’s influencers of agricultural development to commit to longstanding support for farmers, who make up 80 percent of the population in Africa.” ‘State of the World 2011’ comes at a time when many global hunger and food security initiatives – such as the Obama administration’s Feed the Future Program, the Global Agriculture and Food Security Program (GAFSP), the United Nations World Food Programme (WFP), and the Comprehensive Africa Agriculture Development Programme (CAADP) – can benefit from new insight into environmentally-sustainable projects that are already working to alleviate hunger and poverty. Nearly a half-century after the Green Revolution, a large share of the human family is still chronically hungry. While investment in agricultural development by governments, international lenders and foundations has escalated in recent years, it is still nowhere near what’s needed to help the 925 million people who are undernourished. Since the mid 1980s, when agricultural funding was at its height, the share of global development aid has fallen from over 16 percent to just 4 percent today. In 2008, $ 1.7 billion dollars in official development assistance was provided to support agricultural projects in Africa, based on statistics from the Organization for Economic Co-operation and Development (OECD), a miniscule amount given the vital return on investment. Given the current global economic conditions, investments are not likely to increase in the coming year. Much of the more recently pledged funding has yet to be raised, and existing funding is not being targeted efficiently to reach the poor farmers of Africa. “The international community has been neglecting entire segments of the food system in its efforts to reduce hunger and poverty,” said Danielle Nierenberg, co-director of Worldwatch’s Nourishing the Planet Project. “The solutions won’t necessarily come from producing more food, but from changing what children eat in schools, how foods are processed and marketed, and what sorts of food businesses we are investing in.” Serving locally-raised crops to school children, for example, has proven to be an effective hunger- and poverty-reducing strategy in many African nations, and has strong parallels to successful farm-to-cafeteria programs in the United States and Europe. Moreover, “roughly 40 percent of the food currently produced worldwide is wasted before it is consumed, creating large opportunities for farmers and households to save both money and resources by reducing this waste,” according to Brian Halweil, Nourishing the Planet Co-Director. 
‘State of the World 2011’ draws from hundreds of case studies and first-person examples to offer solutions to reducing hunger and poverty. These include: - In 2007, some 6,000 women in The Gambia organized into the TRY Women’s Oyster Harvesting producer association, creating a sustainable co-management plan for the local oyster fishery to prevent over-harvesting and exploitation. Oysters and fish are an important, low-cost source of protein for the population, but current production levels have led to environmental degradation and to changes in land use over the last 30 years. The government is working with groups like TRY to promote less-destructive methods and to expand credit facilities to low-income producers to stimulate investment in more-sustainable production. - In Kibera, Nairobi, the largest slum in Kenya, more than 1,000 women farmers are growing “Vertical” Gardens in sacks full of dirt poked with holes, feeding their families and communities. These sacks have the potential to feed thousands of city dwellers while also providing a sustainable and easy-to-maintain source of income for urban farmers. With more than 60 percent of Africa’s population projected to live in urban areas by 2050, such methods may be crucial to creating future food security. Currently, some 33 percent of Africans live in cities, and 14 million more migrate to urban areas each year. Worldwide, some 800 million people engage in urban agriculture, producing 15–20 percent of all food. - Pastoralists in South Africa and Kenya are preserving indigenous varieties of livestock that are adapted to the heat and drought of local conditions – traits that will be crucial as climate extremes on the continent worsen. Africa has the world’s largest area of permanent pasture and the largest number of pastoralists, with 15–25 million people dependent on livestock. - The Food, Agriculture and Natural Resources Policy Analysis Network (FANRPAN) is using interactive community plays to engage women farmers, community leaders, and policy-makers in an open dialogue about gender equity, food security, land tenure, and access to resources. Women in sub-Saharan Africa make up at least 75 percent of agricultural workers and provide 60–80 percent of the labor to produce food for household consumption and sale, so it is crucial that they have opportunities to express their needs in local governance and decision-making. This entertaining and amicable forum makes it easier for them to speak openly. - Uganda’s Developing Innovations in School Cultivation (DISC) Program is integrating indigenous vegetable gardens, nutrition information, and food preparation into school curricula to teach children how to grow local crop varieties that will help combat food shortages and revitalize the country’s culinary traditions. An estimated 33 percent of African children currently face hunger and malnutrition, which could affect some 42 million children by 2025. School nutrition programs that don’t simply feed children, but also inspire and teach them to become the farmers of the future, are a huge step toward improving food security.
0.5179
FineWeb
Calculating Pre-Heat according to BS EN 1011-2 Part 1 of the series deals with the production and control of arc welding of metallic materials and is appropriate for all types of fabrication. It gives general guidance for the arc welding of metallic materials in all forms of product. The BS EN 1011 series also contains the following titles: - Part 2: Arc welding of ferritic steels - Part 3: Arc welding of stainless steels - Part 4: Arc welding of aluminium and aluminium alloys - Part 5: Welding of clad steel - Part 6: Laser beam welding - Part 7: Electron beam welding - Part 8: Welding of cast irons. We are going to focus on 1011-2 and how this specification is used to calculate preheat temperatures for ferritic steels. Preheat for a welding process. Preheating should be considered when there is a significant risk of hydrogen cracking in the welded joint. Preheating is a process that is applied to increase the temperature of the workpiece. According to BS EN ISO 13916:2017 section 3.1, the preheat temperature should be measured 'in the weld zone immediately prior to any welding operation'. Why do we preheat? Preheating is carried out for the following reasons. - It slows down the cooling rate of the weld metal, HAZ (heat affected zone) and adjacent base metal, which yields a better microstructure, helps prevent martensite formation at the microstructural level and reduces the risk of cracking of the weld metal and HAZ. - Preheating helps diffusible hydrogen escape from the base metal and hence reduces the risk of hydrogen-induced cracking (HIC). - It helps in reducing the expansion and contraction rate of the metal. - It burns off unwanted material or impurities (if any) present on the joint surface. - Preheating also helps in achieving better mechanical properties such as notch toughness. How to apply preheat. Preheat can be applied using various methods; the three most common are: - Propane torch - Operators use a fuel gas and compressed air to apply flame directly to the metal component. - Induction - Induction creates a magnetic field that generates eddy currents inside the base metal, heating it from within. Induction accessories, such as cables or blankets, are positioned on the part to generate the magnetic field. - Resistance heating - Resistance heating uses electrically heated ceramic pads placed on the base metal. The heated pads transfer heat to the part through radiant heat, and through conductive heat where the pads are in contact with the part. Point of measurement. According to BS EN ISO 13916:2017 section 4.1, 'The temperature measurement shall normally be made on the surface of the workpiece facing the welder, at a distance of A = 4 × t, but not more than 50 mm, from the longitudinal edge of the groove. This shall apply for workpiece thicknesses t not exceeding 50 mm.' When the thickness exceeds 50 mm, the required temperature shall exist in the parent metal for a minimum distance of 75 mm, or as otherwise agreed, in any direction from the joint preparation. Where practicable, the temperature shall be measured on the face opposite to that being heated. Otherwise, the temperature shall be confirmed on the heated face at a time after removal of the heat source, related to the parent metal thickness, to allow for temperature equalization. Equipment used for temperature measurement should be specified in the welding procedure specification, for example: - Temperature-sensitive materials (e.g. crayons or paints) - Contact thermometer (CT) - Thermocouple (TE) - Optical or electrical devices for contactless measurement (TB). Calculating preheat. When calculating preheat there are four main essentials that should be taken into consideration. These are: - Combined material thickness in mm. - Heat input, measured in kJ/mm. - Diffusible hydrogen content per ml/100 g of deposited metal (welding consumables). - Material CEV. CEV - The carbon equivalent is a measure of the tendency of the weld to form martensite on cooling and to suffer brittle fracture. When the carbon equivalent is between 0.40 and 0.60, weld preheat may be necessary. When the carbon equivalent is above 0.60, preheat is necessary, and post heat may also be necessary. The CEV is usually found on the material mill certificate; otherwise it can be estimated from the product analysis using the IIW formula adopted by BS EN 1011-2: CEV = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15, with all elements in weight percent. Diffusible hydrogen content - Using Table C.2, the operator must select a hydrogen scale for the consumables being used. As above, this is determined by identifying the diffusible hydrogen content of the consumable, which should be stated by the consumable manufacturer in accordance with the relevant standard. Combined thickness - Next you must determine the combined thickness of the welded joint. Combined thickness is used to assess the heat sink of a joint for the purpose of determining the cooling rate; it is the sum of the parent metal thicknesses meeting at the joint, averaged over a distance of 75 mm from the weld line. For the same metal thickness, the preheating temperature is higher for a fillet weld than for a butt weld because the combined thickness, and therefore the heat sink, is greater. If the thickness of the welded joint increases greatly beyond 75 mm from the weld line, it may be necessary to use a higher combined thickness value. Heat input (BS EN 1011-1) - For many steels, abrupt cooling from the heat of welding should be avoided because of the risk of hardening or cracking. For this reason, depending on the type of material, thickness of material and heat input, preheating and the maintenance of an upper or lower interpass temperature may be required, as listed in the relevant parts of EN 1011. The heat input shall be chosen so as to be matched to the welding process. The heat input during welding can be viewed as a main influencing factor on the properties of ferritic and ferritic-austenitic stainless steel welds in particular, since it governs the time/temperature cycle occurring during welding. Where appropriate, the heat input value may be calculated as Q = k × (U × I) / (1000 × v), where Q is the heat input, in kJ/mm; k is the thermal efficiency factor for the welding process; U is the arc voltage, measured as near as possible to the arc, in V; I is the welding current, in A; and v is the travel speed, in mm/s. Once the data has been collected for the four main essentials (combined material thickness in mm, heat input in kJ/mm, diffusible hydrogen content, CEV) you should refer to Figures C.2 (a–m) - Conditions for welding steels with defined carbon equivalents.
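As a rough illustration of how these quantities fit together, here is a short Python sketch that computes the carbon equivalent, the arc-energy heat input and the combined thickness from example numbers. The composition, welding parameters and efficiency factor shown are hypothetical, and the sketch deliberately stops short of the preheat temperature itself, which must still be read from Figures C.2 of BS EN 1011-2 (not reproduced here).

```python
# Illustrative helpers for the quantities discussed above (CEV, heat input,
# combined thickness). Assumptions: the IIW carbon-equivalent formula and the
# EN 1011-1 arc-energy formula Q = k*U*I/(1000*v). The preheat temperature
# itself must still be read from Figures C.2 (a-m) of BS EN 1011-2, which this
# sketch does not reproduce.

def carbon_equivalent(c, mn, cr, mo, v, ni, cu):
    """IIW carbon equivalent, all inputs in weight percent (from the mill cert)."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

def heat_input_kj_per_mm(voltage_v, current_a, travel_speed_mm_s, k=0.8):
    """Arc energy Q in kJ/mm; k is the process thermal-efficiency factor
    (e.g. roughly 0.8 for MMA/MAG, 1.0 for submerged arc, 0.6 for TIG)."""
    return k * voltage_v * current_a / (1000.0 * travel_speed_mm_s)

def combined_thickness(*plate_thicknesses_mm):
    """Sum of parent-metal thicknesses meeting at the joint (heat-sink measure)."""
    return sum(plate_thicknesses_mm)

if __name__ == "__main__":
    # Hypothetical structural-steel composition taken from a mill certificate.
    cev = carbon_equivalent(c=0.16, mn=1.40, cr=0.05, mo=0.01, v=0.01, ni=0.05, cu=0.05)
    q = heat_input_kj_per_mm(voltage_v=24, current_a=180, travel_speed_mm_s=3.5)
    ct = combined_thickness(20, 20, 12)   # e.g. a tee fillet: two flanges plus a web

    print(f"CEV               : {cev:.2f}")
    print(f"Heat input        : {q:.2f} kJ/mm")
    print(f"Combined thickness: {ct} mm")
    # With the CEV, heat input, combined thickness and the hydrogen scale
    # (Table C.2), look up the minimum preheat temperature in Figure C.2.
```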
0.8449
FineWeb
The children enter the classroom, take out their special Early-Bird books (alternate lined and blank pages) and follow the task described on the board. This is usually self-explanatory, and sometimes there is a choice. For some tasks, targets are set or children are told to set their own target. Early-Bird work is done in silence while the register is taken. Books are left out, and after assembly the children continue for a minute or two while the class settles down. The work is normally marked by the child, using the system of a tick with an equals sign (what is expected of you), a tick with a plus sign (very good work), or a tick with two plus signs (excellent). Sometimes, children swap and mark each other's work, and occasionally the teacher will ask for the books in. These are some of the tasks which have been used successfully by many teachers: * Design: a chair you never have to leave, a watch that has everything, a pair of shoes that takes you to the moon and back, a perfect park, a letter-box for the new millennium, a flag for Jupiter, a healthy eating bar, a Christmas tree with a difference. (Hint: One design activity a week goes down really well.) * How many words can you make from these phrases: hymn practice, centre of gravity, London Zoo?(Introduce topical variations.) * Use your dictionary to find: words ending with "y", containing "ou", starting with "st'. * Devise sums where the answer is 333: using the digits 5, 8 and 2; using addition, subtraction, multiplication and division. * Write the questions for the following answers: London, glass, badger. * Write or draw a list of things you could do in: one minute, one hour, one day. For example, run around the school field, butter a slice of bread,and finish four maths questions. * Write or draw as many opposites as you can with or without dictionaries. * Write the following numbers in words or the following words in numbers. The bigger numbers are a good test. * What are your feelings about a controversial issue. * Write a poem or haiku on Diwali, the rain, or bullying. * Copy a particular passage in your neatest handwriting and illustrate. * Compile a list of: similes, homonyms. * Things I'm looking forward to in: 1998, 1999, 2000. * How many minutes until the next millennium? * Write sentences including: too, they're, six words. The list is endless. A useful task is to ask the children to write down as many Early-Bird activities as they can, then use them! Sheila Tuli is a class teacher at Dollis Junior School, Mill Hill, London
0.9912
FineWeb
N is for Nutrition. Can good nutrition really help you do better at school? Read Kate Percy's recent contribution to the "A-Z for School Improvement through Sport and PE", edited by Shaun Dowling, Head of Sport, United Learning, and David Woods CBE: N is for Nutrition. Can the application of sports nutrition principles enhance academic performance? It is now widely accepted that achieving potential in sport is not just about training hard. Healthy eating and making the right food choices are inextricably linked to better performance in sport, enabling practitioners to train, compete and recover stronger and faster. Active young people need good nutrition to support basic growth and development as well as additional nutrient-rich calories to fuel their sport. A balanced and varied diet, with particular emphasis on slow-release carbohydrates, lean protein, unsaturated fats and adequate hydration, not only provides sustained energy, but also concentration and focus. The principles of sports nutrition can be adapted to suit the diet of less active pupils; nourishing the body with a healthy, unprocessed, nutrient-rich diet will fuel the brain for effective learning, optimising the pupil's ability to engage in the classroom and therefore achieve his or her potential. The human brain is a highly metabolically active tissue that depends on a constant supply of blood glucose to meet its energy needs. Composed principally of fat and water, the brain's billions of cells, or neurons, require fats, protein and complex carbohydrates as well as micronutrients, for instance B vitamins such as thiamin, riboflavin, niacin and pantothenic acid, and minerals such as magnesium, iron and manganese. Energy, generated from food, regulates the growth and change of the cells. There have been many studies over the past ten years on the influence of diet on mental function in young people. For instance, recent research by the Human Appetite Research Unit at the University of Leeds shows that habitual breakfast eating and school breakfast programmes have a positive effect on children's academic performance, with the clearest effects being measured on the mathematical and arithmetic grades of undernourished children. According to the report, children who skip breakfast have more difficulty focussing on classroom tasks and concentrating in class, which is apparent in both well-nourished and undernourished children and children from deprived backgrounds.(1) Most sports practitioners will agree that carbohydrate, converted into blood glucose, or glycogen, and used for energy or stored in the liver and muscle, is the principal source of energy for their sport. Glycogen stored in the muscle is used to fuel the muscles; glycogen stored in the liver is used to maintain steady blood glucose levels for the body and brain. Using the concept of the glycaemic index (G.I.), a measure of the speed at which glucose is digested into the bloodstream, we can manipulate the carbohydrate we eat to sustain our energy levels, concentration and focus. Students who consume high G.I. foods for breakfast, for instance sugary cereals, sugary fizzy drinks or doughnuts, which are rapidly digested into the bloodstream, will have a burst of energy followed by a mid-morning slump. This can lead to fidgeting, lack of concentration, headaches or drowsiness.
The student will then often consume more sugary foods to boost their energy, thus continuing the cycle. Meals and snacks containing complex, lower G.I. carbohydrates, such as whole grain cereals, oats, wholemeal bread, fruits and starchy vegetables are richer in nutrients, containing more fibre, which slows down the absorption of the carbohydrate, as well as micronutrients vital to healthy brain function, such as B vitamins and vitamin E. Commonly eaten by sports practitioners, these carbohydrates release glucose into the bloodstream more gradually and will therefore provide more sustained levels of energy, concentration and focus. The speed at which carbohydrate is absorbed into the bloodstream can be further reduced if eaten with protein. In sport, a 4:1 carbohydrate to protein ratio is often cited as the ideal combination for sustaining energy and fuelling the muscles for growth, repair and recovery. For instance, a peanut butter sandwich on wholemeal bread, or a banana and a natural yoghurt. This can be applied to the classroom too. As well as sustaining concentration levels, protein provides amino acids, building blocks that are used to support structures in neurons. Good sources of protein are found in lean meat such as chicken or turkey, fish, eggs, pulses, beans and grains, as well as raw nuts and seeds. Fat is an important nutrient for children, particularly those who practice sport on a regular basis. Children need fat to provide essential fatty acids for healthy growth and development, to fuel the muscles, to transport vitamins and proteins around the body, to carry fat-soluble vitamins such as vitamins A, D, E and K, to promote healthy skin and nerve function and to manufacture important hormones. However, the majority of fat they eat should come from unsaturated fats, found in avocados, oily fish, nuts and seeds and cold-pressed vegetable oils, rather than saturated fats, the fats found in processed meats, cream, and hydrogenated vegetable oils (transfats, found in processed cakes and pastries).
0.5529
FineWeb
Statistics shows that Americans drink more soda than ever before. They account for more than 25 percent of all drinks consumed in the United States. More than 15 billion gallons were sold in 2000 — about one 12-ounce can per day for every man, woman and child. But here’s some information that may keep you away from opening the can: 1. Extra pounds Soda is a significant contributor to obesity. Drinking a single can a day of sugary drinks translates to more than a pound of weight gain every month. And diet soda is just as likely to cause weight gain as regular, or even more — it may sound counterintuitive, but people who drink diet soft drinks actually don’t lose weight. Artificial sweeteners induce a whole set of physiologic and hormonal responses that actually make you gain weight. 2. Liver damage Soda damages your liver. Consumption of too many soft drinks puts you under increased risk for liver cirrhosis similar to the increased risk faced by chronic alcoholics. 3. Tooth decay Soda dissolves tooth enamel. Soft drinks are responsible for doubling or tripling the incidence of tooth decay. Soda’s acidity is even worse for teeth than the solid sugar found in candy. 4. Kidney stones and chronic kidney disease Colas of all kinds are well known for their high phosphoric acid content, a substance that changes the urine in a way that promotes kidney stone formation. Drinking one quart (less than three 12-ounce cans) of soda per week may increase your risk of developing kidney stones by 15 percent. Anything that promotes weight gain increases the risk of diabetes. Drinking soda also stresses your body’s ability to process sugar. Some scientists now suspect that this may explain why the number of Americans with type 2 diabetes has tripled from 6.6 million in 1980 to 20.8 million today. 6. Heartburn & acid reflux Heavy consumption of soda is a strong predictor of heartburn. Many carbonated beverages are very acidic. They also deliver a lot of air in the form of carbon dioxide, which can cause distension of your stomach. And that distension appears to be associated with more reflux. 7. Soft drinks = Soft Bones = Osteoporosis Soft drinks containing phosphoric acid are definitely linked to osteoporosis (a weakening of your skeletal structure) because they lead to lower calcium levels and higher phosphate levels in your blood. When phosphate levels are high and calcium levels are low, calcium is pulled out of your bones. 8. Hypertension (high blood pressure) Experts have reasons to believe that overconsumption of soda leads to an increase in blood pressure. It doesn’t matter if the soda is regular or diet. 9. Heart disease Heavy soda drinkers are more likely to develop risk factors for heart disease. Research shows that drinking more than one soft drink a day is associated with an increased risk of developing metabolic syndrome — a group of symptoms such as central obesity, elevated blood pressure, elevated fasting blood sugar, elevated fasting triglycerides, and low levels of HDL or “good” cholesterol. Having three or more of the symptoms increases your risk of developing diabetes and cardiovascular disease. 10. Impaired digestion (gastrointestinal distress) Gastrointestinal distress includes increased stomach acid levels requiring acid inhibitors, and moderate to severe gastric inflammation with possible stomach lining erosion. Drinking sodas, especially on an empty stomach, can upset the fragile acid-alkaline balance of your stomach and other gastric lining, creating a continuous acid environment. 
This prolonged acid environment can lead to inflammation of your stomach and duodenal lining. I’ve been warning my readers of the dangers of soda since I started this site, and this is a good list of reasons why you’ll want to avoid this beverage like the plague. Soda is on my list of the five absolute worst foods and drinks you can consume. Because even though fat has 250 percent more calories than sugar, the food that people get MOST of their calories from is sugar from corn, or high fructose corn syrup (HFCS). According to USDA estimates, the per capita consumption of HFCS was about 40 pounds per year as of 2007, primarily in the form of soft drinks. Tragically, high fructose corn syrup in the form of soda, is now the number one source of calories in the United States. Food and beverage manufacturers began switching their sweeteners from sucrose (table sugar) to corn syrup in the 1970s when they discovered that HFCS was cheaper to make. HFCS is only about 20 percent sweeter than table sugar, but it’s a switch that has drastically altered the American diet. The good news about all these shocking health facts is that stopping the pernicious habit of drinking soda is one of the easiest things you can do. As you can clearly see from all the examples above, you can radically improve your health simply by cutting soda out of your diet. In fact, I am confident that your health improvement would be even more profound than if you quit smoking – a statement that has raised quite a few eyebrows through the years. Why do I believe this? Because soda clearly elevates insulin levels, and elevated insulin levels are the foundation of nearly every chronic disease, including: - Heart disease - Premature aging In addition to the ten health problems above, most of which I have reported on through the years already, there is one more that is not discussed as often: drinking soda also increases your cancer risk. Soda Drinkers Have a Higher Risk of Several Types of Cancer Numerous studies have pointed out the link between sugar and increased rates of cancer, suggesting that regulating sugar intake is key to slowing tumor growth. In one human study, 10 healthy people were assessed for fasting blood-glucose levels and the phagocytic index of neutrophils, which measures immune-cell ability to envelop and destroy invaders such as cancer. Eating 100 grams of carbohydrates from glucose, sucrose, honey and orange juice all significantly decreased the capacity of neutrophils to engulf bacteria. Another four-year study at the National Institute of Public Health and Environmental Protection in the Netherlands compared 111 biliary tract cancer patients with 480 controls. In this study the cancer risk associated with the intake of sugars more than doubled for the cancer patients. Many studies have linked sugar intake with different types of cancer, such as: Breast cancer — An epidemiological study in 21 modern countries that keep track of morbidity and mortality (Europe, North America, Japan and others) revealed that sugar intake is a strong risk factor that contributes to higher breast cancer rates, particularly in older women. Throat cancer – Throat cancer is a particularly hard cancer to beat–more than 90 percent of patients with invasive esophageal cancer die within five years of diagnosis. Research has revealed that those who drink soda might have a higher risk of developing esophageal cancer. 
They found a strong link between the accelerated rate of people drinking carbonated soft drinks and the growing number of cases of esophageal cancer over the course of two decades. Colon cancer — According to another study, women who consume a high dietary glycemic load may increase their risk of colorectal (colon) cancer. Glycemic load is a measure of how quickly a food’s carbohydrates are turned into sugars by your body (glycemic index) in relation to the amount of carbohydrates per serving of that food. The study consisted of more than 38,450 women who were followed for almost eight years. The participants filled out questionnaires about their eating habits, so researchers could examine the associations of dietary glycemic load, overall dietary glycemic index, carbohydrate, fiber, non-fiber carbohydrate, sucrose, and fructose with the subsequent development of colon cancer. They found that women who ate the most high-glycemic-load foods were nearly three times more likely to develop colon cancer. Do You Want to Radically Improve Your Health? Then replace soda and other sugary drinks with clean, pure water. Normalizing your insulin levels is one of the most powerful physical actions you can take to improve your health and lower your risk of cancer along with all the other diseases and long-term chronic health conditions mentioned above. Fortunately, it is also the variable most easily influenced by healthy eating and exercise.
0.5119
FineWeb
Image: Courtesy of Dr. Emma Jarvinen and Dr. Martin Schnaiter (KIT and schnaiTEC, Germany). Physicists have struggled since the 1960s to understand how global warming will affect the many different kinds of cloud, and how that will influence global warming in turn. For decades, clouds have been seen as far and away the biggest source of uncertainty over how severe global warming will be–other than what civilization will do to reduce carbon emissions. Kate Marvel contemplates the cloud question at the NASA Goddard Institute for Space Studies in New York City. Last spring, in her office several storeys above Tom's Restaurant on the Upper West Side, Marvel, wearing a cloud-patterned scarf, pointed to a chart depicting the range of predictions made by different global climate models. The 30 or so models, run by climate research centers around the world, program in all the known factors to predict how much Earth's temperature will increase as the CO2 level ticks up. Each climate model solves a set of equations on a spherical grid representing Earth's atmosphere. A supercomputer is used to evolve the grid of answers forward in time, showing how air and heat flow through each of the grid cells and circulate around the planet. By adding carbon dioxide and other heat-trapping greenhouse gases to the simulated atmosphere and seeing what happens, scientists can predict Earth's climate response. All the climate models include Earth's ocean and wind currents and incorporate most of the important climate feedback loops, like the melting of the polar ice caps and the increase in humidity, which both exacerbate global warming. The simulations agree about most factors but differ greatly in how they try to represent clouds. The least sensitive climate simulations, which predict the mildest response to increasing CO2, estimate that Earth will warm 2 degrees Celsius if the atmospheric CO2 concentration doubles relative to preindustrial times, which is currently on track to happen by about 2050. (The CO2 concentration was 280 parts per million before fossil fuel burning began, and it's above 410 ppm now. So far, the average global temperature has risen 1 degree Celsius.) But the 2-degree prediction is the best-case scenario. "The thing that actually freaks people out is this upper end here," Marvel said, indicating projections of 4 or 5 degrees of warming in response to the doubling of CO2. "To put that in context, the difference between now and the last ice age was 4.5 degrees." The big range in the models' predictions chiefly comes down to whether they see clouds blocking more or less sunlight in the future. As Marvel put it, "You can fairly confidently say that the model spread in climate sensitivity is basically just a model spread in what clouds are going to do." Graphic: Lucy Reading-Ikkanda/Quanta Magazine. The problem is that, in computer simulations of the global climate, today's supercomputers cannot resolve grid cells that are smaller than about 100 kilometers by 100 kilometers in area. But clouds are often no more than a few kilometers across. Physicists therefore have to simplify or "parameterize" clouds in their global simulations, assigning an overall level of cloudiness to each grid cell based on other properties, like temperature and humidity. But clouds involve the interplay of so many mechanisms that it's not obvious how best to parameterize them.
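As a concrete illustration of what "parameterizing" cloudiness in a coarse grid cell can look like, here is a generic Sundqvist-style relative-humidity diagnostic in Python. It is a textbook-style example chosen for simplicity, not the scheme used by any of the models or research groups discussed here, and the critical-humidity value is an assumed illustrative number.

```python
# A Sundqvist-style relative-humidity cloud-fraction diagnostic, shown only to
# illustrate what "parameterizing" clouds in a coarse grid cell means. This is a
# generic textbook scheme, not the parameterization of any specific model above.
import numpy as np

def cloud_fraction(rel_humidity, rh_crit=0.8):
    """Diagnose cloud fraction (0..1) in a grid cell from its mean relative humidity."""
    rh = np.clip(rel_humidity, 0.0, 1.0)
    frac = 1.0 - np.sqrt(np.maximum(0.0, (1.0 - rh) / (1.0 - rh_crit)))
    return np.clip(frac, 0.0, 1.0)

for rh in (0.70, 0.85, 0.95, 1.00):
    print(f"RH = {rh:.2f} -> cloud fraction = {cloud_fraction(rh):.2f}")
```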
The warming of the Earth and sky strengthens some mechanisms involved in cloud formation, while also fueling other forces that break clouds up. Global climate models that predict 2 degrees of warming in response to doubling CO2 generally also predict little or no change in cloudiness. Models that project a rise of four or more degrees forecast fewer clouds in the coming decades. Michael Mann, head of the Earth System Science Center at Pennsylvania State University, said that even 2 degrees of warming will cause "considerable loss of life and suffering." He said it will kill coral reefs whose fish feed millions, while at the same time heightening the risk of devastating floods, wildfires, droughts, heat waves, and hurricanes and causing "several feet of sea-level rise and threats to the world's low-lying island nations and coastal cities." At the 4-degree end of the range, we would see not only "the destruction of the world's coral reefs, massive loss of animal species, and cataclysmic extreme weather events," Mann said, but also "meters of sea-level rise that would challenge our capacity for adaptation. It would mean the end of human civilization in its current form." It is difficult to imagine what might happen if, a century or more from now, stratocumulus clouds were to abruptly disappear entirely, triggering something like an 8-degree jump on top of the warming that will already have occurred. "I hope we'll never get there," Tapio Schneider said in his Pasadena office last year. The Simulated Sky. In the last decade, advances in supercomputing power and new observations of actual clouds have attracted dozens of researchers like Schneider to the problem of global warming's X-factor. Researchers are now able to model cloud dynamics at high resolution, producing patches of simulated clouds that closely match real ones. This has allowed them to see what happens when they crank up the CO2. First, physicists came to grips with high clouds–the icy, wispy ones like cirrus clouds that are miles high. By 2010, work by Mark Zelinka of Lawrence Livermore National Laboratory and others convincingly showed that as Earth warms, high clouds will move higher in the sky and also shift toward higher latitudes, where they won't block as much direct sunlight as they do nearer the equator. This is expected to slightly exacerbate warming, and all global climate models have incorporated this effect. But vastly more important and more challenging than high clouds are the low, thick, turbulent ones – especially the stratocumulus variety. Bright-white sheets of stratocumulus cover about one-quarter of the ocean, reflecting 30 to 70 percent of the sunlight that would otherwise be absorbed by the dark waves below. Simulating stratocumulus clouds requires immense computing power because they contain turbulent eddies of all sizes. Image: A research aircraft flying through stratocumulus clouds off the coast of Chile during a 2008 mission to gather data about the interactions between clouds, aerosols, atmospheric boundary layers, wind currents and other aspects of the Southeast Pacific climate. Chris Bretherton, an atmospheric scientist and mathematician at the University of Washington, performed some of the first simulations of these clouds coupled to idealized climate models in 2013 and 2014.
He and his collaborators modeled a small patch of stratocumulus and showed that as the sea surface below it warmed under the influence of CO2, the cloud became thinner. This work and other findings–such as NASA satellite data indicating that warmer years are less cloudy than colder years–began to suggest that the least sensitive global climate models, the ones predicting little change in cloud cover and only 2 degrees of warming, likely aren't right. Bretherton, whom Schneider calls "the smartest person we have in this area," doesn't just develop some of the best simulations of stratocumulus clouds; he and his team also fly through the actual clouds, hanging instruments from aircraft wings to measure atmospheric conditions and bouncing lasers off of cloud droplets. In the Socrates mission last winter, Bretherton hopped on a research aircraft and flew through stratocumulus clouds over the Southern Ocean between Tasmania and Antarctica. Global climate models tend to greatly underestimate the cloudiness of such regions, and this makes the models relatively insensitive to possible changes in cloudiness. Bretherton and his team set out to investigate why Southern Ocean clouds are so abundant. Their data indicate that the clouds consist mainly of supercooled water droplets rather than ice crystals, as climate modelers had long assumed. Liquid-water droplets stick around longer than ice droplets (which are bigger and more likely to fall as rain), and this seems to be why the region is cloudier than global climate models predict. Adjusting the models to reflect the findings will make them more sensitive to cloud loss in this region as the planet heats up. This is one of several lines of evidence, Bretherton said, "that would favor the range of predictions that's 3 to 5 degrees, not the 2- to 3-degree range." Schneider's new simulation with Kaul and Pressel improved on Bretherton's earlier work chiefly by connecting what is happening in a small patch of stratocumulus cloud to a simple model of the rest of Earth's climate. This allowed them to investigate for the first time how these clouds not only respond to, but also affect, the global temperature, in a potential feedback loop. Image: Tapio Schneider, Colleen Kaul and Kyle Pressel, of the California Institute of Technology, identified a tipping point where stratocumulus clouds break up. (California Institute of Technology (Schneider); Courtesy of Colleen Kaul; Courtesy of Kyle Pressel.) Their simulation, which ran for 2 million core-hours on supercomputers in Switzerland and California, modeled an approximately 5-by-5-kilometer patch of stratocumulus cloud much like the clouds off the California coast. As the CO2 level ratchets up in the simulated sky and the sea surface heats up, the dynamics of the cloud evolve. The researchers found that the tipping point arises, and stratocumulus clouds suddenly disappear, because of two dominant factors that work against their formation. First, when higher CO2 levels make Earth's surface and sky hotter, the extra heat drives stronger turbulence within the cloud. The turbulence mixes moist air near the top of the cloud, pushing it up and out through an important boundary layer that caps stratocumulus clouds, while drawing dry air in from above. Entrainment, as this is called, works to break up the cloud.
Second, as the greenhouse effect makes the upper atmosphere warmer and thus more humid, the cooling of the tops of stratocumulus clouds from above becomes less efficient. This cooling is essential, because it causes blobs of cold, moist air at the top of the cloud to sink, making room for warm, moist air near Earth's surface to rise into the cloud and form it. When cooling gets less effective, stratocumulus clouds grow thin. Countervailing forces and effects eventually get overpowered; when the CO2 level reaches about 1,200 parts per million in the simulation–which could happen in 100 to 150 years, if emissions aren't curbed–more entrainment and less cooling conspire to break up the stratocumulus cloud altogether. To see how the loss of clouds would affect the global temperature, Schneider and colleagues inverted the approach of global climate models, simulating their cloud patch at high resolution and parameterizing the rest of the world outside that box. They found that, when the stratocumulus clouds disappeared in the simulation, the enormous amount of extra heat absorbed by the ocean increased its temperature and rate of evaporation. Water vapor has a greenhouse effect much like CO2, so more water vapor in the sky means that more heat will be trapped at the planet's surface. Extrapolated to the entire globe, the loss of low clouds and the rise in water vapor leads to runaway warming–the dreaded 8-degree jump. After the climate has made this transition and water vapor saturates the air, ratcheting down the CO2 won't bring the clouds back. "There's hysteresis," Schneider said, where the state of the system depends on its history. "You need to reduce CO2 to concentrations around present day, even slightly below, before you form stratocumulus clouds again." Paleoclimatologists say this hysteresis might explain other puzzles about the paleoclimate record. During the Pliocene, 3 million years ago, the atmospheric CO2 level was 400 ppm, similar to today, but Earth was 4 degrees hotter. This might be because we were cooling down from a much warmer, perhaps largely cloudless period, and stratocumulus clouds hadn't yet come back. Past, Present, and Future. Schneider underscored an important caveat to the study, which will need to be addressed by future work: The simplified climate model he and his colleagues created assumed that global wind currents would stay as they are now. However, there is some evidence that these circulations might weaken in a way that would make stratocumulus clouds more robust, raising the threshold for their disappearance from 1,200 ppm to some higher level. Other changes could do the opposite, or the tipping point could vary by region. To better "capture the heterogeneity" of the global system, Schneider said, researchers will need to use many simulations of cloud patches to calibrate a global climate model. "What I would love to do, and what I hope we'll get a chance to do, is embed many, many of these [high-resolution] simulations in a global climate model, maybe tens of thousands, and then run a global climate simulation that interacts with" all of them, he said. Such a setup would allow a more precise prediction of the stratocumulus tipping point or points. Image: A simulation of stratocumulus clouds in a 3-by-3-kilometer patch of sky, as seen from below. There's a long way to go before we reach 1,200 parts per million, or thereabouts.
Ultimate disaster can be averted if net carbon emissions can be reduced to zero–which doesn't mean humans can't release any carbon into the sky. We currently pump out 10 billion tons of it each year, and scientists estimate that Earth can absorb about 2 billion tons of it a year, in addition to what's naturally emitted and absorbed. If fossil fuel emissions can be reduced to 2 billion tons annually through the expansion of solar, wind, nuclear and geothermal energy, changes in the agricultural sector, and the use of carbon-capture technology, anthropogenic global warming will slow to a halt. What does Schneider envision the future will bring? Sitting in his office with his laptop screen open to a mesmerizing simulation of roiling clouds, he said, "I am pretty–fairly–optimistic, simply because I think solar power has gotten so much cheaper. It's not that far away from the cost curve for producing electricity from solar power crossing the fossil fuel cost curve. And once it crosses, there will be an exponential transformation of entire industries." Kerry Emanuel, the MIT climate scientist, has also pointed out that possible economic collapse caused by nearer-term consequences of climate change might also curtail carbon emissions before the stratocumulus tipping point is reached. But other unforeseen changes and climate tipping points could accelerate us toward the cliff. "I'm worried," said Kennett, the pioneering paleoceanographer who discovered the PETM and unearthed evidence of many other tumultuous periods in Earth's history. "Are you kidding? As far as I'm concerned, global warming is the major issue of our time." During the PETM, mammals, newly ascendant after the dinosaurs' downfall, actually flourished. Their northward march led them to land bridges that allowed them to fan out across the globe, filling ecological niches and spreading south again as the planet reabsorbed the excess CO2 in the sky and cooled over 200,000 years. However, their story is hardly one we can hope to imitate. One difference, scientists say, is that Earth was much warmer then to start with, so there were no ice caps to melt and accelerate the warming and sea-level rise. "The other big difference," said the climatologist Gavin Schmidt, director of the Goddard Institute, "is, we're here, and we're adapted to the climate we have. We built our cities all the way around the coasts; we've built our agricultural systems expecting the rain to be where it is and the dry areas to be where they are." And national borders are where they are. "We're not prepared for those things to change," he said. Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences. https://www.wired.com/story/as-the-world-warms-clouds-could-disappear-catastrophically/
0.9181
FineWeb
What was Snowball Earth? Scientists now believe that between two and four times between 850 and 580 million years ago, the planet was covered completely in ice and early life forms were nearly wiped out: a "Snowball Earth." There are a couple of theories as to why this occurred: one being that the Sun was about six percent cooler than it is today; the other is that the continents had all traveled—due to plate tectonics—mostly south; ocean currents traveled easily around the planet without interruption, and volcanic activity dwindled to a minimum. The result was that significantly less carbon dioxide was being generated and expelled into the atmosphere. In addition, the Earth's axis was significantly more tilted at the time (by 54 degrees versus today's 23.5 degrees), causing more drastic seasonal extremes. Once a period of cooling began and ice shelves formed, more of the Sun's light was reflected, thus causing the cooling cycle to accelerate more and more until everything was frozen. Early theories about this, proposed by such scientists as George Williams at the University of Adelaide and Joe Kirschvink at the California Institute of Technology, include the effects of plate tectonics as causing massive volcanic eruptions worldwide. This would lead to an extended winter of unprecedented scale that turned Earth into one big frozen lump in space that would last for millions of years. Nearly all life was wiped out. In fact, one matter of debate concerning this theory was that critics felt such a Snowball Earth would have extinguished all life. This notion was laid to rest in the 1990s when life was found to thrive near geothermal vents deep under the ocean. The Snowball Earth cycle was only broken because volcanic action and plate tectonics would cause carbon dioxide levels to build under the ice, especially since there would be no liquid water to dissolve minerals or aid in the dissipation of CO2. The result would be an inevitable, sudden massive release of CO2 that would lead to a comparatively brief period of extreme heat with temperatures averaging 120°F (50°C). However, the thaw gave way to another snowball period a couple more times over millions of years before the continents moved into positions that created a more stable geophysical state of the planet. Volcanic activity moderated, as did carbon dioxide levels. Another Snowball Earth could happen, however, as the continents, hundreds of millions of years from now, drift back together to form a new supercontinent.
0.9862
FineWeb
This feature allows a remote extension to have a caller record their name, which is played to you before you answer the call.
To turn it ON or OFF, you first need to dial your main phone number to access the system. When you get the system dial-tone, enter one of the following codes.
- Turn ON by dialing 9962, #
- Turn OFF by dialing 9963, #
0.9744
FineWeb
An idea that seems straight out of a Hollywood script could become a reality following a theoretical demonstration by a group of scientists in France. The possibility of an invisibility cloak has a new actor: heat. The idea has been published in the journal Optics Express, in a study in which a team of scientists from the Fresnel Institute in France has shown how to apply the ideas of optical camouflage, much like Harry Potter's cloak, to the thermal world. The concept could be used to steer and move heat away from temperature-sensitive electronics. The research is part of what is known as transformation optics, first proposed in 2006 as a means to an invisibility cloak. So far all approaches have run into the same problem: practical limitations mean that the levels of camouflage achieved remain well below what we've seen in fiction. Recently, similar ideas have been directed at protecting objects from magnetic fields, or even from sound or seismic waves. Most of these approaches rely on manipulating the peaks and troughs of waves in order to achieve the camouflage effect. In the new study, the new actor, heat, behaves differently. According to Sebastien Guenneau of the Fresnel Institute: Heat is not a wave; it simply diffuses from hot to cold regions. The mathematics and physics at play are very different. For example, a wave can travel over long distances with little attenuation, while temperature usually diffuses over small distances. As he explains, the "trick" was to apply the mathematics of transformation optics to the equations describing diffusion. Guenneau and colleagues found that this provides a way to transport heat at will. They proposed a cloak made of 20 rings of material, each with its own "diffusivity", understood as the degree to which it can transmit and dissipate heat. According to Guenneau: We can design a cloak so that heat diffuses around an invisibility region, which is protected from the heat. Or we can force the heat to concentrate in a small volume, which then heats up very quickly. A theory must still be tested in practice, and, as these researchers suggest, this one would provide the ability to direct and concentrate heat. In that case it could be applied in the microelectronics industry, where the heat load in specific areas remains a challenge for engineers.
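The contrast between wave propagation and diffusion is easy to see numerically. The sketch below is not the transformation-optics construction from the Optics Express paper; it is only a crude one-dimensional illustration, with made-up numbers, of how a segment of low thermal diffusivity slows the heat arriving behind it.

```python
import numpy as np

# Minimal 1-D illustration of heat diffusion with spatially varying diffusivity.
# A low-diffusivity segment slows heat entering the region behind it, a crude
# stand-in for the idea of shaping diffusivity to control where heat goes.
n, dx, dt, steps = 100, 1.0, 0.2, 2000
diffusivity = np.full(n, 1.0)
diffusivity[40:45] = 0.01          # the "cloaking" segment (hypothetical values)
temperature = np.zeros(n)
temperature[0] = 100.0             # hot boundary on the left

for _ in range(steps):
    flux = diffusivity[:-1] * np.diff(temperature) / dx   # Fourier-style heat flux
    temperature[1:-1] += dt * np.diff(flux) / dx           # explicit update, stable for dt*D/dx^2 <= 0.5
    temperature[0], temperature[-1] = 100.0, 0.0           # fixed boundary temperatures

print(f"Temperature just behind the low-diffusivity segment: {temperature[50]:.2f}")
```

In the actual proposal, the rings' diffusivities are chosen according to the transformation-optics mathematics so that heat is guided around (or concentrated into) a region, rather than merely delayed as in this toy example.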
0.9481
FineWeb
The Centre for Community Dialogue and Development works with local communities to promote an equal and just society in Kenya's rift valley region. An independent NGO, it is committed to empowering young people to take charge of their futures. It provides a platform for people to share skills and develop their potential, and runs a wide range of projects and programmes, including: - Providing support to orphans and displaced people. - Supporting the delivery of humanitarian aid. - Fostering self-sustaining development across its areas of operation. - Carrying out advocacy related to these issues. - Developing historical justice work. - Promoting sustainable development of the environment. - Establishing information centres for communities to follow Kenyan parliamentary proceedings, with large turnouts frequently present. - Training facilitators to coordinate activities within the information centre and act as situation monitors. - Seeing local groups identify and develop their own plans for community empowerment. - Setting up grassroots committees sourced in different villages around every centre. These have been pivotal in linking different activities, and the centre considers this a key achievement as it has led to work being coordinated across the programme. Follow the CCDD on facebook.
0.9981
FineWeb
Failing to perform studies and implement their recommendations will leave you without the protective equipment necessary to protect your facility from downtime, or in extreme cases, catastrophic damage to property and personnel Even the most sophisticated and well-designed facilities have experienced the effects of electrical system failure or misoperation. Since unplanned outages can cost millions of dollars in lost production, information, and customers, it pays to explore how outages typically occur and to better understand how you can prevent them. Two common scenarios contribute to an unplanned outage. Either short-circuit protective equipment isn't properly adjusted during installation or it isn't properly maintained and re-adjusted as the configuration of the electrical system changes over time. Prior to shipping, circuit breaker manufacturers typically adjust their products to trip at minimum values. While this may be a conservative approach from a safety and protection standpoint, minimum trip values are rarely the best practical settings for operation of a facility. Basically, circuit breaker and relay manufacturers assume installers and facility owners will properly adjust protective devices before they're put into operation. Despite these steps, misoperation of protective devices can and does occur. Facility decision makers sometimes opt not to perform an engineering study to determine the necessary application-specific settings or adjustments to circuit breakers and relays. In other cases, an engineering study is performed, but no one tests the relays or breakers to confirm they're set correctly or perform as intended. Let's discuss the fundamentals you should know regarding short-circuit and coordination study procedures. What causes a short circuit? A short circuit is the undesired and uncontrolled conduction of electrical current from phase to ground, phase to neutral, and/or phase to phase. It always involves unintentional bridging of conductors and often involves the failure or breakdown of insulation. Several scenarios can lead to a short circuit. For example, electricians may have connected temporary grounds or other conductors between phases/neutral and/or ground for safety purposes during installation and testing. If these temporary conductors are unintentionally left connected when the circuit is energized, a short circuit results, producing what's called a “bolted fault.” Experts agree that air is the cheapest and most commonly used form of electrical insulation. If a water leak or some other form of contamination creates a conductive path between phases/neutral and/or ground, the air insulation will break down and produce a short-circuit arc. During the life of the equipment, other insulating materials can break down and fail, also producing an arc and short-circuit current. Workers who take voltage measurements or perform other work on energized equipment can also unintentionally bridge or short-out conductors in the equipment, creating a short circuit (Photos 1 and 2). It's also important to examine how the utility system influences short-circuit current. The amount of impedance between the source and the short-circuit location has a direct effect on the amount of short-circuit current that will flow during a fault. If the utility increases circuit conductor size, replaces the service transformer with a larger unit, or installs a new generating station near the customer, the available short-circuit current will increase. 
If little impedance exists between the source of power and the location of the postulated fault, the resulting short circuit can be very large, possibly more than 100,000A. Why is a short circuit dangerous? A short circuit always involves the flow of uncontrolled current that isn't restrained by the normal load resistance. A short circuit, whether a short-circuit arc or bolted fault, provides a much lower resistance to current flow than typical loads. The resulting overcurrent condition will normally exceed the load current rating of conductors, transformers, and other equipment through which the current flows. This increased flow of current quickly heats the conductors and equipment, since heating is a function of current squared. If the short circuit involves an arc in air, the arc produces intense heat that can exceed 20,000°F. This temperature will vaporize conductors, insulation, and other nearby materials. The byproducts of this process include ionized gas, which is conductive and will perpetuate the arc. As copper vaporizes, it expands by a factor of about 67,000. This rapid expansion will result in near-explosive forces on any nearby equipment or workers. The National Electrical Code, IEEE 1584 Guide for Performing Arc Flash Hazard Calculations, and NFPA 70E Electrical Safety Requirements for Employee Workplaces refer to this phenomenon as arc flash and provide guidelines for protection from, and calculation of, arc flash energy. Because of the intense heat and destruction produced by an uncontrolled electrical arc, it's important to de-energize the circuit as quickly as possible after a short circuit. What is short-circuit protection? A common misconception is that fuses and circuit breakers will prevent short circuits or equipment failure. In reality, these protective devices are reactive and only operate after a failure has initiated. The real job of overcurrent protective devices is to limit the damage and effect of a short circuit. They minimize the damage at the point of failure, minimize or prevent injury, prevent damage to other equipment, and minimize the extent of the resulting power outage. If they're designed and adjusted to act very quickly, only a small amount of damage will occur as a result of the fault energy. If there's excessive short-circuit current, protective devices like fuses and circuit breakers are designed, tested, and rated to interrupt specific maximum levels of this current. If the available short-circuit current exceeds the rating of the protective device, the device is likely to fail catastrophically when it attempts to interrupt a fault. Such a failure would result in downtime and extensive repair, and it could unnecessarily expose personnel to injury (Photos 3 and 4). How do you calculate short-circuit current? Although a short circuit produces uncontrolled flow of current, the resulting current isn't infinite. There are a number of factors that determine the magnitude of fault current. The key factors used to calculate or predict the amount of short-circuit current that will flow include the following:
- Operating voltage, often referred to as electrical pressure
- System impedance or the resistance to current flow
In simple form, the equation for determining short-circuit current is derived from Ohm's Law and is expressed as I=E÷Z, where I is current, E is voltage, and Z is impedance. The voltage used in the calculation is the rated operating voltage of the circuit.
The impedance value used is the sum of all the equipment and conductor impedances from the source(s) of power to the point in the circuit where the short circuit is postulated. Since the voltages, impedances, and resulting currents are vector quantities, these calculations can become very complex. Most engineers now use commercially available software to model the system and perform these calculations to conform with the IEEE Brown Book (ANSI/IEEE 399 Standard, Recommended Practice for Power Systems Analysis). The importance of “coordinating” breakers and fuses. The diagram of a simple electrical system resembles a tree-like configuration. The main power source corresponds to the tree trunk, and the primary feeder circuits and branch circuits correspond to large and small tree branches. To minimize damage and the extent of the power outage, breakers and fuses are located at strategic points in the system — usually at the main power entrance and the start of each primary and branch circuit. If the fault occurs near the end of a branch circuit, the fuse or breaker immediately upstream from that fault should open before any other protective devices do, which would limit the resulting power outage to only the portion of the circuit downstream of the protective device. Similarly, if the fault occurs on a primary feeder, the fuse or breaker for that feeder should open before any other upstream protective devices. Selecting and setting the time-current characteristics of protective devices so they'll operate in this manner is called “coordination.” When the branch breaker and main breaker aren't coordinated, the main breaker will trip when a fault occurs on a small branch circuit, exposing the entire facility to a complete power outage. Conversely, if the branch breaker were coordinated with the upstream breakers and fuses, only the branch breaker immediately upstream of the fault should trip. Circuit breaker, relay, and fuse operating characteristics are graphically represented by time-current curves. These protective devices are typically designed to interrupt the current more quickly for higher current values and slower for lower current values. For example, a bolted fault is interrupted more quickly than an overload. Each protective device has a unique curve or set of curves that manufacturers and engineers use to represent its operating characteristics. These curves are a plot of operating time vs. current level. From these curves, you can tell how long it will take for the protective device to interrupt at any value of current. Although fuse manufacturers offer a variety of fuse types, each with its own curve shape and current rating, fuses are non-adjustable devices. If a different operating characteristic or current rating is needed, you must replace the fuse with a more compatible type. Smaller molded case breakers typically aren't adjustable either and must similarly be replaced if a different operating characteristic or trip value is necessary. Most relays and electronically controlled breakers, however, are designed with considerable flexibility. They offer a wide range of field-adjustable trip settings and operating curves. Breaker curves provided by manufacturers show a band of operating times for any given current. 
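As a rough illustration of the I = E ÷ Z relationship and the impedance summing described above, the short sketch below estimates the bolted-fault current at a bus from the rated voltage and a chain of source, transformer, and cable impedances. The numbers and function names are hypothetical, and a real study would keep resistance and reactance as vector (complex) quantities and follow IEEE 399 methods rather than this scalar shortcut.

```python
import math

def bolted_fault_current(line_to_line_voltage_v, impedances_ohms):
    """Estimate three-phase bolted-fault current (A) at a bus.

    Simplified scalar version of I = E / Z: every impedance between the
    source and the postulated fault is summed as a magnitude, ignoring the
    R + jX vector arithmetic a real study tool would perform.
    """
    total_z = sum(impedances_ohms)                 # series path from source to fault
    phase_voltage = line_to_line_voltage_v / math.sqrt(3)
    return phase_voltage / total_z

# Hypothetical 480 V feeder: utility source, service transformer, cable run
# (all impedances referred to the 480 V side).
z_path = [0.002, 0.010, 0.004]
i_fault = bolted_fault_current(480.0, z_path)
print(f"Estimated bolted-fault current: {i_fault:,.0f} A")
```

Comparing such an estimate against each device's interrupting rating is exactly the check the article describes: any fuse or breaker rated below the available fault current at its location is a candidate for catastrophic failure.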
The width of each breaker curve is a result of the following factors:
- Manufacturing tolerances that produce slight variations for each breaker
- The small but discrete amount of time for a relay or breaker to sense the fault or overload
- Time for the breaker mechanism to move (opening the contacts), and time for the current to extinguish
Consequently, the characteristic curves for breakers are shown with a certain width to indicate the minimum and maximum operating time. Similarly, fuse curves are shown with a distinctive width that's a result of manufacturing tolerances, which produce slight variations for each fuse. In addition, there is a small but discrete amount of time between initial melting of the fuse link(s) and extinction of the current. As explained earlier, it's important to coordinate protective devices by choosing a main fuse or breaker with slower operating characteristics than the feeder breakers. You would also want to select feeder breakers with slower operating characteristics than the branch breakers, and so on. In general, the protective device furthest downstream should have the lowest trip setting (in amperes) and be the one that operates fastest for a given current level. The study engineer will typically plot and overlay the characteristics of each protective device to confirm the sequence in which they'll operate and to confirm that there is adequate margin between the operating times of each. To select the appropriate fuse and breaker/relay settings, it's necessary to perform a short-circuit and coordination analysis for the electrical system. The process begins with a computer model of the system based on a single line diagram. Equipment and conductor impedances, operating voltage, load values, starting currents, equipment ratings, and interrupting characteristics of the protective devices must be included in the model. The short-circuit calculation will identify any interrupting equipment that may be inadequately rated for the available short-circuit current. Using the results of the computer model analysis of the system, it's then possible to choose optimum time-current settings for relays and breakers and plot the results. Engineers use the following general concepts when making these determinations:
- The fault current or overload should always be interrupted by the first protective device upstream — on the source side — of the fault location.
- Normal transformer inrush current and motor starting current should never cause a protective device to operate.
- Overcurrent devices should interrupt the current as quickly as possible after an overload or short-circuit occurs.
After the coordination study is complete and a summary report has been issued, field engineers will use the study results to make the appropriate breaker/relay adjustments and test the breakers/relays to confirm that they operate as intended.
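To make the coordination check concrete, the sketch below compares a downstream (branch) device and an upstream (main) device at a postulated fault current and reports whether the upstream device stays slower by a chosen margin. The curves are reduced to a few interpolation points in linear space purely for illustration; real time-current curves are bands plotted on log-log axes, so an actual check would compare the branch device's maximum clearing time with the main device's minimum operating time.

```python
def operating_time(curve, fault_current_a):
    """Interpolate an operating time (s) from a simplified time-current curve.

    `curve` is a list of (current_A, time_s) points sorted by current.
    Currents above the last point return the last (fastest) time; currents
    below the first point simply extrapolate.
    """
    prev_i, prev_t = curve[0]
    for i, t in curve[1:]:
        if fault_current_a <= i:
            frac = (fault_current_a - prev_i) / (i - prev_i)
            return prev_t + frac * (t - prev_t)   # linear interpolation for simplicity
        prev_i, prev_t = i, t
    return curve[-1][1]

def is_coordinated(branch_curve, main_curve, fault_current_a, margin_s=0.3):
    """True if the upstream (main) device stays slower than the branch device by `margin_s`."""
    t_branch = operating_time(branch_curve, fault_current_a)
    t_main = operating_time(main_curve, fault_current_a)
    return t_main - t_branch >= margin_s

# Hypothetical curves: (current in A, operating time in s)
branch = [(200, 10.0), (1_000, 0.5), (10_000, 0.05)]
main   = [(800, 20.0), (4_000, 2.0), (20_000, 0.4)]
print(is_coordinated(branch, main, fault_current_a=5_000))
```

A study engineer would repeat this comparison across the full range of currents of interest, which is what overlaying the plotted curves accomplishes visually.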
What about special cases? A fact in all fields of study is that rules come with exceptions. Below we explore special cases and how to best handle them.
- What if there is an automatic transfer switch? If the system involves one or more transfer switches, the short-circuit and coordination studies must consider various possible operating parameters, such as source and load configurations, for these switches.
- What if there is an emergency generator? If the system includes one or more emergency generators, it will have a similar number of transfer switches. Each of the possible operating scenarios must then be considered when performing the short-circuit and coordination studies.
- What if the system is supported by a UPS or back-up battery? An uninterruptible power supply requires special attention when performing a short-circuit and coordination study. Since a UPS and battery represent a load during normal operation but a source during utility outage situations, the protective devices must be sized for either condition. Manufacturer recommendations must be considered.
- What if there is co-generation? Co-generation can present special difficulties since available fault current can be high with a generating unit connected directly to the system. Often the addition of a co-gen facility will require that protective equipment be replaced with higher rated equipment.
When weighing the myriad factors that can affect plant availability and production, it's critical not to overlook the pivotal role played by short-circuit and coordination studies. They can prevent unplanned outages, eliminate workforce accidents, and extend the life of equipment. When placed in the context of the overall facility operation, they can make the difference between performance and disaster. Considering the fact that the cost of a short-circuit and coordination study is typically a small fraction of the electrical system cost, it's a wise investment that can pay dividends in the form of increased safety and availability.
Vahlstrom is director, technical services for Electro-Test, Inc. in San Ramon, Calif.
Sidebar: Do You Have the Data for a Study?
To perform a short-circuit and coordination study, you'll need the following information:
- A single line diagram of the electrical system
- Data from the utility, including available fault current, operating voltage, and specifics regarding the utility's protective equipment at the point of service, such as manufacturer, model, time/current settings, or fuse rating
- Specifics for each protective device in the electrical system, including manufacturer, model, available time/current settings, and short-circuit interrupting rating
- Impedance and rating of each transformer
- Conductor specifics, including lengths, sizes, and types of all overhead lines, bus ducts, and cables
Sidebar: Additional Studies
You can perform several other valuable engineering studies to improve the safety, efficiency, and reliability of your electrical distribution system. They can be performed with essentially the same information used for short-circuit and coordination studies.
- Arc-flash evaluation. IEEE 1584 and NFPA-70E provide guidelines for calculating the incident energy produced by an electric arc. An arc-flash calculation will determine the available arc fault exposure at equipment locations within a facility. This information will provide workers with the information they need to select the appropriate level of PPE required to work on any piece of energized equipment.
- N+1 reliability study. A single-point-of-failure or redundancy study can be performed using the information collected for a short-circuit and coordination study. N+1 refers to a normal plus one redundant path for supplying critical loads. In some facilities, the electrical system has been designed to N+2 criteria. This provides for the continued supply by the third path when one path is out for maintenance and a second path fails.
- Probabilistic risk assessment (PRA) reliability study. IEEE 493 provides guidance and data for performing a risk assessment for electrical systems.
The standard provides data collected from industrial facilities, including probability of failure and mean time to repair for typical electrical equipment, generators, and utility feeders. Using these methods and data, it's possible to analyze an electrical system and calculate the probability of failure and predicted average annual downtime. If actual data is available for the facility involved, this data can be used in the calculations and more accurately predict number and length of outages per year. PRA and its resultant data can be especially helpful in comparing the predicted reliability or availability that would result from alternate electrical system designs. - Voltage profile. Using the model prepared for performing short-circuit and coordination studies, the study engineer can also calculate the probable band of voltages within which the facility will operate. These calculations consider the impedances of the system and operation of various loads at the facility. - Load flow analysis. Using load data and the information collected to perform a short-circuit and coordination study, a load flow analysis can be created that will calculate load currents and voltage levels for various operating conditions. This can be especially helpful in determining tap settings for transformers and when considering or evaluating voltage correction alternatives. - Motor starting analysis. Short-circuit and coordination information can also be helpful in calculating system voltages that will occur during motor starting conditions. As discussed above, such an assessment can be especially helpful in determining tap settings for transformers and when considering or evaluating voltage correction alternatives. - Harmonic current and voltage assessment. The information gathered for a short-circuit and coordination study can also be used to perform a harmonic current and voltage assessment of the electrical system. The results can help evaluate alternative corrective actions such as installation of filters or re-configuration of circuits. - Power factor correction. Poor power factor can result in costly rate penalties imposed by the utility. It can also contribute to overload and overheating of conductors and transformers. The information gathered for a short-circuit and coordination study can be used to assess alternative locations and sizes of any needed power factor correction equipment.
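As a small worked example of the last item above, the kvar of capacitance needed to correct a plant's displacement power factor follows from the real power and the existing and target power factors. This is the generic textbook relationship, not a sizing method prescribed by the article, and the load figures are hypothetical.

```python
import math

def correction_kvar(real_power_kw, pf_existing, pf_target):
    """Capacitive kvar needed to raise displacement power factor.

    Uses Q = P * (tan(acos(pf_existing)) - tan(acos(pf_target))).
    """
    angle_existing = math.acos(pf_existing)
    angle_target = math.acos(pf_target)
    return real_power_kw * (math.tan(angle_existing) - math.tan(angle_target))

# Hypothetical plant load: 800 kW at 0.78 PF, corrected to 0.95 PF
print(f"{correction_kvar(800, 0.78, 0.95):.0f} kvar of correction required")
```

The load flow and voltage profile results from the same study model would then help decide where along the system that correction is best connected.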
0.9852
FineWeb
Thank you for visiting our T.H.I.S recess information page! We cannot overlook the importance of providing physical activities for our children throughout the school day. Below you will find our T.H.I.S. recess rules, rewards, consequences, resources and general information. You can also find out more information on discipline by reviewing our handbook.
- Students have one recess per day and it lasts approximately 25 minutes. We do participate in a "recess before lunch" option as recommended by the national Wellness model.
- Recess aides supervise the children while outside and while eating in the cafeteria. The principal alternates between the playground and lunchroom whenever possible.
- A discipline referral (see below) is used for minor and major discipline offenses. When students make poor choices but do not require a referral, they may be asked to take "time out" by sitting on the outdoor bench or "quiet table" in the lunchroom.
Our School Rules:
- T = Tolerance (Am I showing respect to everyone and being tolerant?)
- H = Honesty (Am I telling the truth?)
- I = Independence (Am I using my own mind and making my own decisions instead of following others?)
- S = Safety (Am I being safe and not harming others?)
- Golden Tickets are issued to students who follow the rules above. At least five students/day receive Golden Tickets! Golden Tickets can be redeemed in the front office for a variety of prizes in the treasure box.
- "Caught You Being Good" recognition is shared with classrooms and individuals for showing positive behaviors.
Please see the sample of our Office Referral Form below:
Office Referral Form-DRAFT-subject to change
Comments: _____________________________________________________________
________________________________________________________________________
Parent Signature: _________________________ Date: __________
_____ Required _____ Not Required
Parent contact / Referral sent home / Phone call / Conference required
**All major problems require consequence, parent contact, and signature.
END OF OFFICE REFERRAL FORM
It takes all of us working together to keep our playgrounds safe and we appreciate your support. For additional information about proper playground behavior and promoting character education for our youth, feel free to visit the following website resources:
0.8516
FineWeb
May 26, 2021 Each day, we at AAPI Youth Rising are raising our voices and taking actions to create positive change in the world. Just some of the work we’ve done this month includes: - Making impactful artwork to spread awareness of key issues in our community. - Rallying our community members, schools, and teachers to pledge to teach diverse histories + stories. - Registering people to vote. - Speaking out about the issues affecting us as youth. Consider taking these actions as well to make a difference!
0.9231
FineWeb
This category contains levels in which Snowmen are among the enemies. All items (32) - (11th+, Strax+ req.) The Great Detective - (12th+) Expert Adipose - (L1) Annas playground (No black, pink gems) - (L25) (Time Attack) Fan Christmas Invasion - Second in Command - Snowman endurance - Snowmen in the rain - Snowmen Revisited - Sontaran Disturbance: The Snowmen - The Allies of the Great Intelligence - The Eleventh Doctors Sonic Screwdriver - The final piece - The return of the Snowmen! - The Snowmen: Backstreets of London - The Snowmen: The Great Intelligence - The Snowmen: The Latimer Residence - The snowmen: The snowman - Time attack: 1892 - Time Attack: The allies of the Great Intelligence - Time Splinters
0.9511
FineWeb
- Analyze your expenses to identify areas of potential savings. - Embrace technology to automate tasks, reduce manual labor, and increase efficiency. - Outsource non-core functions to save on labor costs and benefit from specialized expertise. - Negotiate with suppliers for better deals or explore other options for lower prices. - Have a disaster management plan to prepare for unexpected events and minimize financial losses. As a business owner, one of your primary objectives is to maximize profits while minimizing costs. Reducing expenses is essential to running a successful business, but it can be challenging to identify where to start. This guide outlines five tips that can help you reduce costs in your business. 1. Analyze Your Expenses The first step to reducing costs in your business is to analyze your expenses. Go through your financial records to identify where most of your money goes. You may be surprised to find that you are overspending in some areas. Once you have identified the areas where you are overspending, you can cut back. For example, you may spend too much on office supplies or utilities. You can reduce these expenses by finding cheaper suppliers or negotiating better rates. You may also discover that some unnecessary expenses can be eliminated. 2. Embrace Technology Embracing technology is another way to reduce costs in your business. Technology can help automate some tasks, reduce the need for manual labor, and increase efficiency. For example, you can use accounting software to manage your finances, reducing the need for an accountant. You can also use online marketing tools to promote your business, reducing the need for expensive advertising campaigns. Additionally, technology can help you streamline your operations, making tracking inventory, managing orders, and communicating with customers easier. By embracing technology, you can save time and money while improving your business processes. 3. Outsource Non-core Functions Outsourcing non-core functions is another way to reduce costs in your business. Non-core functions are essential tasks that do not directly contribute to the growth of your business. For example, accounting, payroll, and human resources are necessary functions but do not generate revenue. By outsourcing these functions, you can reduce your labor costs, as you do not need to hire full-time employees to perform them. You can also benefit from the expertise of professionals who specialize in these areas, which can improve the quality of your services. 4. Negotiate With Suppliers Negotiating with suppliers is another effective way to reduce costs in your business. Many suppliers are willing to offer discounts or better rates to their long-term customers. You can try negotiating a better deal if you have used the same supplier for a long time. You can also explore other suppliers to compare prices and quality. By doing this, you may find better deals, which can help you reduce your costs while maintaining the quality of your products or services. 5. Have a Disaster Management Plan A disaster management plan is critical for any business, as it can help you minimize the impact of unexpected events. Disasters can cause significant financial losses, which can be difficult to recover. By having a disaster management plan, you can prepare for emergencies and reduce the risk of financial loss. 
Here are the aspects that should be in your disaster management plan: Water Damage Management Investing in commercial water restoration can be an invaluable resource for managing water damage in your business. A commercial water restoration team can help you limit the damage caused by flooding or other water issues in your business. They can provide prompt and professional service to reduce the cost of repairs and minimize downtime. Fire Safety Plan Ensure your staff is trained in detecting a fire, such as smoke alarms or sprinkler systems. Ensure they know when to evacuate the building safely and what routes to take in a fire emergency. Develop evacuation routes, designate a safe meeting location outside the building, and provide everyone with flashlights and other tools for navigating the dark. Develop a plan to protect your business from attacks on its IT systems and networks. Establish secure passwords, create a plan for responding to data breaches, implement antivirus software, and regularly back up data to restore it in an attack or other emergency. Business Continuity Plan Ensure you have a plan to keep your business running during an emergency. This includes inventorying all necessary supplies, having backup power sources, creating alternative communication strategies for staying in touch with customers and vendors, and ensuring all employees know the disaster management policy. Additionally, it would be best if you established procedures for dealing with customer refunds or financial losses. Reducing costs in your business requires careful planning and analysis. Following the tips outlined in this guide, you can identify areas where you are overspending, embrace technology, outsource non-core functions, negotiate with suppliers, and have a disaster management plan. By implementing these strategies, you can reduce your costs, increase efficiency, and improve the profitability of your business.
0.8458
FineWeb
It is universally suggested that cancer is suspected of being environmentally induced. The economic risk associated with exposure to suspected environmental carcinogens is not yet analyzed. Although several researchers have estimated the health damages of certain pollutants, an appropriate methodology to estimate the economic damages of suspected carcinogens is not yet developed. The theoretical model developed in this dissertation analyzes the economic risk of exposure to suspected environmental carcinogens. The important conclusion drawn from the theoretical model is that exposure to environmental carcinogens at earlier stages of life should be optimally avoided. The theoretical model provides a basis for the empirical estimation of marginal increases in the risk of cancer incidence attributable to environmental factors. The risk of cancer associated with suspected carcinogens is estimated according to a cross-sectional statistical study. This study was conducted to examine the correlation between cancer mortality in 60 selected U.S. cities and a host of carefully chosen environmental factors. The statistical model is unique and innovative. It includes many relevant independent variables to avoid spurious relationships; since it is evident that suspected environmental carcinogens are numerous. The independent variables considered fall into 3 broad categories -- socioeconomic, eating habits and life style, and environmental quality variables. The model is also lagged to establish a reliable relationship between cancer and environmental factors since it is evident that cancer has a relatively long latency period. The atmospheric concentration of suspended particulates and ammonium as well as beef, pork, and cigarette consumption are found to be significantly correlated with several cancer categories. Since tobacco smoke contains nitrosamines and pork products contain nitrates -- precursors to nitrosamines perhaps the carcinogenic effect of nitrosamines is reflected in pork and cigarette consumption. Economic damages from pork and cigarette consumption are found to exceed consumer expenditures on these items. The economic damages of environmental factors are assessed as the product of the risk of cancer associated with environmental factors and the value of increased risk of death, rather than lost earnings. The value of increased risk of death is estimated from market observations made by Thaler and Rosen. The following actions are recommended: 1) The public should be warned about the suspected health hazards of nitrate-containing food items. 2) The high level of correlation between cigarette consumption and various cancer categories should be further emphasized. 3) A serious attempt should be made to identify suspected carcinogenic agents and reduce their ambient level or remove them from the environment if such action is economically optimal. Level of Degree Department of Economics First Committee Member (Chair) William Dietrich Schulze Second Committee Member Shaul Ben David Third Committee Member Allen V. Kneese Pazand, Reza. "Environmental Carcinogenesis: An Economic Analysis Of Risk." (1976). https://digitalrepository.unm.edu/econ_etds/118
0.5829
FineWeb
US 20050188220 A1 The present invention relates to an arrangement (and a method) for protection of end user personal profile data in a communication system comprising a number of end user stations and a number of service/information/content providers or holding means holding end user personal profile data. It comprises an intermediate proxy server supporting a first communication protocol for end user station communication and comprising means for providing published certificates, a personal profile data protection server supporting a second communication protocol for communication with the intermediary proxy server and a third communication protocol for communication with a service/information/content provider, and an application programming interface (API) allowing service/information/content provider queries/interactions, and comprising storing means for storing of end user specific data and end user personal profile data. The intermediary proxy server comprises means for verifying the genuinity of a certificate requested over said second communication protocol from the personal profile protection server against a published certificate and the service/information/content server can request, via the API, personal profile data and personal profile data is delivered according to end user preferences or in such a manner that there is no association between the actual end user and the personal profile data of the end user. 1. An arrangement for protection of end user personal profile data in a communication system including a number of end user stations and a number of service/information/content providers or holding means holding end user personal profile data, comprising: an intermediate proxy server supporting a first communication protocol for end user station communication; means for providing published certificates; a personal profile data protection server supporting a second communication protocol for communication with the intermediary proxy server and a third communication protocol for communication with one of said service/information/content providers, said personal profile data protection server further comprises an application programming interface (API) allowing service/information/content provider queries/interactions, and storing means for storing of end user specific data and end user personal profile data; and wherein the intermediary proxy server further comprises means for verifying the genuinity of a certificate requested over said second communication protocol from the personal profile protection server against a published certificate and in that the service/information content server can request, via the API, personal profile data and in that personal profile data is delivered according to end user preferences or in such a manner that there is no association between the actual end user and the personal profile data of the end user. 2. An arrangement according to 5. An arrangement according to 7. An arrangement according to 8. An arrangement according to 9. An arrangement according to 11. An arrangement according to 12. An arrangement according to 13. An arrangement according to 14. An arrangement according to 15. An arrangement according to 16. An arrangement according to 17. An arrangement according to 20. An arrangement according to 21. An arrangement according to 22. An arrangement according to 23. 
A method for protection of end user personal profile data in a communication system with a number of end user stations and a number of service/information/content providers, comprising the steps of: registering a certificate for an end user personal profile protection server with a trusted third party, providing a request for the certificate from an intermediary proxy server in communication with an end user station using a first communication protocol, to the protection server over a second communication protocol, providing a response from the protection server to the intermediary server, verifying, in the intermediary proxy server that the certificate is genuine, thereby belonging to the respective protection server and is registered with the trusted third party, after confirmation that the protection server/certificate is genuine, allowing the service provider having acquired the protection server to retrieve end user data and personal profile data according to policy setting and end user privacy level over an Application Programming Interface and a third communication protocol. 24. The method according to 25. The method according to 29. The method according to 30. The method according to providing an API at the protection server, using the API for queries to the protection servers from the service provider, providing responses over a third communication protocol to the service provider. 31. The method of 32. The method of The present invention relates to an arrangement and a method respectively for protection of end user data, more generally of end user personal profile data in a communication system comprising a number of end user stations and a number of service/information/content providers. End user personal profile data tends to get more and more spread out at different locations e.g. on Internet. With the fast development of global data communication networks, it gets possible to distribute data both via fixed and via wireless applications. Data will also be pushed out to an even higher extent than hitherto, e.g. from companies to end users, other companies etc. Internet end users, mobile as well as non-mobile, have to rely on and trust service providers. The service providers, in turn, require that the end users provide a lot of personal information in order to be able to serve the end users properly, and possibly for other reasons. However, the personal information can easily be misused, consciously or unconsciously, but still very little is done to protect the privacy rights of the end users. This is a serious problem. This will also have as a consequence that fewer end users sign up to, or take advantage of, all services that could be useful for them, which also is disadvantageous. The need for means to protect privacy therefore increases. For the individual end user it is exceedingly important that his personal information can be protected from uncontrolled distribution among service providers, other end users, companies etc. At the same time as, for example, the number of services that can be provided to end users, over for example Internet, increases, it becomes more and more interesting for service and information providers to be able to obtain detailed information about users. This may be in conflict with the security (e.g. privacy) aspect for the end users, as well as it of course also may be attractive for the end users, since they can also take advantage of personal information being spread out, and thereby obtain other useful or desired information etc. 
For statistical purposes it is interesting for e.g. companies to get information in order to become familiar with the needs for services, products etc. An end user may today have stored personal profile data of different kinds, at different locations, which contains various kinds of information about the user, such as name, address, particular habits, hobbies, accounts, financial situation etc. Thus, it is exceedingly important for the service/content providers to know the characteristics of existing and potential customers to allow for targeted advertising etc., at the same time as it is also exceedingly important for the end user to be able to properly protect the personal profile data. Thus there is an inherent conflict between different interests. Therefore laws and regulations have been created in an increasing number of countries, such as for example within the European Union, to restrict the accessibility to privacy information. Such laws and regulations often vary from one country to another, but generally they have in common that the consumer or the end user should have control over his or her profile, including conditions for its release. Solutions have been suggested for systems for protecting user personal profile data acting as a kind of a safe or functioning as a profile repository. The profiles can, by replacement of the user identity, for example the mobile phone number, through a code, be stored such that there will be no connection to the user identity, throughout the network. Such a repository or storing means for user profiles can be arranged at different nodes within the network. One example relates to a profile holding means provided between a portal and an advertising node. It is then supposed that the personal profile has been transferred to the advertising node, with the user identity in the form of a mobile phone number (MSISDN) replaced by a code, which is totally unrelated to the phone number. The procedure will then be that the portal requests an advertisement for a user, e.g. with a phone number. The profile holding means then forwards the request to the advertising node with the mobile phone number converted into a corresponding code. The advertising node subsequently returns the advertisement to the personal profile holding means, which subsequently returns the advertisement to the portal. Such a system is for example known under the trademark Respect™ which is an e-business platform enabling privacy control, identity management and instant personalization for on-line transactions. The profile holding means is then represented by the Respect™ server which is a virtual infrastructure located at the mobile Internet provider. However, there are several problems associated with systems as described above. One main issue is the transactional capacity of the profile protecting means. Normally the number of users that can be handled is limited, which results in serious problems for real time applications. With reference to the example given above, advertisements have to be served when an end user actually visits a particular page, or accesses a particular service, and many operations are time-critical. The time criticality is particularly important in wireless environments. It is certain that complete protection of end user personal profile data can never be guaranteed, any solution can in principle be cracked by a malicious partly, but the suggestions made so far leave a lot to desire. 
It is therefore an object of the present invention to provide an arrangement and a method respectively through which end user personal (profile) data can be protected to a high extent, particularly as much as required by most end users still wanting to make use of, and take advantage of, available services. It is also an object of the invention to provide an arrangement that makes it possible for an end user to trust a service provider to such an extent that the service provider is allowed to use personal data e.g. for statistical and other purposes while still providing the end user with the satisfaction that the data hardly can be abused of. Further yet it is an object to provide a solution through which end user data can be provided by the end user to such an extent that also the service provider can use the data to an extent so as to be able to optimally serve the end user. It is particularly an object to provide a solution through which an agreement can be established between end user and service provider which is very difficult to break. It is a general and main object of the invention to provide an arrangement and a method respectively which make abuse of personal data extremely difficult and unlikely to happen and such that the end user can feel confident when giving away personal data. Therefore an arrangement and a method having the features of the independent claims are suggested. Advantageous implementations are given by the appended sub-claims. The invention will in the following be more thoroughly described, in a non-limiting manner, and with reference to the accompanying drawings, in which: In one implementation a certificate of the protection server 4 is registered at a trusted third-party, such as the operator having sold it and protection server certificates are somehow made available to the intermediary proxy server 2. The task of the intermediary proxy server is to verify the genuinity of a protection server 4 for example through requesting a certificate and, in a particular implementation, signed content from the protection server 4 over the second communication protocol and comparing it with published certificates stored in certificate storing means 3. It should be clear that the verification of the genuinity (e.g. authenticity) of the protection server can also be done in other manners by the intermediary proxy server. In one implementation the end user preferences are held in the intermediary proxy server 2. However, in an alternative implementation the user preferences are held at the end user station. Still further the end user preferences may be agreed upon with the user klicking through them. After the negotiation they can be cached or stored such that the agreement can be handled quicker at a subsequent time. No change wanted may for example mean OK. In general the protection server should provide an API giving the service provider the possibility to change the policies of sites and pages taking the level of privacy into consideration, such that if for example the level of privacy is raised, the affected data should be deleted etc. Furthermore the protection server 4 must provide responses upon request to the intermediary proxy server 2, e.g. as far as certificates, possibly signatures etc. are concerned. Furthermore it should provide responses to requests for agreements relating to policy files and/or natural language statements to the intermediary proxy server 2. 
Still further it provides a query API to which questions can be asked by the service provider according to the policy settings. The protection proxy server 4A has an SQL API allowing questions to be asked to the database(s) 5A1,5A2,5A3 from the service provider (application) 6A. (It should be clear that SQL merely constitutes one example among others, e.g. LDAP (Lightweight Directory Access Protocol).) It is supposed that the intermediary proxy server 2A requests a certificate and signed content from the protection proxy server 4A over an IPSec connection (or some other connection), verifies that the certificate belongs to a protection proxy server with the trusted third-party, by comparing the requested certificate with the published certificates available from certificate holding means 3A, which may be actual holding means, or over Internet or in any other manner. It is actually not necessary to implement any handling of certificates; a list of protection servers may also be available over Internet, for example. It is also supposed that, in this implementation, the intermediary proxy server 2A performs a P3P (Platform for Privacy Preferences Project) agreement, which specifies a protocol that provides an automated way for users to gain control over the use of personal data on visited web-sites. The invention covers security communication agreements in general, e.g. P3P, natural language agreements etc. used within the field of privacy. According to that, web-sites are enabled to express their privacy practices in a machine readable XML (Extensible Markup Language) format that can be automatically retrieved and compared with an end user's privacy preferences. This makes it possible for an end user to make a decision as to whether or not to submit a piece of personal information to a particular web-site. As referred to above, the user's preferences may be in the intermediary proxy server 2A or in the end user device PC 1A or agreed upon as the end user clicks them through. Storing or caching may be implemented or not as also discussed above. After performing the P3P agreement, if the genuinity of the protection server etc. has been established, the actual web-page may be requested with the full or acceptable profile of the user. Actually also personal data such as name, address etc. can be sent since the protection server can be trusted to handle the data correctly and in a manner acceptable to the end user. As referred to above the protection server 4A provides an API giving the service provider the possibility to change the policies of the sites and pages and if the level of privacy is raised, the affected data should be deleted. In addition to responding to requests for certificates and signatures, the protection server 4A responds to requests for P3P reference and policy files and/or natural language statements. According to the policy settings, the service provider may then ask questions over the SQL API to the protection server, for example relating to user specific data such as name, address, purchased items etc., which then can be retrieved, since the protection server is trustworthy. It may also be possible to retrieve profile information, in particular implementations with history information. Further yet the service provider may retrieve statistical data, however, in such a manner that a specific end user cannot be tracked.
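The patent specifies the behavior of this query interface rather than an implementation, but a minimal sketch of a policy-gated handler might look like the following; the policy levels, field names, and storage layout are all invented for illustration.

```python
# Hypothetical policy-gated query handler for a profile-protection server.
PROFILE_STORE = {
    # pseudonym -> profile data (no real identity is kept alongside it)
    "a3f9": {"age_band": "30-39", "interests": ["cycling"], "purchases": 7},
    "b7c2": {"age_band": "20-29", "interests": ["music"], "purchases": 2},
}

POLICY = {
    # data category -> most permissive release allowed by the agreed policy
    "interests": "statistics_only",
    "purchases": "statistics_only",
    "age_band": "per_user",
}

def query(category, requested_level):
    """Answer a service-provider question according to the policy settings."""
    allowed = POLICY.get(category)
    if allowed is None:
        raise PermissionError(f"No policy registered for '{category}'")
    if allowed == "statistics_only" or requested_level == "statistics_only":
        # Aggregate across all pseudonymous profiles; no single user is exposed.
        values = [profile[category] for profile in PROFILE_STORE.values()]
        return {"count": len(values)}
    # Per-user release is keyed by pseudonym only, never by the real identity.
    return {pseud: profile[category] for pseud, profile in PROFILE_STORE.items()}

print(query("purchases", "statistics_only"))   # aggregate answer only
print(query("age_band", "per_user"))           # keyed by pseudonym
```

The point of the sketch is simply that the release decision is taken inside the protection server, against the stored policy, before any data leaves it.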
In a particular implementation statistical information and profile information is pseudonymized and anonymized in an appropriate manner, e.g. it may be stored and retrieved using a one-way hash function to ensure privacy and security also in case the protection server actually is broken into or similar. Particularly the protection server requests the certificate and the signature from the service provider 6A. The protection proxy server 4A may pseudonymize a request (over HTTP) over the URL (Uniform Resource Locator) of the service provider. A new pseudonym (e.g. a counter) has to be used for each new URL that is requested. The data that the policy file claims to use must be sent along with the request. Particularly the protection server assures that personal data is not passed on in such a way that the profile information can be tied to the user. If for example a page wants to store some kind of user specific data, the user identity provided with the request is used to store the information in the protection server. When information is to be retrieved, however, it is important that the request comes from a page where profile information was not retrieved, in order to ensure security (the desired degree of privacy according to the policy). In another implementation it is supposed that P3P is not implemented. Then only steps III, IV are used. In still another implementation it is supposed that the certificate verification is omitted, actually relying on the protection server being "genuine". In that case only steps I, II and IV are implemented, and still supposing that P3P is implemented. Finally the user agent may be unaware of the protection server and P3P and thus sends a request to the application. In particular this is a request with user data. (Simple requests from the user agent i.e. without user data are illustrated in The U.S. patent application referred to generally relates to a method for contacting an origin server from a user, by generating a minimal user profile for the user, which profile contains user designated CPI (Capabilities and Preferences Information). (CPI is represented through a profile and determines how far and to what extent to communicate profile information to other web sites). It should be noted that the user agent and the intermediary proxy server both can be at the operator's environment, i.e. a combined entity, but this is not necessarily the case. The protection server with its logic is then responsible for storing data according to agreement, or according to the policy, in the database(s) inside the protection server, or associated with the protection server. This is done in an anonymized and pseudonymized manner. The anonymized, pseudonymized HTTP request is also forwarded to the application, e.g. containing a sequence number or anything that makes it "identifiable". SQL requests for data may then be sent from the application to the protection server (storing means), and responses are provided according to the policy. Finally an HTTP response is provided to the protection server (logic part), which forwards it to the user agent via the intermediary proxy server.
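The one-way hash function mentioned above can be illustrated with a short sketch: a keyed hash of the user identity (for example an MSISDN) yields a stable pseudonym under which profile data can be stored and retrieved, while the stored records cannot be mapped back to the phone number by anyone who lacks the server's secret. The secret, identifiers, and field names are invented for illustration.

```python
import hashlib
import hmac

# Secret kept inside the protection server; never shared with service providers.
SERVER_SECRET = b"hypothetical-protection-server-secret"

def pseudonymize(user_identity: str) -> str:
    """Derive a stable pseudonym from a user identity with a keyed one-way hash."""
    digest = hmac.new(SERVER_SECRET, user_identity.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for readability

# Profile data is stored under the pseudonym, not under the MSISDN itself.
store = {}
pseudo = pseudonymize("+46701234567")
store[pseudo] = {"interests": ["travel"], "last_purchase": "book"}

print(pseudo)           # the same identity always maps to the same pseudonym
print(store[pseudo])    # retrievable only by recomputing the pseudonym
```

A per-URL pseudonym, as the text describes, could be derived the same way by hashing the identity together with the requested URL or a running counter.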
The protection server then returns the P3P policy, an indication of the protection server and a certificate to the user agent, 103. Although in this implementation no certificate verification is illustrated, a step might here be included according to which the user agent requests that the intermediary proxy server provides for a verification of the certificate or more generally of the protection server, e.g. as explained earlier in this document, which then returns a response to the user agent. With, or without, verification of the certificate, user data is then sent in the header encrypted by means of the certificate from the user agent to the protection server, 104. The protection server (logic) then provides for appropriate storing in the protection server storing means according to the policy, anonymized and pseudonymized, 105. An anonymized and pseudonymized HTTP request is also sent to the application, 106. SQL requests can then be sent from the application to the protection server, or to the storing means thereof, which then responds according to the policy, 107. Finally a response with the file is sent from the application, via the protection server etc. to the user agent, 108. The invention is of course not limited to the explicitly illustrated embodiments, but it can be varied in a number of ways within the scope of the appended claims.
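The anonymized, pseudonymized storage described above — user data keyed by a one-way hash, a fresh pseudonym for each requested URL — can be illustrated with a short sketch. This is only a minimal illustration of the idea, not the patented implementation: the use of HMAC-SHA-256, the server-side secret and the function names are assumptions made for the example.

```python
import hashlib
import hmac

# Secret known only to the protection server; assumed for the sketch.
SERVER_SECRET = b"protection-server-secret"

_url_counters: dict[str, int] = {}  # per-URL counters used as fresh pseudonyms


def storage_key_for_user(user_id: str) -> str:
    """Keyed one-way hash (HMAC-SHA-256) of the user identity.
    The same user always maps to the same storage key, but without the
    server secret the key cannot be tied back to the user."""
    return hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256).hexdigest()


def pseudonym_for_url(url: str) -> str:
    """Issue a new pseudonym (here simply a counter) for each requested URL."""
    _url_counters[url] = _url_counters.get(url, 0) + 1
    return f"{hashlib.sha256(url.encode()).hexdigest()[:8]}-{_url_counters[url]}"


# Example: profile data is stored under the hashed key, while only the
# per-URL pseudonym travels with the forwarded request.
print(storage_key_for_user("alice@example.com"))
print(pseudonym_for_url("https://service.example/page"))
```

In this sketch the keyed hash plays the role of the one-way hash function mentioned above and the counter plays the role of the per-URL pseudonym; a real protection server would additionally have to enforce the policy rules about which pages may read which data.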
0.5495
FineWeb
In this powerful 4-hour seminar, Andrew offers an in-depth experience in love, sexuality and relationships. You will learn how to: - Nurture a more authentic connection with your friends, family & intimate partners - Understand why you repeat unsuccessful patterns in relationships - Discern what is true for you in each moment rather than being caught in the past - Become a feeling person, rather than an emotional one - Step into your power by integrating your mature feminine and masculine - Understand the flow of sexual energy during sacred union - Use the Relationship Map process to more easily identify your relationship problems and find solutions to them - Understand the three phases of relationship and how to evolve through them - Discover the five main problem areas in relationships and what to do about them - Understand the importance of emotional wisdom in relationships To every relationship and situation in our life we bring our past experiences and the meanings we have attached to them. These visible and invisible histories can influence, if not govern, how we relate to our friends, family and intimate partners. In order to relate authentically we need to bring conscious awareness to our old patterns of fear and learn how they can influence our behaviour in the present. This beautiful and transformative seminar can support you to: - Heal your emotional wounds - Have healthy and fulfilling relationships - Learn about masculine/feminine dynamics to create harmony - Develop valuable skills in the way you relate to yourself and others - Discover what triggers your fear responses in relationships and how to respond rather than react to situations - Deepen your intimacy in sexual relationships - Feel more confident and accepting of who you are, exactly as you are Andrew offers an in-depth understanding of and solutions to the most common problems that drain love from our relationships, bringing new awareness and skills to conscious communication, boundary setting, love, sexuality, and relationships.
0.9147
FineWeb
- I'm back in, with a revision. Instead of staying POP, how about if I stay with healthy eating? Then, my other 3, which are exercise of some sort every day except Sunday, a vitamin daily, and drinking the water. - how about if you revise yours to something more do-able? The group just isn't the same w/o you. So what are you gals rewarding yourselves with if/when you make it? Not sure what I'll be doing about that, yet. I know I was pretty proud of myself when I earned the book and the patio stuff. Two outta six is pathetic, but at least I'm trying!
0.6838
FineWeb
At some point in their lives, women must seriously consider anti-aging therapy to erase their wrinkles and other signs of aging. Thankfully, with the right methods, it is indeed possible to slow down the appearance of aging and retain your youthful looks for many more years. Here are some excellent ideas for better skin health: - Shield your skin from sun exposure: Protect your skin from sunlight even if you will be out in the sun just for a short while. UV rays in sunlight can adversely affect the texture of your skin even in winter. Sun damage can show up on the skin in the form of dullness, uneven skin tone, age spots and premature formation of wrinkles. - Exfoliate your skin regularly: Dead skin cells can diminish your youthful radiance and make your skin look dull and tired. Exfoliation helps eliminate those dead cells to reveal newer, smoother skin beneath. Exfoliating products come in different forms. While some simply require you to wash your face with them, others need to be rubbed into the skin for best results. Exfoliate the skin at least once every week to maintain your youthful looks. - Ensure a balanced diet: You need to switch to a balanced, healthy diet to enhance the wrinkle removal results of your age-defying formulation. Include fruits and vegetables of different colors in your menu. Avoid junk food and opt for salads, fruit juices or smoothies instead. Fish such as salmon and mackerel are rich in omega-3 fatty acids that work wonders in maintaining the youthful appearance of your skin. Increase your intake of water to as much as 10 glasses per day to stay hydrated. You need to follow a regular skincare regime and pay close attention to what you apply on your facial skin. Look out for products featuring naturally sourced ingredients to ensure optimal results without allergic reactions.
0.8051
FineWeb
Chapter 8: Creativity and Innovation Chapter 8 is structured around two key concepts - creativity and innovation. Creativity is defined as the iterative process of bringing imagination to reality, whereas innovation is defined as creativity with a cause (or creativity that benefits others). The chapter explores a number of classroom practices that foster creativity, including: original work, authentic tasks, abstract thinking, workflow flexibility, and handoff. Moreover, through the example of the Apps for Good project, Neebe and Roberts lay out the process of building innovation into the curriculum through design thinking. Reading Power Up with a PLC or faculty book group? Download a PDF of the complete study guide for free from Stenhouse. Tools for Creating - Six word memoirs Video - The Story of The Atom (Made with Docs Story Builder) Video - Scarlet Letter iBook - Stanford dSchool Design Thinking Crashcourse - Resources Compiled by Psychology Today
0.9895
FineWeb
Formats and tools - Unit Description - Reconstruct the unit from the xml and display it as an HTML page. - Assessment Tool - an assessor resource that builds a framework for writing an assessment tool - Assessment Template - generate a spreadsheet for marking this unit in a classroom environment. Put student names in the top row and check them off as they demonstrate competence for each of the unit's elements and performance criteria. - Assessment Matrix - a slightly different format than the assessment template. A spreadsheet with unit names, elements and performance criteria in separate columns. Put assessment names in column headings to track which performance criteria each one covers. Good for ensuring that you've covered every one of the performance criteria with your assessment instrument (all assessment tools together). - Wiki Markup - mark up the unit in wiki markup codes, ready to copy and paste into a wiki page. The output will work in most wikis but is designed to work particularly well as a Wikiversity learning project. - Evidence Guide - create an evidence guide for workplace assessment and RPL applicants - Competency Mapping Template - Unit of Competency Mapping – Information for Teachers/Assessors – Information for Learners. A template for developing assessments for a unit, which will help you to create valid, fair and reliable assessments for the unit, ready to give to trainers and students - Observation Checklist - create an observation checklist for workplace assessment and RPL applicants. This is similar to the evidence guide above, but a little shorter and friendlier on your printer. You will also need to create a separate Assessor Marking Guide for guidelines on gathering evidence and a list of key points for each activity observed using the unit's range statement, required skills and evidence required (see the unit's html page for details) - Self Assessment Survey - A form for students to assess their current skill levels against each of the unit's performance criteria. Cut and paste into a web document or print and distribute in hard copy. - Moodle Outcomes - Create a CSV file of the unit's performance criteria to import into a Moodle course as outcomes, ready to associate with each of your assignments (a minimal CSV-generation sketch is included after the performance criteria below). Here's a quick 'how to' for importing these into Moodle 2.x - Registered Training Organisations - Trying to find someone to train or assess you? This link lists all the RTOs that are currently registered to deliver CPPSIS3010A, 'Perform basic spatial computations'. - Google Links - links to google searches, with filtering in place to maximise the usefulness of the returned results - Reference books for 'Perform basic spatial computations' on fishpond.com.au. This online store has a huge range of books, pretty reasonable prices, free delivery in Australia *and* they give a small commission to ntisthis.com for every purchase, so go nuts :) Elements and Performance Criteria 1. Prepare to perform basic traverse computations. 1.1 Task objectives are defined. 1.2 Pertinent standards are identified, considered and adhered to in line with project specifications. 2. Execute the task. 2.1 Computations are performed on angles and bearings. 2.2 Conversions between polar and rectangular modes are performed. 2.3 Computations are performed on the coordinates of a simple closed traverse. 2.4 Organisational documented and undocumented practices are adhered to. 2.5 OHS requirements are planned for and adhered to. 
2.6 Skills and knowledge are updated to accommodate changes in operating environment and equipment. 3. Document the task. 3.1 All required documentation is completed according to organisational guidelines.
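As flagged in the Moodle Outcomes item above, generating the outcomes CSV from the performance criteria is a few lines of scripting. The sketch below is only an illustration: the column headings are an assumption, since the exact import format depends on your Moodle version — check its outcome import template before using a file like this.

```python
import csv

# Performance criteria taken from the unit text above (abridged).
criteria = [
    ("1.1", "Task objectives are defined."),
    ("1.2", "Pertinent standards are identified, considered and adhered to in line with project specifications."),
    ("2.1", "Computations are performed on angles and bearings."),
    ("2.2", "Conversions between polar and rectangular modes are performed."),
    ("2.3", "Computations are performed on the coordinates of a simple closed traverse."),
    ("3.1", "All required documentation is completed according to organisational guidelines."),
]

# Column headings are illustrative assumptions, not the definitive Moodle schema.
with open("cppsis3010a_outcomes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["outcome_name", "outcome_shortname", "outcome_description"])
    for code, text in criteria:
        shortname = "cppsis3010a_" + code.replace(".", "_")
        writer.writerow([f"CPPSIS3010A {code}", shortname, text])
```

The same list of criteria can be reused for the assessment matrix spreadsheet by writing the codes down one column instead of into a CSV for import.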
0.949
FineWeb
|Scientific Name:||Oryx gazella| |Species Authority:||(Linnaeus, 1758)| Oryx gazella (Linnaeus, 1758) subspecies gazella |Taxonomic Notes:||This is a monotypic species including only those animals from southern Africa, with O. beisa from northeast Africa here regarded as a distinct species (see Grubb 2005).| |Red List Category & Criteria:||Least Concern ver 3.1| |Assessor(s):||IUCN SSC Antelope Specialist Group| |Reviewer(s):||Mallon, D.P. (Antelope Red List Authority) & Hoffmann, M. (Global Mammal Assessment)| Listed as Least Concern as the species is numerous and widespread, and populations are currently stable or even increasing. The Gemsbok’s future is secure as long as it continues to occur in large numbers on private land and in protected areas in Southern Africa. Its high value as a trophy animal should ensure further increases in its numbers on private land. |Range Description:||The Gemsbok formerly occurred widely in the semi-arid and arid bushland and grassland of the Kalahari and Karoo and adjoining regions of southern Africa, with a marginal intrusion into south-west Angola (East 1999; Knight in press). The extensive contraction of its distribution and decline of its numbers which accompanied the expansion of human activities in Southern Africa during the 19th and 20th centuries have been partly compensated in the last 10 - 20 years by the widespread reintroduction of Gemsbok to private land and protected areas. Today, they remain widely, albeit patchily, distributed in south-western southern Africa, although populations in Angola are now considered extirpated, even from the former stronghold in Iona N.P. (East 1999). They have also been introduced in small numbers to areas outside their natural range, such as private game ranches in Zimbabwe (East 1999).| Native:Botswana; Namibia; South Africa; Zimbabwe Possibly extinct:Angola (Angola) |Population:||Population estimates are available for almost all of this species’ range. Summation of these estimates gives a total population of 326,000, but actual numbers are probably higher because of an unknown level of undercounting bias in aerial surveys. Assuming an average correction factor for undercounting bias of 1.3 would give a total population estimate of 373,000. Overall population trend is increasing in private farms and conservancies and protected areas, and stable elsewhere (East 1999).| |Habitat and Ecology:||Adapted to waterless wastelands uninhabitable for most large mammals, Gemsboks inhabit semi-arid and arid bushland and grassland of the Kalahari and Karoo and adjoining regions of Southern Africa. They are equally at home on sandy and stony plains and alkaline flats. They range over high sand dunes and climb mountains to visit springs and salt licks. Although they are predominantly grazers, they broaden their diets in the dry season to include a greater proportion of browse, ephemerals and Acacia pods. They drink water regularly when available, but can get by on water-storing melons, roots, bulbs, and tubers, for which they dig assiduously. Adaptations to living in a desert environment are summarized by Knight (in press). |Use and Trade:||Proportion of specimens taken from the wild and from private ranches is not known.| Presently there are no major threats to the survival of the species. In the past its numbers and its distribution declined significantly due to the expansion of human activities in Southern Africa during the 19th and 20th centuries. 
Yet, in the last two decades there has been widespread reintroduction of Gemsbok to private land and protected areas. For example, in Namibia the largest numbers occur on private farmland, where the estimated population increased from 55,000 in 1972 to >164,000 in 1992 (East 1999). Despite this favourable trend, in some areas such as south-western Botswana its distribution is increasingly restricted to protected areas, to the point where there are now two discrete concentration areas within this region, in Central Kgalagadi-Khutse Game Reserves and within and to the north and east of Gemsbok National Park. Outside these protected areas, it occurs mainly in areas of the Kalahari without cattle (East 1999). Its ability to meet its survival needs within a relatively small area of semi-arid or arid savanna, even during severe droughts, enables it to occupy much smaller mean annual ranges than migratory species such as blue wildebeest and red hartebeest. The gemsbok’s independence of surface water and non-migratory behaviour have enabled it to largely escape the adverse effects of veterinary cordon fencing (East 1999). The largest numbers occur on private land (about 45% of the population), especially in Namibia, and in protected areas (35%) such as Namib-Naukluft and Etosha (Namibia), Central Kgalagadi-Khutse Game Reserves and Gemsbok National Park and surrounds (Botswana) and Kalahari Gemsbok National Park (South Africa). All of these populations are stable or increasing. The Gemsbok is of major economic value to the wildlife industry in southern Africa. It is a key trophy species on game farms and an important component of game-capture activities. In South Africa it is in great demand among farmers because of its trophy value. It has been introduced widely to areas outside its natural range, e.g., Gemsbok numbers have increased dramatically on bushveld farms in the north of the country, mainly due to introductions from Namibia. Kalahari Gemsbok National Park supports South Africa’s largest Gemsbok population. East, R. 1999. African Antelope Database 1999. IUCN, Gland, Switzerland and Cambridge, UK. Grubb, P. 2005. Artiodactyla. In: D. E. Wilson and D. M. Reeder (eds), Mammal Species of the World. A Taxonomic and Geographic Reference (3rd ed), pp. 637-722. Johns Hopkins University Press, Baltimore, USA. Knight, M. In press. Oryx gazella. In: J. S. Kingdon and M. Hoffmann (eds), The Mammals of Africa, Academic Press, Amsterdam, The Netherlands. |Citation:||IUCN SSC Antelope Specialist Group 2008. Oryx gazella. The IUCN Red List of Threatened Species. Version 2014.3. <www.iucnredlist.org>. Downloaded on 29 March 2015.|
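One detail in the population paragraph above is worth making explicit: multiplying the whole counted total of 326,000 by the 1.3 correction factor would give about 424,000, not the quoted 373,000, so the factor can only apply to part of the total. The short check below back-solves the implied size of that part; the assumption that the correction applies only to aerially surveyed populations is an interpretation for illustration, not something stated in the assessment.

```python
counted_total = 326_000      # sum of the estimates quoted above
corrected_total = 373_000    # quoted total after the 1.3 correction
factor = 1.3

# If only the aerially surveyed share x is scaled up by the factor:
#   corrected_total = counted_total + (factor - 1) * x
aerial_share = (corrected_total - counted_total) / (factor - 1)

print(f"{aerial_share:,.0f} animals counted from the air")        # ~157,000
print(f"{aerial_share / counted_total:.0%} of the counted total")  # ~48%
```

Read this way, roughly half of the counted population would come from aerial surveys carrying an assumed 30% undercount.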
0.6748
FineWeb
Ballbar testing explained The popularity of ballbar testing has been built on the basic simplicity of the test, quickness of use and the large amount of quantitative information generated. How does the test work? In theory, if you program a CNC machine to trace out a circular path and the positioning performance of the machine was perfect, then the actual circle would exactly match the programmed circle. In practice, many factors in the machine geometry, control system and wear can cause the radius of the test circle and its shape to deviate from the programmed circle. If you could accurately measure the actual circular path and compare it with the programmed path, you would have a measure of the machine's accuracy. This is the basis of all telescopic ballbar testing and of the Renishaw QC20-W ballbar system. See a complete QC20-W ballbar test animation here; including the new partial arc feature: The QC20-W ballbar test sequence Renishaw ballbar testing consists of 3 simple stages, Set-up, Data capture and Analysis. - Connecting to your QC20-W is simple, thanks to its Bluetooth connectivity. Test set-up is quick and easy with Windows® based software guiding the operator through each step. There is even a 'part programme generator' to help you set up the corresponding programme on your machine tool. - In many cases you will be using existing test templates. The powerful file administration features let you search and access these quickly. - The centre pivot is positioned on the machine table and (using a setting ball provided in the QC20-W kit) the spindle is moved to a reference point and the test 'zero' coordinates set. - The spindle is moved to the test start position and the QC20-W is mounted between two kinematic magnetic joints. - A simple G02 and G03 command program is all that's required to start the test. Data capture: 360° testing - The 'classic' test calls for the machine tool to perform two consecutive circles; one in a clockwise direction, the other counter-clockwise. - In practice there is an extra arc added before and after the test circle to allow for the machine accelerating and then slowing down. - With the use of extension bars the test radius can be selected to reflect the size of the machine and the sensitivity to particular issues (e.g. large radius circles are better at highlighting machine geometry errors, smaller circles are more sensitive to servo mismatch or lag). - As you can see the basic test is very quick once set up! - Data capture is shown live on screen, so any errors or problems can be detected as the test progresses and the test stopped without wasting additional time (important if you are carrying out a large radius test with a slow feed rate) Data capture: 220° 'partial arc' testing Before the launch of QC20-W, testing in planes perpendicular to the standard X-Y test plane meant using special test mounts and repositioning of the centre mount. Now you can carry out tests covering 3 orthogonal planes without moving the centre pivot. The secret to this is the ability of the QC20-W system to carry out a restricted arc (220°) in two of the planes. This produces a modified test analysis for that arc but still produces an overall circularity value for that test. With all three tests carried out around a single point it allows the use of the (new for Ballbar 20) volumetric diagnostics report, giving you more information and quicker than with previous systems. - The user has a choice of several report formats according to International standards (e.g. 
ISO, ASME) and the comprehensive Renishaw diagnostics (including volumetric analysis), with a number of different screen views and links to the help manual. - Many reports can be customised and the final result used for written reports using the inbuilt 'cut and paste' facility.
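At its core, the analysis reduces the captured data to deviations of the measured ballbar length from the programmed radius; the circularity value is then the band that contains all of those deviations. The sketch below shows that reduction on synthetic data. It is only an illustration of the principle, not Renishaw's Ballbar 20 algorithm, and the error terms used to generate the data are invented for the example.

```python
import math
import random

programmed_radius = 150.0  # mm, e.g. a 150 mm ballbar test

# Synthetic capture: one radius sample per degree, with an invented ovality
# term (standing in for a geometry error) plus random noise.
samples = []
for deg in range(360):
    theta = math.radians(deg)
    r = programmed_radius + 0.004 * math.cos(2 * theta) + random.gauss(0, 0.0005)
    samples.append(r)

deviations = [r - programmed_radius for r in samples]
circularity = max(deviations) - min(deviations)        # peak-to-peak band
mean_radius_error = sum(deviations) / len(deviations)  # radius offset

print(f"circularity:       {circularity * 1000:.1f} µm")
print(f"mean radius error: {mean_radius_error * 1000:.1f} µm")
```

A real test would repeat this for the clockwise and counter-clockwise runs and for each tested plane, which is exactly the data the volumetric report combines.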
0.5259
FineWeb
On a personal level, the formulation of this question is troubling in that it conveys the notion that serving the Jewish People is construed as a byproduct of my service as a professor of Jewish Studies. In point of fact, the reverse is true. My decision to pursue an academic career as a sociologist of American Jewry—taken as an 18-year-old Columbia College junior—took shape as a direct consequence of my strongly held intention to serve the Jewish People. My entire career (except for a four-year interlude as an assistant professor when I wrote articles on ethnicity in pursuit of tenure) has been entirely devoted to exploring issues of policy relevance to Jewish communal life. Thus, my research has been animated by, and enriched by, the most urgent questions being asked by Jewish communal leaders. These generally revolve around the central issue of the quality of Jewish life and how it can be improved. Accordingly, I've addressed my writings, directly or obliquely, to the most energetic areas of contemporary discourse in Jewish communal life. By way of illustration, I've sought to: 1. Demonstrate that which should be intuitively known (e.g., various forms of intensive Jewish education produce clear positive consequences). 2. Add nuance to our collective murky understanding of emerging trends (e.g., The Sovereign Jewish Self and The Jew Within). 3. Spark debate about vital issues (distancing of younger American Jews from Israel, largely due to intermarriage). 4. Develop innovative policy responses and rationales (e.g., on intermarriage, presenting myself as an "empirical hawk" and a "policy dove"). 5. Advance thinking on practice and policy for leaders (as in Sacred Strategies for congregational leaders). 6. Promote particular ways of thinking about Jewish engagement (e.g., as a culture and nationality rather than a Western religious identity). I see my "students" as located outside the classroom, with communal professionals, lay leaders, and philanthropists uppermost in my mind, along with colleagues and other social scientists. And, I've sought collaborative relationships, having co-authored works with at least sixty different colleagues over the years. In short, contributing to Jewish life is intrinsic to my academic mission.
0.6354
FineWeb
Cold War nuclear radiation testing and the inhospitable living conditions on Mars were the subject of this month’s Packed Lunch at Wellcome Collection. Dr Lewis Dartnell, an astrobiologist at the Centre for Planetary Sciences at UCL, was in to discuss his research into whether some forms of life might be able to survive on Mars. From what we know of it, Mars isn’t the friendliest of places. It’s bitterly cold; the air temperature is rarely above freezing. And it’s very dry. Although we can see that the planet has polar ice caps, there’s no sign of any liquid water – which is essential for life – anywhere. It all seems to be either frozen or evaporating into the atmosphere. To make matters worse, Mars has no magnetic shield, no ozone layer, and only a very thin atmosphere, so its surface is exposed to intense levels of radiation, including UV and ionising cosmic rays, which damage and destroy DNA. As Dartnell explained, we’re protected from cosmic radiation by our planet’s “lovely deep atmosphere,” which absorbs high-energy particles in cosmic rays, and a magnetic shield, which deflects them. The high-energy radiation that reaches the surface of Mars would kill us. But what about other forms of life, such as microbes? Might they be able to survive the harsh cold and fierce cosmic radiation on Mars? To find out, Dartnell has been looking at bugs he calls “ultra hardy survivors” which live in the Dry Valleys of Antarctica, the place on our planet that is most like Mars. With only 1mm of snowfall a year, and cosmic and solar radiation streaming through the hole in the ozone above Antarctica, no plants or animals survive in those desolate valleys. But in the cracks of the rocks, protected from the fierce UV light and cold, dry winds, lives a tiny bug called Deinococcus radiodurans. Its name means ‘enduring radiation’, and indeed Deinococcus is the most radiation-resistant organism we’ve found on this planet. It even grew on the walls of post-explosion Chernobyl. To find out precisely how much radiation these bacteria can survive (and therefore, whether similar microbes might be able to survive on Mars), Dartnell cultured them in his lab. Deinococci are rich in pigments that protect them from UV radiation, and the cultures sprawl in lurid shades of pink across the Petri dishes, easily visible to the naked eye. Dartnell then took his cultured specimens to an old Ministry of Defence research facility at the University of Cranfield where, during the Cold War, researchers tested whether tanks could protect against radiation from a nuclear bomb. There, he bombarded them with high doses of gamma radiation from a cobalt-60 source. Gamma rays are extremely energetic forms of light that can break bonds in our DNA and shatter our genomes. A dose of a few grays of ionising radiation would kill a person, yet Dartnell found his Deinococcus bugs could all survive doses of 5,000 grays “without blinking an eye”. How do they do it? Dartnell says the bacteria have a number of quirks in their biochemistry that protect them. While they can’t directly protect themselves from the gamma rays, which are powerful enough to go through rock, they are very good at repairing the damage those rays do to their DNA. A suite of repair enzymes pieces the damaged genome together again. Moreover, each bacterial cell has up to 12 or more copies of its genome, whereas humans only have one copy in each of our cells. 
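The dose comparison above — a few grays being lethal to a person, while Deinococcus shrugs off 5,000 grays — can be pictured with the simple exponential survival model commonly used in radiobiology. The D10 values below (the dose that cuts a population down to a tenth) are illustrative assumptions for the sketch, not figures from Dartnell's experiments.

```python
def surviving_fraction(dose_gy: float, d10_gy: float) -> float:
    """Single-hit model: every D10 of dose reduces the surviving fraction tenfold."""
    return 10 ** (-dose_gy / d10_gy)

# Illustrative D10 values (assumed for the sketch, not measured data).
d10_values = {
    "typical gut bacterium": 200.0,          # grays
    "radiation-resistant bacterium": 10_000.0,
}

for organism, d10 in d10_values.items():
    frac = surviving_fraction(5_000, d10)
    print(f"{organism}: fraction surviving 5,000 Gy ≈ {frac:.3g}")
```

With those assumed numbers the ordinary bacterium is reduced to effectively nothing (about 10⁻²⁵ of the culture), while the resistant one keeps roughly a third of its cells alive — which is the gap the repair enzymes and spare genome copies have to explain.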
The deinococcus’s ability to piece itself together again, in the right order, is of great interest to genetic engineers, who are looking to apply its molecular biology to other cells to develop new medicines or forms of bio-remediation. It might be possible to engineer organisms that can survive radiation and clean up oil slicks and toxic waste, for example. As well as testing his bugs’ impressive survival rates, for his PhD Dr Dartnell worked with physicists, mathematicians and computer programmers at UCL to develop a computer simulation of the radiation on the surface of Mars. These kinds of multi-disciplinary collaborations are, he says, at the forefront of modern science: mathematicians condense biological complexity into equations, which a computer programme then codes for analysis. That work also involved some hard-core, very high-energy particle physics. Dr Dartnell and his team used the same codes that researchers at the Large Hadron Collider at CERN (the European Organization for Nuclear Research near Geneva) use to simulate particle energies. The collisions that occur in a huge underground circular tunnel at CERN also happen right over our heads in the Earth’s atmosphere, and on and around a metre below the ground on Mars. It’s there, in the crevices of Martian rocks, that Dartnell is hoping to find some hardy little deinococcus -like organisms hiding from the radiation underground. And he may soon find out whether that is the case when, in 2018, the European Space Agency and NASA launch their robotic ExoMars mission. As we know from watching CSI, organic molecules (blood, sperm, and other proteins) glow in the dark under UV light. So the ExoMars robot will wait for night on Mars, and then scan the landscape and rocks with a UV laser, looking for protective nooks and cavities, where microbes might be hiding. If those crevices fluoresce in the dark, it could be a sign of life. The robot will then fish out an armful of soil and deliver it into the onboard instruments for analysis. In the meantime, researchers are looking at the space material that’s already delivered to us. We know that the building blocks of life can begin in outer space. Meteorites, what Dartnell calls the “builder’s rubble left over from the making of the planets”, made of silica rocks, contain large amounts of carbon. And carbon is good at sticking together to make organic compounds. Add sources of energy from the sun or from radioactive decay in the asteroids, and sometimes water to the mix, and the chemistry results in organic molecules. The Murchison meteorite, which fell near Murchison in Australia in 1969 apparently, reeked of petro-chemicals and organic molecules, when it was first picked up. And researchers found 70 different amino acids (the building blocks of protein) inside the meteorite – yet there are only 20 amino acids on Earth – definitive proof that the building blocks of extra-terrestrial life flicker into being far out in the depths of space. Penny Bailey is a writer at the Wellcome Trust.
0.8113
FineWeb
Bridges and Viaducts of the Los Angeles River from Los Feliz Blvd. to Washington Blvd. The bridges over the L.A. River through downtown Los Angeles were built by the City Public Works Department between 1910 and 1932. This period saw exponential population growth. The city grew 5 times larger during this time and saw many changes to its infrastructure. One of the largest projects, The L.A. Aqueduct, completed in 1913, brought water from the Owens Valley to the city, quenching a growing thirst for water. Although on a smaller scale, the bridges reliably connected the two halves of the city, allowing and furthering the aims of civic leaders to grow the city into the largest in the west. My interest in the L.A. River came from my interest in the history of the city and the large role the river has, and continues to have, in shaping the city. The city was founded on the banks of the river because it was the only large reliable source of water in the region. It is the reason the city is where it is. Curious about the way the river looked, led me to investigate its history and relationship to the city. I found that the way the river looked was a result of its history. In this way, the bridges over the river have also been shaped by the history of the city to which they belong.
0.5111
FineWeb
By Gary Lachman The occult was a very important influence on the Renaissance, and it obsessed the finest thinkers of the day. Yet with the Age of Reason, occultism was sidelined; only charlatans found any use for it. Occult ideas did not disappear, however, but rather went underground. They developed into a fruitful source of inspiration for many important artists. Works of brilliance, sometimes even of genius, were produced under their influence. In A Dark Muse, Lachman discusses the Enlightenment obsession with occult politics, the Romantic explosion, the futuristic occultism of the fin de siècle, and the deep occult roots of the modernist movement. Some of the writers and thinkers featured in this hidden history of western thought and sensibility are Emanuel Swedenborg, Charles Baudelaire, J. K. Huysmans, August Strindberg, William Blake, Goethe, Madame Blavatsky, H. G. Wells, Edgar Allan Poe, and Malcolm Lowry. Read or Download A Dark Muse: A History of the Occult PDF Best occult books Spiritual attainment has often been described as a transformation whereby a human's leaden, dull nature is returned to its golden state. This wonderfully insightful volume introduces some of the metaphors useful for cultivating the attitudes required for the soul's development: trust, confidence, hope, and detachment. Fresh ideas for the modern mage lie at the heart of this thought-provoking guide to magic theory. Approaching magical practice from an information paradigm, Patrick Dunn presents a unique and contemporary perspective on an ancient practice. Dunn teaches all about symbol systems, magical artifacts, sigils, spirits, elementals, languages, and magical journeys, and explains their importance in magical practice. By the end of the nineteenth century, Victorians were seeking rational explanations for the world in which they lived. The radical ideas of Charles Darwin had shaken traditional religious beliefs. Sigmund Freud was developing his innovative models of the conscious and unconscious mind. And the anthropologist James George Frazer was subjecting magic, myth, and ritual to systematic inquiry. This book is compiled from talks given mainly in 2001 by Ajahn Sumedho; they convey an intuitive understanding of the Buddha's teaching which has arisen from over 35 years of practice as a Buddhist monk. This approach begins with accepting ourselves as we are, not as some ideal of whom we think we ought to be. - The Hammer and the Flute: Women, Power, and Spirit Possession - The Theban Oracle: Discover the Magic of the Ancient Alphabet That Changes Lives - The Doctrine and Literature of the Kabalah - Œuvres complètes, tome 11: Articles I 1944-1949 - The Key of the Abyss Extra resources for A Dark Muse: A History of the Occult
Christian died some time after 1696, and Bedford, apparently childless, remarried Martha Jones (d. 9 Bedford’s most prominent activity prior to 1703 was in Bristol’s Society for the Reformation of Manners, established in 1699. Bedford believed passionately that such reformation was required if England was to avoid God’s judgment and fulfil the opportunity created by William’s Godgiven deliverance of the nation from popery and slavery in 1688. He sought to direct the Society towards supporting Anglican preaching (a course of sermons at Read’s St Nicholas) and education of poor children, using materials from the SPCK, of which he was the local correspondent. See also his sermons: Unity, Love and Peace Recommended (Bristol, 1714); The Doctrine of Obedience and Non-Resistance (1717); Three Sermons Preach’d on Three Particular Subjects (1717); A Sermon at St Mary Redcliffe 21 October 1717 (Bristol, 1717); The Obligations Which Lie upon Both Magistrates and Others (1718); A Sermon Preached to the Societies for Reformation of Manners (1734). See also his Temple Musick (Bristol, 1706) and Excellency of Divine Music (1733). Serious Reflections on the Abuse and Effects of the Stage (Bristol, 1705); A Second Advertisement Concerning the Profaneness of the Play-House (Bristol, 1705); The Evil and Danger of Stage-Plays (Bristol, 1706); Serious Remonstrance (reissued 1730, when he had renewed the attack with A Sermon Preached in the Parish Church of St Botolph’s Aldgate (1729)).
0.5096
FineWeb
- Where is the Rio Negro? - What is the largest river in the world? - What’s the deepest river in the world? - Where does the Amazon meet the sea? - Which river is known as the Black River in India? - How long is the Amazon River? - What is unique about the Rio Negro? - Is it safe to swim in the Amazon River? - Can two rivers cross? - Where do two rivers meet but don’t mix? - Which Indian river is known as the holy river? - How deep is the Rio Negro? - Where does the Rio Negro meet the Amazon? - How long is the Rio Negro? - Why is the Amazon River brown? Where is the Rio Negro? Argentina. Río Negro, provincia (province), south-central Argentina. It lies within the region of Patagonia and extends westward from the Atlantic Ocean to the Andes Mountains and the border with Neuquén province. What is the largest river in the world? The Amazon River. The Amazon River is 3,980 miles (6,400 kilometers) long, according to the U.S. National Park Service. It is, however, the world’s largest river by volume and contains 20 percent of the Earth’s fresh water, according to the National Science Foundation. What’s the deepest river in the world? The Congo River. From its tributaries to where it meets the Atlantic Ocean, the massive river includes rapids, wetlands, floodplains, lakes and swamps. In addition, the Congo River is the world’s deepest recorded river at 720 feet (220 meters) deep in parts — too deep for light to penetrate, The New York Times reported. Where does the Amazon meet the sea? The river system originates in the Andes Mountains of Peru and travels through Ecuador, Colombia, Venezuela, Bolivia, and Brazil before emptying into the Atlantic Ocean. Which river is known as the Black River in India? The Sharda River (Mahakali River). Countries: India and Nepal; Region: Uttarakhand and Uttar Pradesh in India, Sudurpashchim Pradesh in Nepal. How long is the Amazon River? 6,575 km (Amazon River length). What is unique about the Rio Negro? It is the largest blackwater river in the world. It is called Negro (Spanish and Portuguese for “black”) because its waters are colored by particles of humus, which make them look much like tea. The Rio Negro is the largest tributary of the Amazon on the left-hand side (as it goes towards the sea). Is it safe to swim in the Amazon River? There are guided tours on the Amazon to see things like the Amazon River Dolphin, some of which apparently will let people swim with them. Based on this, it’s probably safe to swim in those areas, but like any river with wildlife there are no guarantees. If you are worried about wildlife, it is not very dangerous. Can two rivers cross? A confluence can occur in several configurations: at the point where a tributary joins a larger river (main stem); or where two streams meet to become the source of a river of a new name (such as the confluence of the Monongahela and Allegheny rivers at Pittsburgh, forming the Ohio); or where two separated channels of … Where do two rivers meet but don’t mix? Despite its name, the Rio Negro is not technically black, but does harbor a very dark color. When it meets the Rio Solimoes, which is the name given to the upper stretches of the Amazon River in Brazil, the two rivers meet side by side without mixing. Which Indian river is known as the holy river? The Ganges River. Ganges River, Hindi Ganga, great river of the plains of the northern Indian subcontinent. 
Although officially as well as popularly called the Ganga in Hindi and in other Indian languages, internationally it is known by its conventional name, the Ganges. From time immemorial it has been the holy river of Hinduism. How deep is the Rio Negro? About 9 metres. Despite being very dark in appearance, the Rio Negro actually contains very little sediment, meaning that it is surprisingly clear once you get up close and personal with it. In fact, on a good day, you can see down into the water to a depth of about 9 metres. Where does the Rio Negro meet the Amazon? In Brazil. The Meeting of Waters is the confluence of two or more rivers. The expression generally refers to the larger of these events: the meeting of the dark Rio Negro with the sandy-colored upper Amazon River, or Solimões, as it is known in Brazil. How long is the Rio Negro? 2,250 km (Rio Negro length). Why is the Amazon River brown? The Amazon River carries a lot of sediment (particles of mud and sand), which gives the water a muddy-brown color. Its largest tributary (branch), the Rio Negro, or black river, is filled with chemicals washed out of soil and plants, making the water very dark.
0.5227
FineWeb
Middle School Educational and Parenting Articles Browse middle school educational and parenting articles. Browse all our articles by topic and grade, or use the search. - Why Do Students Struggle With Mathematics - Kindergarten Sight Words List - The Best Kids Magazines for the Elementary School Set - Introducing Yourself to the Classroom Guide for the Substitute Teacher - Definitions of Social Studies - Principles of the Montessori Method - Emotional Development - Child-Centered Education - Curriculum Definition - 10 Tips for Math Success
0.9964
FineWeb
The Importance of Emergency Electrical Services Understanding Emergency Electrical Services Electricity plays a crucial role in our daily lives. We rely on it for lighting, heating, cooling, and operating our appliances and electronics. However, electrical emergencies can happen unexpectedly, disrupting our routines and putting us at risk. That’s where emergency electrical services come in. These services provide immediate assistance and repair work when electrical issues occur outside of normal business hours. Understanding the importance of emergency electrical services is essential for ensuring the safety and well-being of individuals and maintaining the functionality of our electrical systems. The Dangers of Electrical Emergencies Electrical emergencies can pose significant hazards to individuals and property. When electrical issues arise, there is a risk of electrical shock, electrocution, and even fire. Faulty wiring, overloaded circuits, and power outages can all lead to dangerous situations. Addressing these emergencies promptly is crucial to minimize the risks and prevent further damage or harm. Emergency electrical services are equipped with the expertise and tools to handle these situations safely and effectively. Immediate Response and Rapid Repairs One of the key advantages of emergency electrical services is their ability to provide immediate response and rapid repairs. Electrical emergencies can disrupt daily activities, jeopardize the security of a building, or even halt business operations entirely. With emergency services, trained professionals are available 24/7 to quickly assess the situation and carry out necessary repairs or replacements. This ensures that essential services are restored in a timely manner, minimizing inconvenience, and preventing further complications. Expertise and Experience Emergency electrical service providers are highly skilled professionals with extensive expertise in handling a wide range of electrical issues. They have the knowledge and experience to troubleshoot problems efficiently, identify the root cause of the issue, and implement appropriate solutions. By relying on their expertise, you can have peace of mind knowing that your electrical emergencies are being addressed with the utmost care and professionalism. Prevention and Maintenance In addition to providing immediate assistance during emergencies, emergency electrical services also play a crucial role in prevention and maintenance. Regular maintenance can help identify potential electrical issues before they become emergencies. By conducting inspections, testing electrical systems, and addressing any underlying problems, emergency service providers can help ensure the safety and longevity of your electrical infrastructure. Moreover, emergency electrical services may offer preventive measures such as surge protection and backup power solutions. These proactive measures can safeguard your property and equipment against electrical surges, ensuring uninterrupted power supply even during severe weather conditions or grid failures. Learn more about the topic with this suggested external resource. Star Plus Group https://starplusgroup.com.au, uncover additional details and fresh viewpoints on the topic covered in this piece. Emergency electrical services are essential for maintaining the safety and functionality of our electrical systems. 
They provide immediate response and rapid repairs during electrical emergencies, ensuring the well-being of individuals and minimizing the risks of fire and electrical hazards. By relying on their expertise, we can address issues promptly and effectively, preventing further damage or harm. Additionally, emergency electrical services play a vital role in prevention and maintenance, helping to identify potential problems and safeguarding our electrical infrastructure. By understanding the importance of emergency electrical services, we can be prepared for unexpected situations and ensure the smooth operation of our electrical systems. Find additional information in the related posts we’ve compiled for you:
0.9876
FineWeb
Botanium 90ml Nutrient • Specifically designed for Botanium Systems • Contains primary and secondary nutrients • Encourages rapid plant growth • Easy to use Botanium Nutrient is a liquid fertilizer containing both primary and secondary nutrients with trace elements. This specifically designed nutrient solution is suitable for growing all kinds of plants in Botanium systems. Each bottle will last 12-18 months per Botanium system, though this can differ depending on what plant you are growing; for example, Chilli plants will drink more than Basil. Pipette included.
0.8905
FineWeb
The birth of a child is a momentous occasion in a Dong village and requires strict adherence to many conventions. The first is the “stepping-over-the-threshold” convention, which is the belief that the first person to enter the house where the child was born will be the greatest influence on its personality and future success. After this person is established, neighbours are invited to the house to bring gifts. The birth is then announced to the mother’s family and, on the third day, female relatives will visit with more gifts. After the visitations from friends and relatives, a ceremony called “building the bridge” is practised, where three wooden planks are lined up side by side to symbolise a bridge and express goodwill to people passing by the house. The child’s hands are then wrapped in cloth, which the Dong believe will influence the child not to steal things later in life. The child’s first haircut and first taste of fermented rice happens when they are about one month old, and it is considered unlucky if these events happen prior to the one month mark. At six months old, the child will have their first taste of meat dipped in wine, which is considered a major milestone in the child’s life. Join a travel with us to discover the Culture of Dong Ethnic Minority: Explore the culture of Ethnic minorities in Southeast Guizhou
0.7156
FineWeb
How are human societies changing the global environment? Is sustainable development really possible? Can environmental risks be avoided? Is our experience of nature changing? This book shows how questions about the environment cannot be properly answered without taking a sociological approach. It provides a comprehensive guide to the ways in which sociologists have responded to the challenge of environmental issues as diverse as global warming, ozone depletion, biodiversity loss and marine pollution. It also covers sociological ideas such as risk, interpretations of nature, environmental realism, ecological modernization and globalization. Environmentalism and green politics are also introduced. Unlike many other texts in the field, the book takes a long-term view, locating environmental dilemmas within the context of social development and globalization. The Environment: A Sociological Introduction is unique in presenting environmental issues at an introductory level that assumes no specialist knowledge on the part of readers. The book is written in a remarkably clear and accessible style, and uses a rich range of empirical examples from across the globe to illustrate key debates. A carefully assembled glossary and annotated further reading suggestions also help to bring ideas to life. The book will be a valuable resource for students in a range of disciplines, including sociology, geography and the environmental sciences, but also for anyone who wants to get to grips with contemporary environmental debates.
0.851
FineWeb
Materials play a major role in the performance and lifetime of seals. Generally, hydraulic seals are exposed to a variety of application and working conditions, such as a wide temperature range, contact with various hydraulic fluids and the outside environment as well as high pressures and contact forces. The appropriate seal materials have to be selected to achieve a reasonable service life and service intervals. A wide variety of seal materials from four major polymeric material groups is available: - thermoplastic elastomers, such as polyurethane (TPU) and thermoplastic polyester elastomers (TPC) - rubbers, such as nitrile rubber (NBR) and hydrogenated nitrile rubber (HNBR), fluorocarbon rubbers (FKM, FPM) - polytetrafluoroethylene (PTFE) and its compounds - rigid thermoplastics and thermosets and their composites Many different material properties should be considered to support and maintain the sealing function over the expected seal service life, for example: - good elasticity over a wide temperature range, especially at low temperatures - excellent compression set and stress relaxation behaviour to keep the sealing force for the requested operating period - adequate hardness and flexibility to avoid leakage and allow easy installation - superior gap extrusion resistance to cover the increased pressures of fluid power equipment - adequate working temperature range - good chemical compatibility to cover a wide assortment of hydraulic fluids such as mineral and synthetic oils, biodegradable and water-based fluids or fire-resistant fluids - excellent tribological properties, i.e. low friction values and high wear resistance to achieve a high efficiency and avoid early failures, especially when sealing against rough counter-surfaces In addition to these considerations, the structure and morphology of polymeric materials make selection and specification of seal materials much more complicated than for the standard materials used in mechanical engineering (e.g. aluminium or steel). Mechanical properties of polymeric materials are strongly influenced by time, temperature, load and rate of motion. Highly complex intermolecular processes affect the stress relaxation and retardation phenomena. Furthermore, the tribology conditions of the system (e.g. friction and wear) have a strong influence on the seal material behaviour and vice versa. Therefore, state-of-the-art sealing systems can only be developed by close cooperation between material experts and product designers, supported by advanced design tools like non-linear FEA and extensive seal testing capabilities. SKF has a global material development organization that closely cooperates with the product development and testing functions. SKF is uniquely suited to develop, simulate, test and manufacture tailor-made materials for specific customer needs. The tables in the following sections list the most common materials used by SKF for serial production of hydraulic seals. 
0.9395
FineWeb