Monday, August 31, 2009

What is the relationship between cancer and substance abuse?


Cancer Risks

Certain substances of abuse, including tobacco, marijuana, alcohol, methamphetamine, cocaine, and heroin, present special risks for the development of cancer.







Tobacco

Tobacco is a well-known carcinogen, and its use is the leading cause of preventable illness and death in the United States. In addition to causing lung, throat, and mouth cancer, it has been associated with cancers of the nasal cavity, esophagus, stomach, pancreas, breast, kidney, bladder, and cervix. Nicotine, which is contained in tobacco leaves, is highly addictive; however, it is not known to be carcinogenic. Nicotine is a vasoconstrictor (blood vessel constrictor), so it increases the risk of cardiovascular disease.


The National Cancer Institute has revealed the following statistics related to cancer and tobacco use in the United States:


• Cigarette smoking causes an estimated 443,000 deaths each year, including approximately 49,400 deaths from exposure to secondhand smoke.


• Lung cancer is the leading cause of cancer-related death among both men and women; 90 percent of lung cancer deaths among men and approximately 80 percent of lung cancer deaths among women are caused by smoking.


• Persons who smoke are up to six times more likely to have a heart attack than are nonsmokers, and the risk increases with the number of cigarettes smoked. Smoking also causes most cases of chronic lung disease.


• In 2009, about 21 percent of adults were cigarette smokers.


• Nearly 20 percent of high school students smoke cigarettes.


Smoking (or chewing) tobacco markedly increases the risk of cancers of the oral cavity (mouth, lips, and tongue). One of the effects of tobacco is that it weakens the immune system, which not only increases the risk of cancer but also increases the risk of infection. Aside from its relationship to cancers of the oral cavity, chewing tobacco also increases the risk of many other cancers and health problems.





Marijuana

Smoked marijuana and smoked tobacco are chemically similar; thus, as with cigarettes, the greatest health hazard of marijuana comes from smoking the substance. The psychoactive component of marijuana leaves, delta-9-tetrahydrocannabinol (THC), is a relatively safe drug.


Smoked marijuana, however, is a health risk. Thorough scientific analyses have identified at least six thousand chemicals in marijuana smoke that are also present in tobacco smoke. The chief difference between the two plants is that marijuana contains THC and tobacco contains nicotine. Moreover, one of the most potent carcinogens in tobacco smoke, benzo[a]pyrene, is present in larger quantities in marijuana smoke.


Another factor increasing the carcinogenic risk of marijuana lies in the way it is inhaled: marijuana smokers frequently inhale and hold the smoke in their lungs, which increases the amount of tar deposited in the respiratory tract by a factor of about four. Approximately 20 percent of regular marijuana smokers (those who smoke three to four joints a day) have problems with chronic bronchitis, coughing, and excess mucus.


An alternative to smoking marijuana is ingesting it in pastries, drinks, and lollipops. Marijuana leaves also can be baked into brownies and other desserts. Ingested marijuana has no known carcinogenic effect; however, it still has a psychoactive effect, which can result in myriad problems, including social problems, traffic accidents, and dependence.


A problem with ingesting rather than smoking marijuana is that the digestive process markedly slows the onset of psychoactive effects. This delay makes ingesting less attractive to users of the substance; furthermore, because the effects are slow to appear, users may consume a large amount before feeling anything, ultimately resulting in an unusually high level of THC in the body.





Alcohol

The combination of alcohol abuse and tobacco use markedly increases the risk of cancers of the oral cavity. Approximately 50 percent of cancers of the mouth, pharynx (throat), and larynx (voice box) are associated with heavy drinking. Even in nonsmokers, a strong association exists between alcohol abuse and cancers of the upper digestive tract, including the esophagus, the mouth, the pharynx, and the larynx.



Alcohol abuse, either alcoholism or binge drinking, also has been linked to pancreatic cancer, particularly in men. The risk has been reported to be up to six times greater than in men who do not abuse alcohol. A possible association may exist between alcohol abuse and other cancers, such as liver, breast, and colorectal cancers. It has been estimated that 2 to 4 percent of all cancer cases are caused either directly or indirectly by alcohol abuse. Alcohol abuse, like cigarette smoking, suppresses the immune system, which in turn increases the risk of developing cancer. Persons who abuse alcohol often do not seek treatment until the cancer is well advanced.





Methamphetamine

A number of different chemical processes can be used to make methamphetamine; most of them involve volatile organic compounds, gases emitted during production, some of which are carcinogenic. Methamphetamine production can also generate other toxic substances, some of them carcinogenic. Specifically, pancreatic cancer has been associated with methamphetamine use.





Cocaine and Heroin


Cocaine itself is not associated with a cancer risk; however, substances added to cocaine are carcinogenic. One example is phenacetin, which not only can cause cancer but also can induce kidney damage. Heroin and other opiates have no known carcinogenic properties. However, like cocaine, heroin may contain additives that are carcinogenic.




Substances Used and Abused by Persons with Cancer

Cancer, particularly in advanced stages, can cause extreme pain; thus, persons with cancer are often prescribed opiates to lessen their pain. Marijuana too is used by persons with cancer for pain relief and to reduce the side effects of chemotherapy.





Opiates

An opiate is a drug derived from opium, which is the sap of the opium poppy (Papaver somniferum). Opium has been used by humans since ancient times. Many opiates are on the market, including morphine, meperidine hydrochloride (Demerol), hydromorphone hydrochloride (Dilaudid), hydrocodone (Vicodin), and oxycodone (OxyContin). Heroin is an excellent analgesic, but it is not prescribed for pain relief because it is highly addictive compared with other opiates. Another property of opiates is tolerance, which results in the need for increasingly higher doses to achieve the same effect.


Tolerance and addiction are not a major concern for a terminally ill person with cancer, but they are a concern for persons whose cancer is in remission or cured. Some of these persons have “exchanged” their cancer for a drug addiction. After completing a drug rehabilitation program, these persons are at high risk of resuming the use of opiates. Researchers believe that drug relapse is caused by the stress associated with cancer, combined with the ready availability of psychoactive drugs, both prescription and illegal.





Marijuana

Cancer and its treatment with chemotherapy are associated with side effects such as nausea, vomiting, anorexia (loss of appetite), and cachexia (muscle wasting). Marijuana is effective in reducing these symptoms; therefore, it has been recommended for persons with cancer.


The opinion of scientists at the National Cancer Institute, however, is that pharmaceuticals are available that are superior to marijuana in their effects. These pharmaceuticals include serotonin antagonists such as ondansetron (Zofran) and granisetron (Kytril), used alone or combined with dexamethasone (a steroid hormone); metoclopramide (Reglan) combined with diphenhydramine and dexamethasone; methylprednisolone (a steroid hormone) combined with droperidol (Inapsine); and prochlorperazine (Compazine).



Medical marijuana legislation is a controversial topic in the United States. Despite the controversy, medical marijuana outlets (dispensaries) are increasing in number throughout the country. Their prevalence depends on state and federal regulations. Many states have adopted marijuana statutes that are much more liberal than federal statutes.


Although there are legitimate medical uses for marijuana for persons with cancer and other conditions (such as glaucoma), many of the medical marijuana outlets supply the product to almost anyone for any reason.




Bibliography


Barclay, Joshua S., Justine E. Owens, and Leslie J. Blackhall. "Screening for Substance Abuse Risk in Cancer Patients Using the Opioid Risk Tool and Urine Drug Screen." Supportive Care in Cancer 22.7 (2014): 1883–8. Print.



Earleywine, Mitch. Understanding Marijuana: A New Look at the Scientific Evidence. New York: Oxford UP, 2005. Print.



Fisher, Gary, and Thomas Harrison. Substance Abuse Information for School Counselors, Social Workers, Therapists, and Counselors. 4th ed. Boston: Allyn, 2008. Print.



Granata, Roberta, Paolo Bossi, Rossella Bertulli, and Luigi Saita. "Rapid-Onset Opioids for the Treatment of Breakthrough Cancer Pain: Two Cases of Drug Abuse." Pain Medicine 15.5 (2014): 758–61. Print.



Miller, William. Rethinking Substance Abuse: What the Science Shows, and What We Should Do about It. New York: Guilford, 2010. Print.



O'Neill, Siobhan, et al. "Associations between DSM-IV Mental Disorders and Subsequent Self-Reported Diagnosis of Cancer." Journal of Psychosomatic Research 76.3 (2014): 207–12. Print.

What is wheezing?


Causes and Symptoms

Wheezing is a whistling or grating noise created when a person’s breathing passages are narrowed or blocked. It can be accompanied by tightness in the chest or shortness of breath, as well as anxiety due to difficulty breathing.


Wheezing is a symptom of several disorders. The most common causes of chronic wheezing are asthma and emphysema. Temporary wheezing due to obstruction by mucus can be caused by bronchitis, pneumonia, viral infections, and allergies as well as smoking and inhalation of fumes or foreign matter. The exact timing of the wheeze can give clues to its cause. Bronchitis causes a noise at the very end of a complete exhalation. Wheezing at the start of exhalation usually indicates asthma or emphysema. Wheezing only when inhaling is a sign of asthma.


Conditions such as gastroesophageal reflux disease, vocal cord dysfunction, and genetic disorders that affect the lungs, such as cystic fibrosis, can also cause wheezing. Patients with heart failure often develop cardiac asthma caused by a pulmonary edema, in which fluid builds up in the lungs because of inefficient pumping of the heart. Less commonly, wheezing may be a symptom of tumors, joint disorders, or heart aneurysms. Radiation therapy for cancer or other diseases can also cause the airways to constrict.




Treatment and Therapy

Treatment for wheezing involves treating the underlying disorder. Doctors may order blood work or x-rays; antibiotics and antihistamines may be prescribed for allergies or infections.


Medicines are often given to manage the discomfort and anxiety this symptom causes. For chronic wheezing, respiratory inhalers are usually prescribed. Bronchodilators give temporary relief by relaxing the airways. Bronchodilators can cause dependence, however, and patients should be monitored continually by a doctor. More severe symptoms may require regular use of corticosteroid inhalers, which reduce inflammation in the airways and make them less likely to constrict.


For mild wheezing, drinking warm liquids and inhaling moist, heated air, such as from a vaporizer or a hot shower, is helpful. Severe wheezing may require hospitalization and use of a strong bronchodilator, an oxygen tent, or a respiratory tube.




Perspective and Prospects

The Western use of bronchodilators for treatment of bronchitis and asthma began in the nineteenth century, although Indian medicine had used plant derivatives for similar effect for thousands of years. Corticosteroids became standard treatment during the 1970s.


Although wheezing is a well-managed symptom, determining the precise cause is often extremely difficult. This is especially true for doctors with limited resources in developing countries, where pneumonia is the most common respiratory cause of child mortality.




Bibliography


Barnes, Peter J., and Simon Godfrey. Asthma and Wheezing in Children. New York: Taylor & Francis, 1999.



Carson-DeWitt, Rosalyn. "Asthma—Adult." Health Library, September 30, 2012.



Shuman, Jill. "Bronchitis." Health Library, June 24, 2013.



Silverman, Michael, ed. Childhood Asthma and Other Wheezing Disorders. New York: Oxford University Press, 2002.



"Wheezing." MedlinePlus, May 16, 2010.

Sunday, August 30, 2009

What are three examples of foreshadowing in the first six pages of "The Birds"?

Early in "The Birds," Du Maurier frequently uses foreshadowing to build anticipation ahead of the birds' attack. One example of this comes in the second paragraph when Nat is eating his lunch and observing the birds from the cliff top:



The birds had been more restless than ever this fall of the year.



This is Du Maurier's first hint at the transformation of the birds' behaviour. Later on, Mr Trigg provides another example of foreshadowing when he relates an incident to Nat:



One or two gulls came so close to my head this afternoon I thought they'd knock my cap off!



These two examples, then, set the scene for the attacks: the birds are more restless and have come closer to humans than ever before.


Finally, Du Maurier uses foreshadowing through the character of Mrs Trigg. When Nat tells her about the attack on his house, for instance, she displays an attitude of scepticism:



"Sure they were real birds?" she said, smiling.



This scepticism foreshadows her own demise at the hands of the birds, suggesting that she might have survived had she taken Nat's warning seriously.

Friday, August 28, 2009

What is hand-foot-and-mouth disease?


Causes and Symptoms

Hand-foot-and-mouth disease is usually caused by coxsackievirus A16, but it may also be associated with a number of other coxsackieviruses and enterovirus 71. Outbreaks of the disease are most common in the summer and early fall. Infants and young children ages one to five years are most commonly infected because they have not had previous exposure to the virus and, therefore, have less immunity than adults. They often become infected through contact with the nasal and oral secretions of infected children, and nursery school outbreaks may occur. Skin lesions and fecal material may also contribute to the spread of the virus. The incubation period is three to six days.



The illness commences with a low-grade fever (100 to 101 degrees Fahrenheit) and a sore mouth. Oral lesions begin as small, red macules and evolve rapidly into fragile vesicles that rupture, leaving painful ulcers. Any part of the mouth may be involved, but the hard palate, buccal mucosa, and tongue are mainly affected, with an average of five to ten lesions. Similar lesions develop on the skin over the next one to two days; they usually number twenty to thirty, but there may be as many as one hundred. Discrete macular lesions, about 4 millimeters in diameter, appear on the hands and feet and sometimes the buttocks. These lesions often occur along skin lines and progress to become papules and white or gray flaccid vesicles containing infective virus. The lesions may be painful or tender. The fever occurs during the first one to two days of the illness, which resolves in seven to ten days. Rarely, the viral infection is complicated by meningoencephalitis, carditis, or pneumonia.




Treatment and Therapy

There is no specific treatment for hand-foot-and-mouth disease. The infection usually resolves without complications in about one week. Topical anesthetic agents, such as viscous lidocaine, may be used to soothe the discomfort of the mouth lesions. Popsicles and cool sherbets may be given to young children to help soothe a sore mouth. Acetaminophen given at an appropriate dosage for the body weight of the child may also help to relieve the pain of this condition. Some pediatricians recommend a blend of Benadryl and liquid antacid to relieve the stinging sensation of the mouth lesions.




Perspective and Prospects

The first described outbreak of this disease occurred in Toronto, Canada, in 1957. British authors first coined the term “hand-foot-and-mouth disease” when they reported an outbreak in Birmingham, England, in 1959. While there currently are no medications available for treating enteroviral infections, a number of antiviral agents are being studied and might be useful for complicated forms of this disease, such as meningoencephalitis.




Bibliography


Barnhill, Raymond, and A. Neil Crowson, eds. Textbook of Dermatopathology. 3d ed. New York: McGraw-Hill, 2010.



Belshe, Robert B., ed. Textbook of Human Virology. 2d ed. St. Louis, Mo.: Mosby Year Book, 1991.



Goldsmith, Lowell, et al., eds. Fitzpatrick’s Dermatology in General Medicine. 8th ed. 2 vols. New York: McGraw-Hill, 2012.



"Hand, Foot, and Mouth Disease (HFMD)." Centers for Disease Control and Prevention, April 27, 2012.



Mandell, Gerald L., John E. Bennett, and Raphael Dolin, eds. Mandell, Douglas, and Bennett’s Principles and Practice of Infectious Diseases. 7th ed. New York: Churchill Livingstone/Elsevier, 2010.



McCoy, Krisha. "Hand, Foot, and Mouth Disease." Health Library, November 26, 2012.



Vorvick, Linda J., and David Zieve. "Hand-Foot-Mouth Disease." Medline Plus, August 10, 2012.

What is intelligence?


Introduction

The idea that human beings differ in their capacity to adapt to their environments, to learn from experience, to exercise various skills, and in general to succeed at various endeavors has existed since ancient times. Intelligence is the attribute most often singled out as responsible for successful adaptations. Up to the end of the nineteenth century, notions about what constitutes intelligence and how differences in intelligence arise were mostly speculative. In the late nineteenth century, several trends converged to bring about an event that would change the way in which intelligence was seen and dramatically influence the way it would be studied. That event, which occurred in 1905, was the publication of the first useful instrument for measuring intelligence, the Binet-Simon scale, which was developed in France by Alfred Binet and Théodore Simon.





Although the development of intelligence tests was a great technological accomplishment, it occurred, in a sense, somewhat prematurely, before much scientific attention had been paid to the concept of intelligence. This circumstance tied the issue of defining intelligence, and a large part of the research into its nature and origins, to the limitations of the tests that had been devised. In fact, the working definition of intelligence that many psychologists have used, either explicitly or implicitly, in their scientific and applied pursuits is the one expressed by Edwin Boring in 1923, which holds that intelligence is whatever intelligence tests measure. Most psychologists realize that this definition is circular and inadequate in that it erroneously implies that the tests are perfectly accurate and able to capture all that is meant by the concept. Nevertheless, psychologists and others have proceeded to use the tests as if the definition were true, mainly because of a scarcity of viable alternatives. The general public has also been led astray by the existence of “intelligence” tests and the frequent misuse of their results. Many people have come to think of the intelligence quotient, or IQ, not as a simple score achieved on a particular test, which it is, but as a complete and stable measure of intellectual capacity, which it most definitely is not. Such misconceptions have led to an understandable resistance toward and resentment of intelligence tests.




Changing Definitions

Boring’s semifacetious definition of intelligence may be the best known and most criticized one, but it is only one among many that have been offered. Most experts in the field have defined the concept at least once in their careers. Two of the most frequently cited and influential definitions are the ones provided by Binet himself and by David Wechsler, author of a series of “second-generation” individual intelligence tests that overtook the Binet scales in terms of the frequency with which they are used. Binet believed that the essential activities of intelligence are to judge well, to comprehend well, and to reason well. He stated that intelligent thought is characterized by direction, knowing what to do and how to do it; by adaptation, the capacity to monitor one’s strategies for attaining a desired end; and by criticism, the power to evaluate and control one’s behavior. In 1975, almost sixty-five years after Binet’s death, Wechsler defined intelligence, not dissimilarly, as the global capacity of the individual to act purposefully, to think rationally, and to deal effectively with the environment.


In addition to the testing experts (psychometricians), developmental, learning, and cognitive psychologists, among others, are also vitally interested in the concept of intelligence. Specialists in each of these subfields emphasize different aspects of it in their definitions and research.


Representative definitions were sampled in 1921, when the Journal of Educational Psychology published the views of fourteen leading investigators, and again in 1986, when Robert Sternberg and Douglas Detterman collected the opinions of twenty-four experts in a book entitled What Is Intelligence? Contemporary Viewpoints on Its Nature and Definition. Most of the experts sampled in 1921 offered definitions that equated intelligence with one or more specific abilities. For example, Lewis Terman equated it with abstract thinking, which is the ability to elaborate concepts and to use language and other symbols. Others proposed definitions that emphasized the ability to adapt or learn. Some definitions centered on knowledge and cognitive components only, whereas others included nonintellectual qualities, such as perseverance.


In comparison, Sternberg and Detterman’s 1986 survey of definitions, which is even more wide ranging, is accompanied by an organizational framework consisting of fifty-five categories or combinations of categories under which the twenty-four definitions can be classified. Some theorists view intelligence from a biological perspective and emphasize differences across species or the role of the central nervous system. Some stress cognitive aspects of mental functioning, while others focus on the role of motivation and goals. Still others, such as Anne Anastasi, choose to look on intelligence as a quality that is inherent in behavior rather than in the individual. Another major perspective highlights the role of the environment, in terms of demands and values, in defining what constitutes intelligent behavior. Throughout the 1986 survey, one can find definitions that straddle two or more categories.


A review of the 1921 and 1986 surveys shows that the definitions proposed have become considerably more sophisticated and suggests that, as the field of psychology has expanded, the views of experts on intelligence may have grown farther apart. The reader of the 1986 work is left with the clear impression that intelligence is such a multifaceted concept that no single quality can define it and no single task or series of tasks can capture it completely. Moreover, it is clear that to unravel the qualities that produce intelligent behavior, one must look not only at individuals and their skills but also at the requirements of the systems in which people find themselves. In other words, intelligence cannot be defined in a vacuum.


New intelligence research focuses on different ways to measure intelligence and on paradigms for improving or training intellectual abilities and skills. Measurement paradigms allow researchers to understand ongoing processing abilities. Some intelligence researchers include measures of intellectual style and motivation in their models.




Factor Analysis

The lack of a universally accepted definition has not deterred continuous theorizing and research on the concept of intelligence. The central issue that has dominated theoretical models of intelligence is the question of whether it is a single, global ability or a collection of specialized abilities. This debate, started in England by Charles Spearman, is based on research that uses the correlations among various measures of abilities and, in particular, the method of factor analysis, which was also pioneered by Spearman. As early as 1904, Spearman, having examined the patterns of correlation coefficients among tests of sensory discrimination and estimates of intelligence, proposed that all mental functions are the result of a single general factor, which he later designated g. Spearman equated g with the ability to grasp and apply relations. He also allowed for the fact that most tasks require unique abilities, and he named those s, or specific, factors. According to Spearman, to the extent that performance on tasks was positively correlated, the correlation was attributable to the presence of g, whereas the presence of specific factors tended to lower the correlation between measures of performance on different tasks.
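Spearman's idea can be illustrated with a minimal one-factor sketch. The correlation values below are hypothetical (not from any study cited here), and extracting the first eigenvector of the correlation matrix is only a rough stand-in for a full factor analysis, but it shows the core intuition: when all tests correlate positively, a single dominant factor accounts for much of the shared variance.

```python
import numpy as np

# Hypothetical correlation matrix for four ability tests (illustrative
# values only): all correlations positive, as Spearman observed.
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
])

# A crude one-factor solution: the eigenvector of R's largest eigenvalue,
# scaled by the square root of that eigenvalue, approximates each test's
# loading on the general factor g.
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
first = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
loadings = np.abs(first) * np.sqrt(eigvals[-1])

print(loadings)  # one estimated g loading per test, each between 0 and 1
```

Tests with higher loadings are, in Spearman's terms, more "saturated" with g; the variance a test does not share with the general factor corresponds to its specific (s) factor.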


By 1927, Spearman had modified his theory to allow for the existence of an intermediate class of factors, known as group factors, which were neither as universal as g nor as narrow as the s factors. Group factors were seen as accounting for the fact that certain types of activities, such as tasks involving the use of numbers or the element of speed, correlate more highly with one another than they do with tasks that do not have such elements in common.


Factor-analytic research has undergone explosive growth and extensive variations and refinements in both England and the United States since the 1920s. In the United States, work in this field was influenced greatly by Truman Kelley, whose 1928 book Crossroads in the Mind of Man presented a method for isolating group factors, and L. L. Thurstone, who by further elaboration of factor-analytic procedures identified a set of about twelve factors that he designated as the “primary mental abilities.” Seven of these were repeatedly found in a number of investigations, using samples of people at different age levels, that were carried out by both Thurstone and others. These group factors or primary mental abilities are verbal comprehension, word fluency, speed and accuracy of arithmetic computation, spatial visualization, associative memory, perceptual speed, and general reasoning.




Organizational Models

As the search for distinct intellectual factors progressed, their number multiplied, and so did the number of models devised to organize them. One type of scheme, used by Cyril Burt, Philip E. Vernon, and others, is a hierarchical arrangement of factors. In these models, Spearman’s g factor is placed at the top of a pyramid and the specific factors are placed at the bottom; in between, there are one or more levels of group factors selected in terms of their breadth and arranged according to their interrelationships with the more general factors above them and the more specific factors below them.


In Vernon’s scheme, for example, the ability to change a tire might be classified as a specific factor at the base of the pyramid, located underneath an intermediate group factor labeled mechanical information, which in turn would be under one of the two major group factors identified by Vernon as the main subdivisions under g—namely, the practical-mechanical factor. The hierarchical scheme for organizing mental abilities is a useful device that is endorsed by many psychologists on both sides of the Atlantic. It recognizes that very few tasks are so simple as to require a single skill for successful performance, that many intellectual functions share some common elements, and that some abilities play a more pivotal role than others in the performance of culturally valued activities.


Another well-known scheme for organizing intellectual traits is the structure-of-intellect (SOI) model developed by J. P. Guilford. Although the SOI is grounded in extensive factor-analytic research conducted by Guilford throughout the 1940s and 1950s, the model goes beyond factor analysis and is perhaps the most ambitious attempt to classify systematically all the possible functions of the human intellect. The SOI classifies intellectual traits along three dimensions—namely, five types of operations, four types of contents, and six types of productions, for a total of 120 categories (5 × 4 × 6). Intellectual operations consist of what a person actually does (for example, evaluating or remembering something), the contents are the types of materials or information on which the operations are performed (for example, symbols, such as letters or numbers), and the products are the form in which the contents are processed (for example, units or relations). Not all the 120 categories in Guilford’s complex model have been used, but enough factors have been identified to account for about one hundred of them, and some have proved very useful in labeling and understanding the skills that tests measure. Furthermore, Guilford’s model has served to call attention to some dimensions of intellectual activity, such as creativity and interpersonal skills, that had been neglected previously.
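The 120 categories of Guilford's SOI model follow directly from crossing its three dimensions. The sketch below uses the category names from Guilford's published model (the article itself names only a few, such as evaluation, symbols, units, and relations), so treat the specific labels as supplied for illustration.

```python
from itertools import product

# Guilford's three SOI dimensions (labels from his published model).
operations = ["cognition", "memory", "divergent production",
              "convergent production", "evaluation"]          # 5 operations
contents = ["figural", "symbolic", "semantic", "behavioral"]  # 4 contents
products = ["units", "classes", "relations", "systems",
            "transformations", "implications"]                # 6 products

# Each intellectual trait is one (operation, content, product) combination.
cells = list(product(operations, contents, products))
print(len(cells))  # 5 * 4 * 6 = 120 categories
```

A single cell, such as ("evaluation", "symbolic", "units"), corresponds to one narrowly defined ability, which is why the model yields so many more factors than hierarchical schemes do.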




Competence and Self-Management

Contemporary theorists in the area of intelligence have tried to avoid the reliance on factor analysis and existing tests that have limited traditional research and have tried different approaches to the subject. For example, Howard Gardner, in his 1983 book Frames of Mind: The Theory of Multiple Intelligences, starts with the premises that the essence of intelligence is competence and that there are several distinct areas in which human beings can demonstrate competence. Based on a wide-ranging review of evidence from many scientific fields and sources, Gardner designated seven areas of competence as separate and relatively independent “intelligences.” In his 1993 work Multiple Intelligences, Gardner revised his theory to include an eighth type of intelligence. This set of attributes comprises verbal, mathematical, spatial, bodily/kinesthetic, musical, interpersonal, intrapersonal, and naturalist skills.


Another theory is the one proposed by Robert Sternberg in his 1985 book Beyond IQ: A Triarchic Theory of Human Intelligence. Sternberg defines intelligence, broadly, as mental self-management and stresses the “real-world,” in addition to the academic, aspects of the concept. He believes that intelligent behavior consists of purposively adapting to, selecting, and shaping one’s environment and that both culture and personality play significant roles in such behavior. Sternberg posits that differences in IQ scores reflect differences in individuals’ stages of developing the expertise measured by the particular IQ test, rather than attributing these scores to differences in intelligence, ability, or aptitude. Sternberg’s model has five key elements: metacognitive skills, learning skills, thinking skills, knowledge, and motivation. The elements all influence one another. In this work, Sternberg claims that measurements derived from ability and achievement tests differ not in kind but only in the point at which the measurements are being made.




Intelligence and Environment

Theories of intelligence are still grappling with the issues of defining its nature and composition. Generally, newer theories do not represent radical departures from the past. They do, however, emphasize examining intelligence in relation to the variety of environments in which people actually live rather than to only academic or laboratory environments. Moreover, many investigators, especially those in cognitive psychology, are more interested in breaking down and replicating the steps involved in information processing and problem solving than they are in enumerating factors or settling on a single definition of intelligence. These trends hold the promise of moving the work in the field in the direction of devising new ways to teach people to understand, evaluate, and deal with their environments more intelligently instead of simply measuring how well they do on intelligence tests. In their 1998 article “Teaching Triarchically Improves School Achievement,” Sternberg and his colleagues note that teaching or training interventions can be linked directly to components of intelligence. Motivation also plays a role. In their 2000 article “Intrinsic and Extrinsic Motivation,” Richard Ryan and Edward Deci provide a review of contemporary thinking about intrinsic and extrinsic motivation. The authors suggest that the use of motivational strategies should promote student self-determination.


The most heated of all the debates about intelligence is the one regarding its determinants, often described as the nature-nurture controversy. The nature side of the debate was spearheaded by Francis Galton, a nineteenth-century English scientist who had become convinced that intelligence was a hereditary trait. Galton’s followers tried to show, through studies comparing identical and nonidentical twins reared together and reared apart and by comparisons of people related to each other in varying degrees, that genetic endowment plays a far larger role than the environment in determining intelligence. Attempts to quantify an index of heritability for intelligence through such studies abound, and the estimates derived from them vary widely. On the nurture side of the debate, massive quantities of data have been gathered in an effort to show that the environment, including factors such as prenatal care, social-class membership, exposure to certain facilitative experiences, and educational opportunities of all sorts, has the more crucial role in determining a person’s level of intellectual functioning.


Many critics, such as Anastasi (in a widely cited 1958 article entitled “Heredity, Environment, and the Question ’How?’”) have pointed out the futility of debating how much each factor contributes to intelligence. Anastasi and others argue that behavior is a function of the interaction between heredity and the total experiential history of individuals and that, from the moment of conception, the two are inextricably tied. Moreover, they point out that, even if intelligence were shown to be primarily determined by heredity, environmental influences could still modify its expression at any point. Most psychologists now accept this “interactionist” position and have moved on to explore how intelligence develops and how specific genetic and environmental factors affect it.




Bibliography


Alloway, Tracy Packiam, and Ross Alloway. Working Memory: The Connected Intelligence. New York: Psychology Press, 2013. Print.



Fancher, Raymond E. The Intelligence Men: Makers of the IQ Controversy. New York: W. W. Norton, 1987. Print.



Flynn, James R. What Is Intelligence? Beyond the Flynn Effect. New York: Cambridge University Press, 2009. Print.



Gardner, Howard. Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books, 2004. Print.



Gardner, Howard. Multiple Intelligences: The Theory in Practice. New York: Basic Books, 2006. Print.



Guilford, Joy Paul. The Nature of Human Intelligence. New York: McGraw-Hill, 1967. Print.



Kaufman, Scott Barry. Ungifted: Intelligence Redefined. New York: Basic, 2013. Print.



Martinez, Michael E. Future Bright: A Transforming Vision of Human Intelligence. New York: Oxford UP, 2013. Print.



Murdoch, Stephen. IQ: A Smart History of a Failed Idea. Hoboken, N.J.: John Wiley & Sons, 2007. Print.



Ryan, R. M., and E. L. Deci. “Intrinsic and Extrinsic Motivation.” Contemporary Educational Psychology 25 (2000): 54–67. Print.



Sternberg, Robert J. Successful Intelligence. New York: Plume, 1997. Print.



Sternberg, Robert J. The Triarchic Mind: A New Theory of Human Intelligence. New York: Viking Penguin, 1989. Print.



Sternberg, Robert J., B. Torff, and E. L. Grigorenko. “Teaching Triarchically Improves School Achievement.” Journal of Educational Psychology 90 (1998): 374–84. Print.



Vernon, Philip Ewart. Intelligence: Heredity and Environment. San Francisco: W. H. Freeman, 1979. Print.

Besides food shortages, what other shortages were there in My Brother Sam Is Dead by Christopher and James L. Collier?

In My Brother Sam Is Dead, by Christopher and James Collier, there are many shortages in addition to food.


As in all times of war, those involved in the fighting suffer many shortages of things that they are otherwise accustomed to. Some of these would include the following: 


There was a shortage of warm and appropriate clothing. Once the clothing they arrived with was torn, worn out, or lost, it was often difficult to have it replaced.


There was a shortage of horses. The soldiers needed horses to carry them from place to place and to haul supplies from one battlefield to another.


There was also a shortage of medical personnel to tend the wounded. Both doctors and nurses were needed to take care of and perform surgery on soldiers when necessary.

Thursday, August 27, 2009

What is methicillin-resistant staph infection?


Definition

A methicillin-resistant Staphylococcus aureus (MRSA) infection is caused by a strain of the bacterium Staphylococcus aureus that has become resistant to methicillin and related antibiotics. The bacterium can affect the skin, blood, bones, or lungs. A person can be either infected or colonized with MRSA. When a person is infected, the bacterium produces symptoms. A colonized person also carries the bacterium, but it may not cause any symptoms.




There are two types of MRSA infection: community-acquired and nosocomial. People who have community-acquired MRSA infection were infected outside a hospital setting (in a dormitory, for example). Nosocomial MRSA infections occur in hospital settings.




Causes

An MRSA infection can spread through several mechanisms, including from contaminated surfaces, from person to person, and from one area of the body to another.




Risk Factors

The following factors increase the chance of community-acquired infection: impaired immunity, sharing crowded spaces (such as dormitories and locker rooms), using intravenous drugs, serious illness, exposure to animals (as pet owners, veterinarians, and pig farmers, for example), using antibiotics, having a chronic skin disorder, and past MRSA infection. Also at higher risk are young children, athletes, prisoners, and military personnel.


For nosocomial infection, the risk factors are impaired immunity, exposure to hospital or clinical settings, advanced age, chronic illness, using antibiotics, having a wound, living in a long-term-care center, and having an indwelling medical device (such as a feeding tube or intravenous catheter). Also, men are at higher risk.




Symptoms

The symptoms of MRSA include folliculitis (infection of hair follicles), boils (a skin infection that may drain pus, blood, or an amber-colored liquid), scalded skin syndrome (a skin infection characterized by a fever, rash, and sometimes blisters), impetigo (large blisters on the skin), toxic shock syndrome (a rare but serious bacterial infection whose primary symptoms are a rash and high fever), cellulitis (a skin infection characterized by a swollen, red area that spreads quickly), and an abscess.




Screening and Diagnosis

A doctor will ask about symptoms and medical history and will perform a physical exam. Tests may include cultures, blood tests, urine tests, and a skin biopsy (removal of a sample of skin to test for infection).




Treatment and Therapy

Treatment options include medications such as antibiotics, prescribed to kill the bacteria, and incision and drainage of an abscess, in which the doctor (but not the patient) opens the abscess and allows the fluid to drain. Another treatment is cleansing the skin. To treat the infection and to keep it from spreading, one should wash skin with an antibacterial cleanser, apply an antibiotic, and cover skin with a sterile dressing.




Prevention and Outcomes

To help reduce the chance of getting an MRSA infection, one should thoroughly wash hands with soap and water, keep cuts and wounds clean and covered until healed, and avoid contact with other people’s wounds and with materials contaminated by wounds. Visitors to hospitalized persons, as well as health care workers, may be required to wear special clothing and gloves to prevent spreading the infection to others.




Bibliography


Archer, G. L. “Staphylococcal Infections.” Andreoli and Carpenter’s Cecil Essentials of Medicine. Ed. Thomas E. Andreoli, et al. 8th ed. Philadelphia: Saunders, 2010. Print.



Centers for Disease Control and Prevention. “Seasonal Flu and Staph Infection.” CDC.gov. Web. http://www.cdc.gov/flu/about/qa/flustaph.htm.



Crossley, Kent B., Kimberly K. Jefferson, and Gordon L. Archer, eds. Staphylococci in Human Disease. Hoboken: Wiley, 2009. Print.



Laibl, V. R., et al. “Clinical Presentation of Community-Acquired Methicillin-Resistant Staphylococcus aureus in Pregnancy.” Obstetrics and Gynecology 106 (2005): 461–65. Print.



"Methicillin-resistant Staphylococcus aureus (MRSA) Infections." CDC.gov. Centers for Disease Control and Prevention, 2015. Web. 31 Dec. 2015.

What is social perception?


Introduction

Social perception deals with two general classes of cognitive-perceptual processes through which people process, organize, and recall information about others. Those that deal with how people form impressions of other people’s personalities (called person perception) form the first class. The second class includes those processes that deal with how people use this information to draw conclusions about other people’s motivations, intentions, and emotions to explain and predict their behavior (called attribution processes). The importance of social perception in social psychology lies in the fact that people’s impressions and judgments about others, whether accurate or not, can have profound effects on their own and others’ behavior.




People are naturally motivated to understand and predict the behavior of those around them. Being able to predict and understand the social world gives people a sense of mastery and control over their environment. Psychologists who study social perception have shown that people try to make sense of their social worlds by determining whether other people’s behavior is produced by a disposition—some internal quality or trait unique to a person—or by something in the situation or environment. The study of how people make such determinations, a process called causal attribution, was developed by social psychologists Fritz Heider, Edward Jones, Keith Davis, and Harold Kelley in the late 1950s and early 1960s.


According to these attribution theorists, when people decide that a person’s behavior reflects a disposition (when, for example, someone decides that a person is friendly because he acted friendly), people have made an internal or dispositional attribution. In contrast, when people decide that a person’s behavior was caused by something in the situation—he acted in a friendly way to make someone like him—they have made an external or situational attribution. The attributions people make for others’ behaviors carry considerable influence in the impressions they form of them and in how they will behave toward them in the future.




Inaccuracies and Biases

People’s impressions and attributions are not always accurate. For example, in many situations, people seem to be inclined to believe that other people’s behavior is caused by dispositional factors. At the same time, they believe that their own behavior is the product of situational causes. This tendency has been called the actor-observer bias.
Moreover, when people try to explain the causes of other people’s behavior, especially behavior that is clearly and obviously caused by situational factors (factors such as a coin flip, a dice roll, or some other situational inducement), they tend to underestimate situational influences and overestimate the role of dispositional causes. This tendency is referred to as correspondence bias, or the fundamental attribution error. In other words, people prefer to explain other people’s behavior in terms of their traits or personalities rather than in terms of situational factors, even when situational factors actually caused the behavior.


In addition to these biases, social psychologists have examined other ways in which people’s impressions of others and inferences about the causes of their behavior can be inaccurate or biased. In their work, for example, psychologists Daniel Kahneman and Amos Tversky have described a number of simple but efficient thinking strategies, or rules of thumb, called heuristics. The availability heuristic is the tendency to explain behaviors on the basis of causes that are easily or quickly brought to mind. Similarly, the representativeness heuristic is the tendency to believe that a person who possesses characteristics that are associated with a social group probably belongs to that group. Although heuristics make social thinking more efficient and yield reasonable results most of the time, they can sometimes lead to significant judgment errors.




Influence of Schemata and Primacy Effect

Bias can also arise in social perception in a number of other ways. Because of the enormous amount of social information that people must process at any given moment, they have developed various ways of organizing, categorizing, and simplifying this information and the expectations they have about various people, objects, and events. These organizational structures are called schemata. For example, schemata that organize information about people’s membership in different categories or groups are called stereotypes or prototypes. Schemata that organize information about how traits go together in forming a person’s personality are called implicit personality theories (IPTs). Although schemata, like heuristics, help make social thinking more efficient and yield reasonable results most of the time, they can also sometimes lead to significant judgment errors, such as prejudice and discrimination.


Finally, social perception can be influenced by a variety of factors of which people are unaware but that can exert tremendous influence on their thinking. Social psychologist Solomon Asch was the first to describe the primacy effect in impression formation. The primacy effect is the tendency for things that are seen or received first to have a greater impact on people’s thinking than things that come later. Many other things in the environment can prime people, or make them “ready,” to see, interpret, or remember things that they might not otherwise have seen, thought about, or remembered. Priming occurs when something in the environment makes certain things easier to bring to mind.


During the 1970s and 1980s, social psychologists made numerous alterations and extensions of the existing theories of attribution and impression formation to keep pace with the field’s growing emphasis on mental (cognitive) and emotional (affective) processes. These changes focused primarily on incorporating work from cognitive psychology on memory processes, the use of schemata, and the interplay of emotion, motivation, and cognition.




Stereotype and Conflict Research

Social psychologists have argued that many social problems have their roots in social perception processes. Because social perception biases can sometimes result in inaccurate perceptions, misunderstandings, conflict between people and groups, and other negative consequences, social psychologists have spent much time and effort trying to understand them. Their hope is that by understanding such biases they will be able to suggest solutions for them. In a number of experiments, social psychologists have attempted to understand the social perception processes that may lead to stereotyping, which can result in prejudice and discrimination.


For example, one explanation for why stereotypes are so hard to change once they have been formed is the self-fulfilling prophecy. Self-fulfilling prophecies occur when people have possibly inaccurate beliefs about others (such as stereotypes) and act on those beliefs, bringing about the conditions necessary to make those beliefs come true. In other words, when people expect something to be true about another person (especially a negative thing), they frequently look for and find what they expect to see. At other times, they actually bring out the negative (or positive) qualities they expect to be present. In a classic 1968 study by social psychologists Robert Rosenthal and Lenore Jacobson, for example, children whose teachers expected them to show a delayed but substantial increase in their intelligence (on the basis of a fictitious intelligence test) actually scored higher on a legitimate intelligence quotient (IQ) test administered at the end of the school year. Presumably, the teachers’ expectations of those students caused them to treat those students in ways that actually helped them perform better. Similarly, social psychologists Rebecca Curtis and Kim Miller have shown that when people think someone they meet likes them, they act in ways that lead that person to like them. If, however, people think a person dislikes them, they act in ways that actually make that person dislike them.


The behaviors that produce self-fulfilling prophecies can be subtle. For example, in 1974, social psychologists Carl Word, Mark Zanna, and Joel Cooper demonstrated that the subtle behaviors of interviewers during job interviews can make applicants believe that they performed either poorly or very well. These feelings, in turn, can lead to actual good or poor performance on the part of the applicants. What was most striking about this study, however, was that the factor that led to the subtle negative or positive behaviors was the interviewers’ stereotypes of the applicants’ racial group membership. Black applicants received little eye contact from interviewers and were not engaged in conversation; the behaviors displayed by interviewers in the presence of white applicants were exactly the opposite. Not surprisingly, black applicants were seen as less qualified and were less likely to be hired. Clearly, subtle behaviors produced by racial stereotypes can have major consequences for the targets of those stereotypes.




Primacy Effect in Academic Settings

The relevance of social perception processes to everyday life is not restricted to stereotyping, although stereotyping is indeed an important concern. In academic settings, for example, situational factors can lead teachers to form impressions of students that have little bearing on their actual abilities. Social psychologist Jones and his colleagues examined the way in which primacy effects operate in academic settings. Two groups of subjects saw a student perform on a test. One group saw the student start out strong and then begin to do poorly. The other group saw the student start out poorly and then begin to improve. For both groups, the student’s performance on the test was identical, and the student received the same score. The group that saw the student start out strong and then falter thought the student was brighter than the student who started out poorly and then improved. Clearly, first impressions matter.




Correspondence Bias

Finally, research on the correspondence bias (fundamental attribution error) makes it clear that people must be very careful when trying to understand what other people are like. In many situations, the demands of people’s occupations or their family roles force them to do things with which they may not actually agree. Substantial research has shown that observers will probably think these people have personalities that are consistent with their behaviors. Lawyers, who must defend people who may have broken the law; debaters, who must argue convincingly for or against a particular point of view; and actors, who must play parts that they did not write, are all vulnerable to being judged on the basis of their behavior. Unless observers recognize that the behaviors these people display while doing their jobs reveal nothing about their true personalities, observers may incorrectly believe that they do.




Influential Research and Theories

The study of social perception has multiple origins that can be traced back to a number of influential researchers and theorists. It was one of the first topics to be emphasized when the modern study of social psychology began during World War II. Later perspectives on social perception processes can be traced to the early work of Asch, a social psychologist who emigrated to the United States before the war. His work yielded important demonstrations of primacy effects in impression formation.


Also important to the development of an understanding of social perception was the work of Heider, an Austrian émigré who came to the United States as World War II was ending in Europe. Heider’s influential book The Psychology of Interpersonal Relations (1958) arguably started the cognitive approach to social perception processes. In many circles, it is still regarded as a watershed of ideas and insights on person perception and attribution.


Perhaps the most important historical development leading up to the modern study of social perception, however, was the work of Jerome Bruner
and other “new look” cognitive psychologists. Following World War II, a number of psychologists broke with the then-traditional behaviorist/learning theory perspective and applied a Gestalt perspective to human perception. They emphasized the subjective nature of perception and interpretation and argued that both cognition (thinking) and situational context are important in determining “what” it is that a person perceives. Using ambiguous figures, for example, they demonstrated that the same object can be described in many different ways depending on the context in which it is seen.




Theoretical Commonalities

Although their perspectives differ, theorists of social perception generally share some common themes. First, they all acknowledge that social perception is inherently subjective; the most important aspect of understanding people is not what is “true” in an objective sense, but rather what is believed to be true. Second, they acknowledge that people think about other people and want to understand why people do the things they do. Finally, they believe that some general principles govern the ways in which people approach social perception and judgment, and they set out to demonstrate these principles scientifically.




Bibliography


Alicke, Mark D., David A. Dunning, and Joachim I. Krueger, eds. The Self in Social Judgment. New York: Psychology, 2005. Print.



Carlston, Donal E., ed. The Oxford Handbook of Social Cognition. New York: Oxford UP, 2013. Print.



Deaux, Kay, and Gina Philogene, eds. Representations of the Social: Bridging Theoretical Traditions. Malden: Blackwell, 2001. Print.



Forgas, Joseph P., ed. Affect in Social Thinking and Behavior. New York: Psychology, 2006. Print.



Gilbert, Daniel T., ed. The Selected Works of Edward E. Jones. Hoboken: Wiley, 2004. Print.



Hewstone, Miles, Wolfgang Stroebe, and Klaus Jonas, eds. An Introduction to Social Psychology. 5th ed. Chichester: Wiley, 2012. Print.



Johnson, Kerri L., and Maggie Shiffrar, eds. People Watching: Social, Perceptual, and Neurophysiological Studies of Body Perception. New York: Oxford UP, 2014. Print.



Strack, Fritz, and Jens Forster, eds. Social Cognition: The Basis of Human Interaction. New York: Psychology, 2009. Print.



Teiford, Jenifer B., ed. Social Perception: Twenty-first Century Issues and Challenges. New York: Nova Science, 2008. Print.

What keeps George and Lennie together in Of Mice and Men by John Steinbeck?

George's sense of duty to Lennie, and to Lennie's Aunt Clara, is the biggest reason that the two men travel around together. George grew up with Lennie, so there's an emotional connection to home, as well as his brotherly bond, that keeps him watching out for Lennie. George tells Slim that he used to play jokes on Lennie when they were younger. As George matured, he realized that Lennie not only didn't understand when he was being bullied, but he also thanked George for helping him with the jokes. That made George think that maybe he should stop being a problem for Lennie and start being the solution.


Then, when Aunt Clara died, Lennie didn't have anyone else and George says, "Lennie just come along with me out workin'. Got kinda used to each other after a little while" (40). Add all of these reasons to the fact that Lennie could not take care of himself if he were alone and George is stuck; but at least he cares for Lennie, too. Finally, George admits to Slim that life can get lonely as a transient worker and having someone to talk to helps keep the loneliness to a minimum. George also admits the following:



"I ain't got no people. . . I seen the guys that go around on the ranches alone. That ain't no good. They don't have no fun. After a long time they get mean. They get wantin' to fight all the time" (41).



Thus, George keeps a look out for Lennie because of his sense of duty, kindness, and to keep the loneliness of their life from making them mean.

What was meant by the term "Second New Deal"?

The New Deal was a program of economic reform put forth by President Franklin D. Roosevelt. From 1933 to 1939, changes were made in agriculture and resource distribution, housing, and labor conditions. The New Deal was intended to remedy the effects of the Great Depression and focused on aid for the poor, the elderly, and the unemployed. Perhaps the most famous change resulting from the New Deal was the establishment of the Social Security program, which ensures a financial safety net for people who cannot work or otherwise afford their needs.


Most historians use the term "Second New Deal" to refer to the second part of Roosevelt's reforms. Between 1935 and 1938, Roosevelt promoted labor unions to improve working conditions, signed off on the Social Security Act, and created the United States Housing Authority. While the first part of the New Deal had more to do with "big-picture" economic crises like the stock market crash, the Second New Deal sought to reform troubles experienced on a more day-to-day basis, like poverty.

Wednesday, August 26, 2009

What are some ideas for a letter from George about his feelings about the other characters on the ranch in Of Mice and Men?

While I don't want to write your letter for you, I will give you some help with content. George and Lennie travel around California as itinerant workers and come to the ranch to "buck barley" because it is harvesting time in the Salinas Valley. In chapter two the reader is introduced to the main characters on the ranch, including Candy, Crooks, Slim, Curley and his wife.


In George's letter, he would probably first reveal his feelings toward Lennie. While Lennie often "does bad things" and gets George in trouble or loses him a job, the reader may assume that George loves his friend and will try to protect him. That's why he tells Lennie to avoid both Curley and Curley's wife. Unfortunately, Lennie is involved in ugly incidents with both of them. He fights Curley in chapter three and breaks the man's hand while also making him a bitter enemy. He accidentally kills Curley's wife when she tempts him into touching her hair. George would acknowledge his deep feelings of sorrow and regret over having to kill Lennie. He would explain that he felt he had no choice and might use the words of Candy (Candy tells George he should have shot his dog himself) and Slim (who tells George it won't be good if Curley or the law gets to Lennie first) to justify his action.


George would definitely say nice things about Slim, Candy and Crooks. He would say that Slim was a wise man who understood his final action toward Lennie. He might even say that he and Slim were now good friends because they walked away from the river together in the final scene. He would explain the situation with Candy and how the three of them were all set to buy that "little piece of land" because Candy was going to contribute a significant amount of money. He would feel sorry for Candy because the dream never materialized after the incident with Curley's wife. George might also have kind words for Crooks, who treated Lennie well on the night Lennie entered the black man's room. He definitely wouldn't have any bad feelings about the stable buck.


That could not be said for Curley and his wife. He would concede his hatred for Curley even before the fight with Lennie. He would lament the fact that he and Lennie had to work on a ranch where Curley was the boss's son. George would directly blame Curley's wife for Lennie's ultimate death. He might call her jailbait or a tramp. He would say that the ranch was no place for a girl such as her.


Finally, George might chastise himself for not doing a better job of protecting Lennie. He should have listened to Lennie in chapter two when his friend wanted to leave because "This ain't no good place."

A car approaches a curve on an icy road. Explain why the car might slide as it moves into the curve using all the following terms: friction,...

Any mass in motion has inertia, the property described by Newton's first law of motion: a body in motion tends to stay in motion. The only way to change the car's forward motion is to apply a force that counters its inertia. This can happen suddenly, as when the car strikes an object, or gradually, as when the car turns. The wheels, through their interaction with the ground, provide the friction necessary to redirect the car away from its straight-line path. 


However, the ice has a very low coefficient of friction, meaning that it is unlikely to provide a strong force that resists the car's forward motion. The car has the potential to slide due to a combination of its inertia and the low friction of the ice; if the ice does not provide sufficient friction to redirect the car, it's more likely that the inertia will overcome the friction and cause the car to continue along its original path. 
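The reasoning above can be put in rough quantitative terms: the turn requires a centripetal acceleration of v²/r, while friction can supply at most about μg, so the car slides when v²/r exceeds μg. The sketch below illustrates this with assumed, illustrative values for speed, curve radius, and coefficients of friction (they are not from the text).

```python
# Sketch: will a car slide on a curve? Compare the centripetal
# acceleration the turn requires (v^2 / r) with the maximum
# acceleration friction can supply (roughly mu * g).

def will_slide(speed_m_s, radius_m, mu, g=9.81):
    """Return True if friction cannot redirect the car through the curve."""
    required = speed_m_s ** 2 / radius_m   # centripetal acceleration needed
    available = mu * g                     # max friction-supplied acceleration
    return required > available

# Same 50 m curve taken at 15 m/s on dry asphalt (mu ~ 0.7) vs. ice (mu ~ 0.1)
print(will_slide(15, 50, 0.7))  # False: dry road can redirect the car
print(will_slide(15, 50, 0.1))  # True: ice cannot, so inertia carries it straight
```

Note that the car's mass cancels out of the comparison: a heavier car needs more centripetal force, but friction grows in proportion, which is why the coefficient of friction is the deciding factor.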

How are the weaknesses and strengths represented in Lord of the Flies and what is the significance of the relationship between them?

The four main characters in Lord of the Flies are well-developed characters with obvious strengths and weaknesses. Ralph's strengths are his willingness to put others first and his cooperativeness; his weaknesses are a lack of natural leadership and lack of focus. Piggy is highly intelligent and has a strong sense of right and wrong; unfortunately, he is obnoxious, physically weak, and whiny. Jack has an innate ability to get people to do what he wants and has drive; however, he is selfish and immature. Simon is insightful and kind, but he can't express himself well with words and has epilepsy. 


The significance of how these strengths and weaknesses play out in the novel is that the people with the strengths that will be most beneficial to the society have weaknesses that get in the way, allowing Jack's strengths to lead the boys in a downward path. Although Ralph, Piggy, and Simon have the characteristics that the boys need to create a functioning society--altruism, intelligence, and insight--their weaknesses prevent them from effectively keeping the other boys in line. Even though Jack is selfish and childish, his strong leadership skills and ability to manipulate others allow him to gain control over the boys. The relationships between the boys' strengths and weaknesses demonstrate that someone who lacks principles but has charisma can sometimes be more successful in garnering and maintaining a following than those who have the best interests of others at heart but don't have the necessary people skills.

What does the farm changing to a republic symbolize?

The farm changing to a republic symbolizes the ascent of Napoleon as supreme leader of Animal Farm: in fact, it symbolizes the turning point when the farm is about to become a tyranny. 


Earlier, the animals had been taught to anticipate a republic as a good thing, a sign of animal equality and solidarity:



The flag was green, Snowball explained, to represent the green fields of England, while the hoof and horn signified the future Republic of the Animals which would arise when the human race had been finally overthrown.



Then, in the April after Snowball is run off the farm:



Animal Farm was proclaimed a Republic, and it became necessary to elect a President. There was only one candidate, Napoleon, who was elected unanimously.



The declaration of the "Republic" and the election of Napoleon as President are both merely ruses to allow him to seize power. Now that he is in charge, he will run roughshod over the rights of the other animals. This move to a "Republic" also represents or allegorizes the betrayal of the communist revolution in the USSR by Stalin, who similarly took control and became a tyrant. 

Tuesday, August 25, 2009

What is food biochemistry?


Structure and Functions

Food biochemistry is concerned with the breakdown of food in the cell as a source of energy. Each cell is a factory that converts the
nutrients of the food one eats to energy and other structural components of the body. The amount of energy that these nutrients supply is expressed in Calories (kilocalories). The number of Calories consumed will determine the energy balance of the individual and whether one loses or gains weight. The nutrients come in a variety of forms, but they can be divided into three major categories: carbohydrates, lipids (fats), and proteins. These nutrients are broken down by the cell metabolically to produce energy for cellular processes. Other components are used by the cell and the entire body for structure and transport. Each of these nutrients is essential to a well-balanced diet and good health. Two other components of a successful diet are vitamins and minerals.



Carbohydrates are molecules composed of carbon, hydrogen, and oxygen. They range from the simple
sugars all the way to the complex carbohydrates. The simplest carbohydrates are the monosaccharides (one-sugar molecules), primarily glucose and fructose. The simple monosaccharides are usually joined to form disaccharides (two-sugar molecules), such as sucrose (glucose and fructose, or cane sugar), lactose (glucose and galactose, or milk sugar), and maltose (two glucose molecules, found in grains). The complex carbohydrates are the polysaccharides (multiple-sugar molecules), which are composed of many monosaccharides, usually glucose. There are two main types: starch, which is found in plants such as potatoes, and glycogen, the form in which humans store carbohydrate energy for the short term (up to twelve hours) in the liver.


The next major group of molecules is the lipids, which are made up of the solid fats and liquid oils. These molecules are primarily composed of carbon and hydrogen. They form three major groups: the triglycerides, phospholipids, and sterols. The triglycerides are composed of three fatty acids attached to a glycerol (a three-carbon molecule); this is the group that makes up the fats and oils. Fats, which are primarily of animal origin, are triglycerides that are solid at room temperature; such triglycerides are saturated, which means that there are no double bonds between their carbon atoms. Oils are liquid at room temperature, primarily of plant origin, and either monounsaturated or polyunsaturated (there are one or more double bonds between the carbon atoms in the chain). This group provides long-term energy in humans and is stored as adipose (fat) tissue. Fat stores approximately nine Calories of energy per gram, about twice that of carbohydrates and proteins (four Calories per gram). The adipose tissue also provides important insulation in maintaining body temperature. Phospholipids are similar to triglycerides in structure, but they have two fatty acids and a phosphate attached to the glycerol molecule. Phospholipids are the building blocks of the cell membranes that form the barrier between the inside and the outside of the cell. Sterols are considered lipids, but they have a completely different structure. This group includes cholesterol, vitamin D, estrogen, and testosterone. The sterols function in the structure of the cell membrane (as cholesterol does) or as hormones (as do testosterone and estrogen).


The last major group of molecules is that of the proteins, which are composed of carbon, hydrogen, oxygen, and nitrogen. Proteins are long chains of amino acids; each protein is composed of varying amounts of the twenty different amino acids. Proteins are used in the body as enzymes, substances that catalyze (generally, speed up) the biochemical reactions that take place in cells. They also function as transport molecules (such as hemoglobin, which transports oxygen) and provide structure (as does keratin, the protein in hair and nails). The human body can synthesize eleven of the twenty amino acids; the other nine are considered essential amino acids because they cannot be synthesized and are required in the diet.


Two other groups of essential compounds are necessary in the diet for the body’s metabolism: vitamins and minerals. The vitamins are organic compounds (made up of carbon) that are required in only milligram or microgram quantities per day. The vitamins are classified into two groups: the water-soluble vitamins (the B vitamins and vitamin C) and the fat-soluble vitamins (vitamins A, D, E, and K). Vitamins are vital components of enzymes.


The
minerals are inorganic nutrients that can be divided into two classes, depending on the amounts needed by the body. The major minerals are required in amounts greater than one hundred milligrams per day; these minerals are calcium, phosphorus, magnesium, sodium, chloride, sulfur, and potassium. The trace elements, those needed in amounts of only a few milligrams per day, are iron, zinc, iodine, fluoride, copper, selenium, chromium, manganese, and molybdenum. Although they are required in small quantities, the minerals play an important role in the human body. Calcium is involved in bone and teeth formation and muscle contractions. Iron is found in hemoglobin and aids in the transportation of oxygen throughout the body. Potassium helps nerves send electrical impulses. Sodium and chloride maintain water balance in tissues and vascular fluid.


An adequate diet is one that supplies the body and cells with sources of energy and building blocks. The first priority of the diet is to supply the bulk nutrients—carbohydrates, fats, and proteins. An average young adult requires between 2,100 and 2,900 Calories per day, taking into account the amount of energy required for rest and work. Carbohydrates, fats, and proteins are taken in during a meal and digested—that is, broken into smaller components. Starch is broken down to glucose, and sucrose is broken down to fructose and glucose and absorbed by the bloodstream. Fats are broken down to triglycerides, and proteins are divided into their separate amino acids, to be absorbed by the bloodstream and transported throughout the body. Each cell then takes up essential nutrients for energy and to use as building parts of the cell.
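The energy arithmetic described above (four Calories per gram for carbohydrates and proteins, roughly nine for fats) can be illustrated with a short calculation; the intake amounts below are hypothetical examples, not recommendations.

```python
# Energy density of the bulk nutrients, in Calories (kilocalories) per gram.
CALORIES_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def total_calories(grams_by_nutrient):
    """Sum the Calories supplied by each macronutrient in a day's intake."""
    return sum(CALORIES_PER_GRAM[n] * g for n, g in grams_by_nutrient.items())

# Hypothetical daily intake: 300 g carbohydrate, 80 g protein, 70 g fat.
intake = {"carbohydrate": 300, "protein": 80, "fat": 70}
print(total_calories(intake))  # 300*4 + 80*4 + 70*9 = 2150 Calories
```

Comparing the result with the 2,100 to 2,900 Calories an average young adult requires shows whether a given intake is roughly in energy balance.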


Once the nutrients enter the cell, they are broken down into energy through a series of metabolic reactions. The first step in the metabolic process is called glycolysis. Glucose is broken down through a series of reactions to produce
adenosine triphosphate (ATP), a molecule used to fuel other biochemical pathways in the cell. Glycolysis gives off a small amount of energy and does not require oxygen. This process can provide the energy for a short sprint; lactic acid buildup in the muscles will lead to fatigue, however, if there is insufficient oxygen.


Long-term energy requires oxygen. Aerobic respiration can metabolize not only the sugars produced by glycolysis but triglycerides and amino acids as well. The molecules enter what is called the Krebs cycle, an aerobic pathway that provides eighteen times more energy than glycolysis. The waste products of this pathway are carbon dioxide and water, which are exhaled.
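The relative yields of the two pathways can be checked with simple arithmetic. The figure of about two net ATP per glucose from glycolysis alone is a standard textbook value assumed here, not stated in the article; combined with the eighteenfold advantage mentioned above, it gives the classic aerobic yield of about thirty-six ATP per glucose.

```python
# Classic textbook energy yields per glucose molecule (exact figures vary by source).
ATP_GLYCOLYSIS = 2        # net ATP from glycolysis alone, no oxygen required
AEROBIC_MULTIPLIER = 18   # aerobic respiration via the Krebs cycle yields ~18x more
ATP_AEROBIC = ATP_GLYCOLYSIS * AEROBIC_MULTIPLIER
print(ATP_AEROBIC)  # 36
```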




Disorders and Diseases

Diet plays a major role in the metabolism of the cells. One major problem in diet is the overconsumption of Calories, which can lead to weight gain and eventual obesity. Obesity is defined as being 20 percent over one’s ideal weight for one’s body size. A number of problems are associated with obesity, such as high blood pressure, high levels of cholesterol, increased risk of cancer, heart disease, and early death.
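The article's working definition of obesity (20 percent over one's ideal weight) translates directly into a simple check; the weights used below are hypothetical.

```python
def is_obese(weight, ideal_weight):
    """Apply the article's definition: more than 20 percent over ideal weight.

    Both arguments must use the same unit (e.g., kilograms or pounds).
    """
    return weight > 1.20 * ideal_weight

# Hypothetical example: with an ideal weight of 70 kg, the threshold is 84 kg.
print(is_obese(90, 70))  # True
print(is_obese(80, 70))  # False
```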


At the other end of the scale is malnutrition. Carbohydrates are the preferred energy source in the form of either blood glucose or glycogen, which is found in the liver and muscles. This source gives a person approximately four to twelve hours of energy. Long-term storage of energy occurs as fat, which constitutes anywhere from 15 percent to 25 percent of body composition. During times of starvation, when the carbohydrate reserve is almost zero, fat will be mobilized for energy. Fat will also be used to make glucose for the blood because the brain requires glucose as its energy source. In extreme starvation, the body will begin to degrade the protein in muscles down to its constituent amino acids in order to produce energy.


Malnutrition can also occur if essential vitamins and minerals are excluded from the diet. Vitamin deficiencies affect the metabolism of the cell since these compounds are often required to aid the enzymes in producing energy. A number of medical problems are associated with vitamin deficiencies. A deficiency in thiamine will result in the metabolic disorder called beriberi. A loss of the thiamine found in wheat and rice can occur in the refinement process, making it more difficult to obtain enough of this vitamin from the diet. Alcoholics have an increased thiamine requirement, to help in the metabolism of alcohol, and usually have a low level of food consumption; thus, they are at risk for developing beriberi. Lack of vitamin A results in night blindness, and lack of vitamin D results in rickets, in which the bones are weakened as a result of poor calcium uptake.


One interesting feature about diet and the metabolic pathways concerns how different molecules are treated. Some people mistakenly believe that it is better to eat fruit sugar than other sugars. Fruit sugar, the simple sugar fructose, is chemically related to glucose. Because glucose and fructose are converted to each other during glycolysis, it does not matter which sugar is eaten. Far more important to proper nutrition is what accompanies the sugar. Table sugar provides only Calories, while a piece of fruit contains fruit sugar as well as vitamins, minerals, and fiber for a more complete diet.


Many errors in metabolism occur because genes do not carry the proper information. The result may be an enzyme that, although critical to a biochemical pathway, does not function properly or is missing altogether. One disorder of carbohydrate metabolism is called galactosemia. Mother’s milk contains lactose, which is normally broken down into galactose and glucose. With galactosemia, the cells take in the galactose but are unable to convert it to glucose because of the lack of an enzyme. Thus, galactose levels build up in the blood, liver, and other organs, impairing their function. This condition can lead to death in an infant, but galactosemia is usually detected in time, and the diet is modified by use of a milk substitute.


Amino acid metabolism can also be defective, leading to the accumulation of toxic by-products. One of the best-known examples involves the amino acid phenylalanine. About one in ten thousand infants is born with a defective pathway, a disorder called phenylketonuria (PKU). If PKU is not discovered in time, by-products can accumulate, causing poor brain development and severe mental retardation. PKU must be diagnosed early in life, and a special, controlled diet must be given to the infant. Because phenylalanine is an essential amino acid, limited amounts are included in the diet for proper growth, but large amounts need to be excluded. The artificial sweetener aspartame (NutraSweet) poses a problem for those with PKU. Aspartame is composed of two amino acids, phenylalanine and aspartic acid. When aspartame is broken down during digestion, phenylalanine enters the bloodstream. Individuals with PKU should not ingest aspartame; there is a warning to that effect on products containing this chemical.


Other errors of metabolism are noted later in life. One example is lactose intolerance. Normally, lactose is cleaved into glucose and galactose by the enzyme lactase, which is found in the digestive tract and is very active in suckling infants; however, only northern Europeans and members of some African tribes retain lactase activity into adulthood. Other groups, such as Asian, Arab, Jewish, Indian, and Mediterranean peoples, show little lactase activity as adults. These people cannot digest the lactose in milk products, which then cannot be absorbed in the intestinal tract. A buildup of lactose can lead to diarrhea and colic. Usually, in those parts of the world, milk is not used by adults as food. In the United States, one can purchase milk in which the lactose has been partially broken down into galactose and glucose. One can also purchase the lactase enzyme itself and add it to milk.


Another error of metabolism results in diabetes mellitus, which means “excessive excretion of sweet urine.” A telltale sign of this condition is sugar, specifically glucose, in the urine. In normal people, blood glucose levels remain relatively stable. After a meal, when the blood glucose levels rise, the pancreas starts to secrete the hormone insulin. Insulin causes the cells to take in the extra blood glucose and convert it to glycogen or fat, thus storing the extra energy. In diabetics, there is little or no insulin production or release, or the target cells may have faulty receptors. As a result, the blood glucose level remains high. The excess glucose is then excreted in the urine, leading to the symptom of excess thirst. The body is forced to rely much more heavily on fats as an energy source, leading to high levels of circulating fats and cholesterol in the blood. These substances can be deposited in the blood vessels, causing high blood pressure and heart disease. Excess fats, in levels that exceed the body’s
ability to metabolize and burn them, may produce acetone, which gives the breath of diabetics a sweet odor. A buildup of acetone can lead to ketoacidosis, a pathologic condition in which the blood pH drops from 7.4 to 6.8. Complications arising from diabetes also include blindness, kidney disease, and nerve damage. Furthermore, resulting peripheral vascular disease, in which the body’s extremities do not get enough blood, leads to tissue death and gangrene.




Perspective and Prospects

The study of food biochemistry has evolved over the years from a strictly biochemical approach to one in which diet and nutrition play a major role. Understanding diet and nutrition required vital information about the metabolic processes occurring in the cell, information supplied by the field of biochemistry.


This information started to become available in 1898, when Eduard Buchner discovered that the fermentation of glucose to alcohol and carbon dioxide could occur in a cell-free extract. In the early twentieth century, the glycolytic pathway and the enzymes involved in it were completely worked out. In the 1930s, other pathways of metabolism were elucidated.


Building on Buchner’s discovery, British physician Archibald Garrod hypothesized in 1909 that genes control a person’s appearance through enzymes that catalyze certain metabolic processes in the cell. Garrod thought that some inherited diseases resulted from a patient’s inability to make a particular enzyme, and he called them “inborn errors of metabolism.” One example he gave was a condition called alkaptonuria, in which the urine turns black upon exposure to the air.


Some of the earliest nutritional studies date back to the time of Aristotle, who knew that raw liver contained an ingredient that could cure night blindness. Christiaan Eijkman studied beriberi in the Dutch East Indies and traced the problem to diet. Sir Frederick Hopkins was an English biochemist who conducted pioneering work on vitamins and the essentiality of amino acids in the early twentieth century. Hopkins realized that the type of protein in the diet is as important as the quantity. He hypothesized that some trace substance, in addition to proteins, fats, and carbohydrates, may be required in the diet for growth; these trace substances were later identified as vitamins. Hopkins was the first biochemist to explore diet and metabolic function.


Working with this broad base, scientists have made tremendous advances in the study of diet and nutrition based on the biochemistry of the cell. In 1943, the first recommended daily (or dietary) allowances (RDAs) were published to provide standards for diet and good nutrition; the term now used is dietary reference intake (DRI). The DRIs suggest the amounts of protein, fats, carbohydrates, vitamins, and minerals required for adequate nutrient uptake. The major uses of the DRIs are for schools and other institutions in planning menus, obtaining food supplies, and preparing food labels.


Since the 1970s, research has consistently associated nutritional factors with six of the ten leading causes of death in the United States: high blood pressure, heart disease, cancer, cardiovascular disease, chronic liver disease, and non-insulin-dependent diabetes mellitus. This research has led to improvements in the American diet.




Bibliography


Bonci, Leslie. American Dietetic Association Guide to Better Digestion. New York: Wiley, 2003.



Campbell, Neil A., et al. Biology: Concepts and Connections. 6th ed. San Francisco: Pearson/Benjamin Cummings, 2008.



Clark, Nancy. Nancy Clark’s Sports Nutrition Guidebook. 4th ed. Champaign, Ill.: Human Kinetics, 2008.



Duyff, Roberta Larson. American Dietetic Association Complete Food and Nutrition Guide. 3d ed. Hoboken, N.J.: John Wiley & Sons, 2007.



"Malnutrition." MedlinePlus, June 14, 2011.



Margen, Sheldon. Wellness Foods A to Z: An Indispensable Guide for Health-Conscious Food Lovers. New York: Rebus, 2002.



Marlow, Amy. "Phenylketonuria." Health Library, November 26, 2012.



Nasset, Edmund S. Nutrition Handbook. 3d ed. New York: Harper & Row, 1982.



Nelson, David L., and Michael M. Cox. Lehninger Principles of Biochemistry. 5th ed. New York: W. H. Freeman, 2009.



Nieman, David C., Diane E. Butterworth, and Catherine N. Nieman. Nutrition. Rev. ed. Dubuque, Iowa: Wm. C. Brown, 1992.



Wood, Debra. "Lactose Intolerance." Health Library, May 11, 2013.

Sunday, August 23, 2009

What is viral hepatitis?


Definition

Viral hepatitis is an infection of the liver caused by a virus. Viral hepatitis leads to liver inflammation and can also lead to liver cancer. There are five types of viral hepatitis infection: A, B, C, D, and E.












Causes

Viral hepatitis is caused by infection with one of the five hepatitis viruses. Hepatitis can also be caused by toxins and by heavy drinking of alcohol, but these nonviral forms are distinct from viral hepatitis; such exposures can, however, aggravate progressive and chronic liver disease.




Risk Factors

It is possible to develop viral hepatitis with or without the common risk
factors listed here. However, the more risk factors, the greater the likelihood
that a person will develop viral hepatitis. The risk factors for hepatitis
vary, depending on the type of hepatitis.


Persons at a greater risk include infants born to women with hepatitis B
or C and children in day-care centers. Also at greater risk are
child-care workers (especially if one changes diapers or toilet-trains
toddlers), first aid and emergency workers, funeral home staff, health care
workers, dentists and dental assistants, firefighters, and police personnel.


The following behaviors are risk factors for developing hepatitis: close
contact with someone who has the disease; using household items that were used by
an infected person and were not properly cleaned; anal sex; sexual contact with
multiple partners; sexual contact with someone who has hepatitis or a
sexually
transmitted disease (STD); injecting drugs, especially with
shared needles; using intranasal cocaine; and getting a tattoo or body piercing
(because the needles may not be properly sterilized). For hepatitis A
or E, risk factors include traveling to (or spending long
periods of time in) a country where hepatitis A or E is common or where there is
poor sanitation.


Health conditions and procedures that increase the risk of hepatitis include
hemophilia or other disorders of blood clotting, kidney
disease requiring hemodialysis, receiving a blood
transfusion, receiving multiple transfusions of blood or
blood products, receiving a solid-organ transplant, persistent elevation of
certain liver function tests (found in people with undiagnosed liver problems),
and having an STD.




Screening and Diagnosis

The purpose of screening is early diagnosis and treatment. Screening tests are usually administered to people without current symptoms but who may be at high risk for certain diseases or conditions.


The Centers
for Disease Control and Prevention recommends screening for
hepatitis in pregnant women at their first prenatal visit and in people at high
risk for the disease. Screening for hepatitis is a method of finding out if a
person has hepatitis before he or she begins to have symptoms. Screening involves
assessing the person’s medical history and behaviors that may increase or decrease
the risk of hepatitis and undergoing tests to identify early signs of hepatitis,
including blood tests for hepatitis antigens and antibodies.




Treatment and Therapy

Treatment for hepatitis involves behavioral changes, medications, and alternative and complementary therapies. There are no surgical procedures to treat viral hepatitis.




Prevention and Outcomes

Hepatitis is a contagious disease that is preventable. Basic preventive principles include avoiding contact with other people’s blood or bodily fluids and practicing good sanitation. In addition, vaccines are available to prevent some types of hepatitis. They are given to people at high risk of contracting the disease.


Infected blood and bodily fluids can spread hepatitis. To avoid contact, one should avoid sharing drug needles, avoid sex with partners who have hepatitis or other STDs, practice safer sex (such as using latex condoms) or abstain from sex, limit one’s number of sexual partners, avoid sharing personal hygiene products (such as toothbrushes and razors), and avoid handling items that may be contaminated with hepatitis-infected blood. Also, one should donate his or her own blood before elective surgery so it can be used if a blood transfusion is necessary.


Health care professionals should always follow routine barrier precautions and safely handle needles and other sharp instruments and dispose of them properly. One should wear gloves when touching or cleaning up bodily fluids on personal items, such as bandages, tampons, sanitary pads, diapers, and linens and towels. One should cover open cuts or wounds and use only sterile needles for drug injections, blood draws, ear piercing, and tattooing.


Women who are pregnant should have a blood test for hepatitis B. Infants born to women with hepatitis B should be treated within twelve hours of birth.


When traveling to countries where the risk of hepatitis is higher, one should follow proper precautions, such as drinking bottled water only, avoiding ice cubes, and avoiding certain foods, such as shellfish, unpasteurized milk products, and fresh fruits and vegetables. Good sanitation, too, can prevent the transmission of some forms of hepatitis.


Vaccines are available for hepatitis A and B. Hepatitis A vaccine is recommended for all children age twelve months and older. The following people also should be vaccinated: persons traveling to areas where hepatitis A is prevalent, persons who engage in anal sex, drug users, people with chronic liver disease or blood-clotting disorders (such as hemophilia), children who live in areas where hepatitis A is prevalent, and people who will have close contact with an adopted child from a medium- or high-risk area. Hepatitis B vaccine is recommended for all children and for adults who are at risk.


An immunoglobulin injection, if recommended, is available for hepatitis A and B. Immunoglobulin contains antibodies that help provide protection. This shot is usually given before exposure to the virus or as soon as possible after exposure to the virus.




Bibliography


Boyer, Thomas D., Teresa L. Wright, and Michael P. Manns, eds. Zakim and Boyer’s Hepatology: A Textbook of Liver Disease. 5th ed. Philadelphia: Saunders/Elsevier, 2006. A thorough compendium on most aspects of liver disease. The section on hepatitis contains a complete clinical description of the disease and of the biology of the hepatitis viruses.



Feldman, Mark, Lawrence S. Friedman, and Lawrence J. Brandt, eds. Sleisenger and Fordtran’s Gastrointestinal and Liver Disease: Pathophysiology, Diagnosis, Management. New ed. 2 vols. Philadelphia: Saunders/Elsevier, 2010. A clinical text that covers basic liver anatomy, disorders and diseases of the liver (including hepatitis), and related topics.



Humes, H. David, et al., eds. Kelley’s Textbook of Internal Medicine. 4th ed. Philadelphia: Lippincott Williams & Wilkins, 2000. A medical textbook that contains an extensive section on liver diseases, including a concise description of viral hepatitis. The discussion of hepatitis viruses is thorough and clear.



Plotkin, Stanley A., and Walter A. Orenstein, eds. Vaccines. 5th ed. Philadelphia: Saunders/Elsevier, 2008. An excellent description of the role of vaccines in the prevention of disease. The book begins with a history of immunization practices. Chapters deal with specific diseases and the role and history of vaccine production in its prevention.



Specter, Steven, ed. Viral Hepatitis: Diagnosis, Therapy, and Prevention. Totowa, N.J.: Humana Press, 1999. This clearly written and readable review of viral hepatitis provides useful information for family physicians and general readers. Each chapter is divided into sections for quick access to desired information.

How does the choice of details set the tone of the sermon?

Edwards is remembered for his choice of details, particularly in this classic sermon. His goal was not to tell people about his beliefs; he ...