Tuesday, December 30, 2014

What is an essential nutrient?



Nutrients are the substances an organism requires for survival. When an organism is denied the nutrients it needs, the health of that organism will suffer. Essential nutrients are the nutrients that an organism cannot synthesize on its own or cannot produce in sufficient quantities. An organism must obtain these essential nutrients from outside sources. If an organism is deprived of its essential nutrients it cannot function properly and will ultimately die.




The specific essential nutrients vary from organism to organism. The human body requires six types of essential nutrients to maintain its health: water, proteins, carbohydrates, fats, vitamins, and minerals. A lack of any one of these will cause a person to become ill due to malnutrition, and a severe deficiency can result in death. Because of this, it is important to have an understanding of the sources of essential nutrients to maintain good health.




Background

In developed nations, adequate levels of essential nutrients can be easily acquired through diet throughout the year. Potable water and reasonably priced sources of the macronutrients—proteins, fats, and carbohydrates—are readily available.


Fats in the form of oil, butter, and foods such as avocado, fish, and nuts are typically close at hand. Fats include the essential fatty acids: alpha-linolenic acid, an omega-3 fatty acid that can be found in soy, rapeseed, and flax; and linoleic acid, an omega-6 fatty acid that can be found in a variety of vegetable oils. Fats are a significant source of energy and can increase the body’s absorption of fat-soluble vitamins.


Proteins are made of chains of amino acids and can be found in relatively inexpensive and readily available foods such as eggs, nuts, meats, and soy protein. Of the twenty-one amino acids that make up proteins, nine are essential in human nutrition and cannot be synthesized. Proteins are a vital component of all cells.


Carbohydrates, the third class of macronutrients, are a vital source of energy for the human body. Complex carbohydrates can be obtained from foods such as cereals, breads, and pasta, while simple carbohydrates include a variety of sugars.


The essential micronutrients include vitamins and minerals. The thirteen essential vitamins are vitamin A, the eight B vitamins (thiamine, riboflavin, niacin, pantothenic acid, biotin, vitamin B-6, vitamin B-12, and folate), vitamin C, vitamin D, vitamin E, and vitamin K. Vitamins perform a variety of essential functions in the body and can be found in dark-colored fruits, dark leafy vegetables, egg yolks, fortified dairy products, whole grains, lentils, beans, fish, liver, and beef. Eating a varied diet is the best way to ensure one receives all of the essential vitamins.


Minerals that are essential to human nutrition include sodium, potassium, calcium, magnesium, and iodine. Adequate levels of sodium and potassium help to maintain cell function and balance fluid volumes, while calcium is critical to bone health.


Where there are adequate supplies of foods containing essential nutrients, a number of diseases attributable to nutritional deficiencies can be prevented with little or no effort. All that is required is to eat a wide variety of foods or foods that are rich in the associated nutrients. Enriched foods and daily vitamins provide many essential nutrients. Many breakfast cereals and breads, as well as milk and salt, are enriched or fortified. It is important to check food labels to see which essential nutrients these foods provide and to seek other sources for those they lack.




Impact

In some parts of the world, many people do not have access to the foods they require for a sufficient quantity of essential nutrients. When this is the case, people will be undernourished, which leads to various health problems and reduced resistance to illness. Children who do not receive adequate nutrition will not reach their full potential in terms of physical or cognitive development. Adults who do not receive adequate nutrition are unlikely to have long life expectancies. Infant mortality rates will also be high, due to low birth weights and the effects of nutritional deficiencies; for example, folic acid deficiency is the principal cause of neural tube defects, which are potentially fatal birth defects.


There are many conditions and diseases that result from a lack of essential nutrients. While these are nearly unheard of in developed nations, they are serious ailments in developing nations. Some of the better-known are rickets, beriberi, pellagra, scurvy, anemia, and goiter. Rickets results from a lack of vitamin D and causes bowed legs, weak bones, and dental deformities. Beriberi is due to insufficient thiamine intake and causes impaired muscle coordination and damage to the nerves, resulting in loss of feeling in the extremities and confusion. Pellagra results from insufficient niacin and causes dementia. Scurvy is the result of a lack of vitamin C and causes internal bleeding and severe problems with the teeth and gums. Insufficient quantities of iron, vitamin B-6, or vitamin B-12 result in anemia, which leads to extreme fatigue and weakness. Goiter can arise from insufficient iodine; however, goiter due to iodine deficiency is rare in developed countries because of the use of iodized salt. Symptoms of this disease include an enlarged thyroid gland and difficulty breathing. Each of these diseases can be prevented or reversed by adequate intake of essential nutrients.


The principal cause of nutritional deficiencies worldwide is poverty and limited access to a varied and healthy diet; however, fad diets, drug interactions, alcoholism, genetic abnormalities, or gastrointestinal problems can also lead to inadequate levels of essential nutrients in the body. Whatever the cause of nutritional deficiencies, they take a significant toll on those affected, as well as their countries and communities. The human body cannot synthesize these nutrients; without access to sources of these nutrients, it is not possible for people to maintain good health.




Bibliography


Escott-Stump, Sylvia. Nutrition and Diagnosis-Related Care. Philadelphia: Lippincott, 2008. Print.



Insel, Paul, et al. Nutrition. 5th ed. Sudbury: Jones, 2013. Print.



Langley-Evans, Simon. Nutrition: A Lifespan Approach. Chichester: Wiley, 2009. Print.



Mann, Jim, and A. S. Truswell. Essentials of Human Nutrition. Oxford: Oxford UP, 2012. Print.



Porter, Robert S., ed. "Nutritional Disorders." The Merck Manual. 19th ed. West Point: Merck, 2011. Print.



Smolin, Lori A., and Mary B. Grosvenor. Nutrition: Science and Applications. 3rd ed. Hoboken: Wiley, 2013. Print.



Thompson, Janice, and Melinda Manore. Nutrition: An Applied Approach. 4th ed. San Francisco: Cummings, 2012. Print.



"Vitamins and Minerals." Centers for Disease Control and Prevention. Centers for Disease Control and Prevention, 23 Feb. 2011. Web. 15 Dec. 2014.

What is human resource training and development?


Introduction

The term “human resources” implies that human abilities and potential, such as aptitudes, knowledge, and skills, are as important to a company’s survival as are monetary and natural resources. To help employees perform their jobs as well as they can, companies develop training and development programs.








Most employees must go through some form of training program. Some programs are designed for newly hired or recently promoted employees who need training to perform their jobs. Other programs are designed to help employees improve their performance in their existing jobs. Although the terms are used interchangeably in this discussion, the former type of program is often referred to as a “training program” and the latter as a “development program.”


There are three phases to a training or development program. During the first phase, managers determine training needs. One of the best ways to determine these needs is with job analysis. Job analysis is a process that details the exact nature and sequencing of the tasks that make up a job. Job analysis also determines performance standards for each task and specifies the corresponding knowledge, skills, and aptitudes (potential) required to meet these standards. Ideally, job analysis is used as the basis for recruiting and selecting employees. Managers like to hire employees who already have the ability to perform the job; however, most employees enter an organization with strong aptitudes but only general knowledge and skills. Consequently, during the second phase of training, a method of training is designed that will turn aptitudes into specific forms of task-related knowledge and skills.


A long history of training and educational research suggests a number of guidelines for designing effective training programs. First, training is most effective if employees have strong intellectual potential and are highly motivated to learn. Second, trainees should be given active participation in training, including the opportunity to practice the skills learned in training. Practice will usually be most effective if workers are given frequent, short practice sessions (a method called distributed practice) rather than infrequent, long practice sessions (called massed practice). Third, trainees should be given continuous feedback concerning their performance. Feedback allows the trainee to monitor and adjust performance to meet training and personal standards.


One of the greatest concerns for trainers is to make certain that skills developed in training will transfer to the job. Problems with transfer vary greatly with the type of training program. In general, transfer of training will be facilitated if the content of the training program is concrete and behavioral, rather than abstract and theoretical. In addition, transfer is improved if the training environment is similar to the job environment. For example, a manager listening to a lecture on leadership at a local community college will have more difficulty transferring the skills learned in the classroom than will a mechanic receiving individual instruction and on-the-job training.


Once training needs have been analyzed and a training program has been implemented, the effectiveness of the training program must be measured. During the third phase of training, managers attempt to determine the degree to which employees have acquired the knowledge and skills presented in the training program. Some form of testing usually serves this goal. In addition, managers attempt to measure the degree to which training has influenced productivity. To do this, managers must have a performance evaluation program in place. Like the selection system and the training program, the performance evaluation system should be based on job analysis. Ideally, a third goal of the evaluation phase of training should be to examine whether the benefits of training, in terms of productivity and job satisfaction, warrant the cost of training. A common problem with training programs is that managers do not check the effectiveness of programs.


Training and development are integral parts of a larger human resource system that includes selection, performance evaluation, and promotion. Because employee retention and promotion can be considerably influenced by training, training and development programs in the United States are subject to equal employment opportunity (EEO) legislation. This legislation ensures that the criteria used to select employees for training programs, as well as the criteria used to evaluate employees once in training programs, are related to performance on the job. When managers fail to examine the effectiveness of their training programs, they cannot tell whether they are complying with EEO legislation. EEO legislation also ensures that if minority group members do not perform as well as majority group members in training, minorities must be given the opportunity for additional training or a longer training period. Minorities are given the additional time based on the assumption that their life experiences may not have provided them with the opportunity to develop the basic skills that would, in turn, allow them to acquire the training material as fast as majority group members.




On-the-Job Training

The most common form of training is on-the-job training, in which newly hired employees are put to work immediately and are given instruction from an experienced worker or a supervisor. On-the-job training is popular because it is inexpensive and the transfer of training is excellent. This type of training program is most successful for simple jobs not requiring high levels of knowledge and skill. On-the-job training is often used for food service, clerical, janitorial, assembly, and retail sales jobs. Problems with on-the-job training arise when formal training programs are not established and the individuals chosen to act as trainers are either uninterested in training or are unskilled in training techniques. A potential drawback of on-the-job training is that untrained workers are slow and tend to make mistakes.


An apprenticeship is a form of long-term training in which an employee often receives both on-the-job training and classroom instruction. Apprenticeships are one of the oldest forms of training and are typically used in unionized skilled trades such as masonry, painting, and plumbing. Apprenticeships last between two and five years, depending on the trade. During this time, the apprentice works under the supervision of a skilled worker, or “journeyman.” Once a worker completes the training, he or she may join a trade union and thereby secure a position in the company. Apprenticeships are excellent programs for training employees to perform highly complex jobs. Apprenticeships offer all the benefits of on-the-job training and reduce the likelihood that training will be carried out in a haphazard fashion. Critics of apprenticeship programs, however, claim that some apprenticeships are artificially long and are used to keep employee wages low.




Simulation Training

Although on-the-job training and apprenticeship programs allow employers to use trainees immediately, some jobs require employees to obtain considerable skill before they can perform the job. For example, it would be unwise to allow an airline pilot to begin training by piloting an airplane filled with passengers. Where employees are required to perform tasks requiring high levels of skill, and the costs of mistakes are very high, simulator training is often used.


In simulator training, a working model or reproduction of the work environment is created. Trainees are allowed to learn and practice skills on the simulator before they start their actual jobs. Simulators have been created for jobs as varied as those of pilots, mechanics, police officers, nuclear power plant operators, and nurses. The advantage of simulator training is that trainees can learn at a comfortable pace. Further, training on simulators is less expensive than training in the actual work environment. For example, flight simulator training can be done for a fraction of the cost of operating a plane. An additional benefit of simulator training is that simulators can be used to train employees to respond to unusual or emergency situations with virtually no cost to the company for employee errors. A potential disadvantage of simulator training is the high cost of developing and maintaining a simulator.


These simulator training programs are used for technically oriented jobs held by nonmanagerial employees. Simulator training can also be used for managers. Two popular managerial simulations are in-basket exercises and business games. Here, managers are put in a hypothetical business setting and asked to respond as they would on the job. The simulation may last a number of days and involve letter and memo writing, telephone calls, scheduling, budgeting, purchases, and meetings.


Interpersonal skills training programs teach employees how to be effective leaders and productive group members. These programs are based on the assumption that an employee can learn how to be a good group participant or a good leader by learning specific behaviors. Many of the interpersonal skills programs involve modeling and role-playing. For example, videotapes of managerial scenarios are used to demonstrate techniques a manager might use to encourage an employee. After the manager has seen the model, he or she might play the role of the encouraging manager and thus be given an opportunity to practice leader behaviors. An advantage of role-playing is that people get the opportunity to see the world from the perspective of the individual who normally fills the role. Consequently, role-playing is a useful tool in helping members of a group in conflict. Role-playing allows group members to see the world from the perspective of the adversary.




Programmed Instruction


Programmed instruction is a self-instructed and self-paced training method. Training material is printed in a workbook and presented in small units or chapters. A self-administered test follows each unit and provides the trainee with feedback concerning how well the material has been learned. If the trainee fails the test, he or she rereads the material. If the trainee passes the test, he or she moves on to the next unit. Each successive unit is more difficult.
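The unit-test-feedback cycle described above can be sketched as a simple loop. The following Python sketch is illustrative only; the unit titles, scoring scheme, and pass marks are hypothetical placeholders rather than part of any actual training program.

```python
# A minimal sketch of the programmed-instruction cycle: present a unit,
# self-test, reread on failure, advance on success. All unit titles and
# pass marks below are hypothetical placeholders.

units = [
    {"title": "Unit 1: Safety basics", "pass_mark": 4},
    {"title": "Unit 2: Blueprint reading", "pass_mark": 4},
    {"title": "Unit 3: Organizational policies", "pass_mark": 4},
]

def run_program(units, get_score):
    """Work through units in order; a failed test sends the trainee back to reread."""
    for unit in units:
        attempt = 0
        while True:
            attempt += 1
            score = get_score(unit, attempt)   # self-administered test out of 5
            if score >= unit["pass_mark"]:
                print(f"Passed {unit['title']} on attempt {attempt}")
                break                          # immediate feedback: move on
            print(f"Failed {unit['title']} (score {score}); rereading the material")

# Example: a trainee who fails each unit once, then passes on the retry.
run_program(units, get_score=lambda unit, attempt: 3 if attempt == 1 else 5)
```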


Programmed instruction has been used for such topics as safety training, blueprint reading, organizational policies, and sales skills. The advantage of programmed instruction is that trainees proceed at their own pace. Further, because training and tests are self-administered, employees do not feel much evaluation pressure. In addition, when units are short and tests are frequent, learners get immediate feedback concerning their performance.


Computers have increasingly replaced the function of the workbook. Computer-assisted instruction is useful because the computer can monitor the trainee’s performance and provide more information in areas where the trainee is having trouble. A potential drawback of programmed instruction is that employees may react negatively to the impersonal nature of the training. Further, if employees are not committed to the program, they may find it easier to cheat.




The Need for Ongoing Training

Over the last two hundred years, there have been dramatic changes in both the nature of jobs and the composition of the workforce. Consequently, there have also been dramatic changes in the scope and importance of training. The history of formal employment training dates back thousands of years. Training programs were essential for jobs in the military, church, and skilled trades. Prior to the Industrial Revolution, however, only a small percentage of the population had jobs that required formal training. Training for the masses is a relatively new concept. At the beginning of the Industrial Revolution, the vast majority of workers lived in rural areas and worked on small farms. Training was simple and took place within the family. During the Industrial Revolution, the population started to migrate to the cities, seeking jobs in factories. Employers became responsible for training. Although early factory work was often grueling, the jobs themselves were relatively easy to learn. In fact, jobs required so little training that children were often employed as factory workers.


Since the Industrial Revolution, manufacturing processes have become increasingly technical and complex. Now, many jobs in manufacturing require not only lengthy on-the-job training but also a college degree. In addition, technology is changing at an ever-increasing pace. This means that employees must spend considerable time updating their knowledge and skills.


Just as manufacturing has become more complex, so has the process of managing an organization. Alfred Chandler, a business historian, suggests that one of the most important changes since the Industrial Revolution has been the rise of the professional manager. Chandler suggests that management used to be performed by company owners, and managerial skills were specific to each company. Today, managers work for company owners and are trained in universities. Because management functions are so similar across organizations, managers can take their skills to a wide variety of companies and industries.


In contrast to the increasingly technical nature of jobs, there has been an alarming increase in the number of illiterate and poorly trained entrants into the workforce. There has also been an increase in the number of job applicants in the United States who do not speak English. In response to these problems, many companies have begun to provide remedial training in reading, writing, and mathematics. Companies are thus taking the role of public schools by providing basic education. Training and development programs will continue to be essential to organizational survival. As the managerial and technological worlds become more complex, and as the number of highly skilled entrants into the workforce declines, companies will need to focus on both remedial training for new employees and updating the knowledge and skills of older employees. The use of the Internet for distance-learning training programs is expected to increase, offering opportunities for people in remote locations who traditionally have not had access to local training resources.




Bibliography


Bandura, Albert. Social Learning Theory. Englewood Cliffs: Prentice, 1977. Print.



Biech, Elaine. Developing Talent for Organizational Results: Training Tools from the Best in the Field. San Francisco: Pfeiffer, 2012. Print.



Craig, Robert L., ed. The ASTD Training and Development Handbook: A Guide to Human Resource Development. 4th ed. New York: McGraw-Hill, 1996. Print.



Landy, Frank J., and Don A. Trumbo. “Personnel Training and Development: Concepts, Models, and Techniques.” Psychology of Work Behavior. New York: McGraw-Hill, 2003. Print.



Latham, Gary P. “Human Resource Training and Development.” Annual Review of Psychology. Vol. 39. Stanford: Annual Reviews, 1988. Print.



Moskowitz, Michael. A Practical Guide to Training and Development: Assess, Design, Deliver, and Evaluate. San Francisco: Pfeiffer, 2008. Print.



Noe, Raymond. Employee Training and Development. 4th ed. New York: McGraw, 2008. Print.



Sauser, William I., and Ronald R. Sims. Managing Human Resources for the Millennial Generation. Charlotte: Information Age, 2012. Print.



Wexley, K. N., and Gary P. Latham. Developing and Training Human Resources in Organizations. 3d ed. Upper Saddle River: Prentice, 2002. Print.



Wilson, John P. International Human Resource Development: Learning, Education and Training for Individuals and Organizations. 3d ed. London: Kogan. Print.

Monday, December 29, 2014

What is Streptococcus?


Definition


Streptococcus is a genus of gram-positive cocci: bacteria with a thick peptidoglycan layer in their cell walls (gram-positive) that appear under the microscope as chains of two or more spherical cells (cocci). The streptococci can grow in low concentrations of oxygen or without oxygen and are distinguished in the laboratory from bacteria of the genus Staphylococcus by the catalase test: staph species produce the enzyme catalase, whereas streptococci do not.




The streptococci are classified in a number of ways, including by the identity of molecules on the cell surface and by the presence and variety of hemolysins, enzymes that lyse red blood cells. Alpha-hemolytic species partially lyse red blood cells and oxidize hemoglobin, leaving an opaque green residue on blood agar petri dishes. Beta-hemolytic species lyse red blood cells completely, leaving a transparent halo around colonies on blood agar petri dishes. Gamma-hemolytic species exhibit neither of these traits.




Natural Habitat and Features

Streptococci are a part of the normal microbiota of humans and other mammals. Some streptococci can cause infectious diseases. The progression from latency to infectious disease is not well understood, but scientists have investigated the possibility that virus-induced genetic changes in streptococcal species are responsible for the sudden appearance of “flesh-eating” disease, or necrotizing fasciitis.




Pathogenicity and Clinical Significance


S. pyogenes is a notable member of the beta-hemolytic streptococci. It is an opportunistic pathogen widely distributed in humans. It causes acute bacterial pharyngitis, commonly known as strep throat, and infections of the skin and circulatory system. These bacteria are able to evade the human immune system through various means: the cells are covered in hyaluronic acid, which is a component of human connective tissue and, therefore, non-immunogenic, and a series of proteins, the M proteins, prevents the engulfment of bacterial cells by immune cells. Two toxins that destroy immune cells also cause beta-hemolysis of red blood cells.



S. pyogenes strains that produce erythrogenic exotoxins, the extracellular proteins responsible for the scarlet fever rash, may produce one of three varieties. Exposure to one variety does not induce immunity to the others, so a person may have recurring infection. These toxins are not encoded on the bacterial chromosome but are carried by lysogenic bacteriophages. Prompt antibiotic therapy of strep throat has reduced the incidence of scarlet fever.


Acute rheumatic fever and acute glomerulonephritis are also consequences of untreated strep throat. The symptoms of rheumatic fever occur about three to four weeks after strep throat and include pain in the joints and long-term damage to the heart, likely because of an autoimmune response. Glomerulonephritis is inflammation of the kidneys following strep throat or a streptococcal skin infection.


Streptococcal impetigo is a localized skin infection caused by S. pyogenes. Erysipelas is an acute infection of the skin accompanied by fever. Strains that express the enzyme streptokinase may dissolve a blood clot to penetrate to deeper tissue. Infection of deep muscle and fat tissue, the lungs, and blood can be life-threatening. Necrotizing fasciitis (infection of muscle and fat tissue) kills about 20 percent of infected persons, and streptococcal toxic shock syndrome (an infection causing low blood pressure, shock, and injury to the kidneys, liver, and lungs) kills up to 60 percent of infected persons.


Another beta-hemolytic species, S. agalactiae, is the major cause of meningitis, pneumonia, and infections of the bloodstream in newborns. The female genital tract is the natural habitat of S. agalactiae, with 25 to 35 percent of the female population being carriers. Newborns are infected at birth or during their stay in a hospital nursery. Antibiotic therapy of pregnant women who carry S. agalactiae prevents transmission to their infants at birth.


Most human cases of bacterial pneumonia are caused by the alpha-hemolytic S. pneumoniae. This species grows as pairs of cocci coated in a thick carbohydrate capsule. Colonies on blood agar petri dishes are surrounded by a zone of greenish (alpha) hemolysis and have a mucoid appearance. S. pneumoniae is an inhabitant of the upper respiratory tract of up to 70 percent of the population. In immunocompromised hosts, such as many elderly persons, and in those with a viral infection, this species causes pneumonia. The infection in the lungs results in fluid retention and difficulty in breathing. Recovery follows after five to six days, even without antibiotic treatment; an increase in circulating antibodies accompanies a decrease in the severity of the symptoms. Penicillin or erythromycin hastens recovery, although a few persons with pneumococcal pneumonia, primarily the elderly, die even though they are treated with antibiotics.


S. pneumoniae had also been the second leading cause of bacterial meningitis; this changed when a glycoconjugate vaccine was added to the infant immunization schedule in the United States and other countries.



S. mutans, S. mitis, and S. sanguinis are alpha-hemolytic streptococci that are normal inhabitants of the human mouth. S. mutans and S. mitis are found in dental plaque. S. mutans produces dextran from sucrose. Dextran is the sticky component of dental plaque that allows many bacterial species to stick to tooth surfaces. When bacteria grow on the teeth, they produce acid that contributes to the formation of cavities. Non-hemolytic species, also called gamma-hemolytic streptococci, are not human pathogens.




Drug Susceptibility


Streptococcal infections are treated primarily with antibiotics. Widespread resistance has not occurred to the extent that it has in the staphylococcal species, although multidrug-resistant strains have been documented. The treatment of strep throat has become more difficult because of antibiotic resistance of non-strep bacteria in the throat. Other bacteria can destroy penicillin and other beta-lactam antibiotics and, thus, shield sensitive S. pyogenes from their effect.




Bibliography


Brachman, Philip S., and Elias Abrutyn, eds. Bacterial Infections of Humans: Epidemiology and Control. 4th ed. New York: Springer, 2009. A college-level introduction that focuses on the mechanisms of pathogenicity.



Parker, James N., and Philip M. Parker, eds. The Official Patient’s Sourcebook on “Streptococcus pneumoniae” Infections. San Diego, Calif.: Icon Health, 2002. Draws from public, academic, government, and peer-reviewed research to provide a wide-ranging handbook for patients with pneumonia infections.



Tortora, Gerard J., Berdell R. Funke, and Christine L. Case. Microbiology: An Introduction. 10th ed. San Francisco: Benjamin Cummings, 2010. A great reference for those interested in exploring the microbial world. Provides readers with an appreciation of the pathogenicity and usefulness of microorganisms.

In J.R.R. Tolkien's The Hobbit, what did Thorin promise Bard in exchange for the Arkenstone?

In J.R.R. Tolkien's The Hobbit, Thorin grudgingly promises to give Bard a "one fourteenth share of the hoard in silver and gold" (262)—Bilbo's original allotment of treasure—in exchange for the precious Arkenstone, the Heart of the Mountain. Bard acquired the Arkenstone through covert means: Bilbo secreted it away from the Lonely Mountain in the dead of night and gave it to the Lake-men. Bilbo did so not out of spite for the dwarves, but to force Thorin, who could not bear to be parted from the beloved gem, to share some of the treasure with the beleaguered citizens of Lake-town. By doing so, Bilbo hoped to avoid a likely war, but, unfortunately for the well-meaning hobbit, a battle erupts anyway when the loathsome goblins arrive and force the dwarves, men, and elves to unite against them. This scene, in which an incensed Thorin condemns Bilbo and even threatens to kill him, is a poignant and frightening illustration of greed, as former friends are turned against one another in a bitter quarrel over treasure.

What is glutamine?




Cancers treated or prevented: All cancers currently treated with chemotherapy or radiation





Delivery routes: Glutamine can be taken orally by capsules, powder, or tablets. In the clinical setting, glutamine can be part of an enteral liquid formula given by feeding tube through the nose, stomach, or small intestine. Glutamine can also be given intravenously.



How this compound works: Although glutamine is found largely as a component of proteins (skeletal muscle in particular), it serves a variety of functions in the body. Stress conditions such as injury, burns, critical illness, or high-intensity exercise cause a greatly increased need for glutamine. Under these stress conditions glutamine is considered a conditionally essential amino acid, because the body cannot synthesize enough to meet demand and dietary supplementation becomes necessary. The gastrointestinal tract is the largest user of glutamine, particularly as a source of energy. Glutamine is important in wound healing and helps to mobilize components of the immune system. It also helps to maintain the integrity of the intestinal lining to prevent entry of bacteria and fungi.


Cancer cells have a great demand for glutamine as an energy source; this can deplete glutamine stores in muscle and other body tissues. Laboratory studies indicate that glutamine is necessary for the functioning of T lymphocytes and natural killer cells, which are components of the immune system. The depletion of body glutamine, therefore, could compromise the role of the immune system in the protection against cancer. Some researchers were concerned about supplementing cancer patients with glutamine, thinking that supplementation could increase tumor growth. Such supplementation has been found, however, to increase glutamine stores in the body and to improve intestinal and immune function.


In addition, studies have indicated that glutamine may alleviate the side effects of chemotherapy and radiation therapy. Glutamine supplementation has resulted in decreased intestinal mucosa ulceration and mouth inflammation. Peripheral neuropathy (numbness in extremities, motor weakness) often limits chemotherapeutic dosages. Glutamine may reduce the severity of neurological disorders, thereby permitting more effective dosages. Researchers believe that glutamine may work by restoring cellular levels of glutathione, a molecule that contains a sulfur group which binds to drugs and carcinogens. Glutamine supplementation increases the glutathione level in the body, thereby helping to reduce toxic drug levels. Glutamine supplementation has been shown to increase the accumulation of the chemotherapeutic drug methotrexate inside tumor cells, thereby increasing its killing effect.



Side effects: Since glutamine is so abundant in the body, even doses of up to 21 grams daily are well tolerated. Side effects are mainly gastrointestinal and include constipation and bloating.




Bibliography


Farkas, Etelka, and Maxim Ryadnov. Amino Acids, Peptides and Proteins. Cambridge: Royal Society of Chemistry, 2014. Digital file.



Gaurav, Kumar, et al. "Glutamine: A Novel Approach to Chemotherapy Induced Toxicity." Indian Journal of Medical and Paediatric Oncology 33.1 (2012): 13–20. Print.



Hensley, Christopher T., Ajla T. Wasti, and Ralph J. DeBerardinis. "Glutamine and Cancer: Cell Biology, Physiology, and Clinical Opportunities." Journal of Clinical Investigation 123.9 (2013): 3678–84. Print.



Topkan, Erkan, et al. "Influence of Oral Glutamine Supplementation on Survival Outcomes of Patients Treated with Concurrent Chemoradiotherapy for Locally Advanced Non-Small Cell Lung Cancer." BMC Cancer 12.1 (2012): 502–11. Print.



Yang, Lifeng, et al. "Metabolic Shifts toward Glutamine Regulate Tumor Growth, Invasion and Bioenergetics in Ovarian Cancer." Molecular Systems Biology 10.5 (2014). Print.

What is the importance of alleles in humans and other organisms?

Alleles are alternative forms of the genes for specific traits; they can be thought of as variations of a particular gene. When an organism has two identical alleles for a trait, it is said to be homozygous for that trait; when it has two different alleles, it is said to be heterozygous. For example, a person who has two alleles for brown eyes is homozygous for the trait of eye color.


All of the alleles in an organism make up its genome. An organism's genotype is its specific set of genes, while its phenotype is the set of all its observable traits. Due to mutation and natural selection, many loci along the DNA have multiple alleles. More alleles may lead to a greater variety of traits in offspring. This is particularly important because habitats on Earth are constantly changing; if a species' genes remained the same, they might not provide a selective advantage in a new environment, and the species would die out.


The alleles an organism possesses may or may not confer an advantage for survival in a particular environment. However, when an organism creates gametes, or sex cells, through the process of meiosis, the law of segregation dictates that the two alleles of each gene separate from each other, creating a variety of sex cells. Also, due to the law of independent assortment, different gene pairs separate into gametes independently of one another, ensuring that a huge number of possible combinations of genes can be transmitted to the next generation.


Alleles may be dominant, recessive, incompletely dominant, or co-dominant. When an organism receives its gene pairs from its parents, for every trait, based on the laws of heredity, different alleles may be observed or hidden in an offspring. For instance, if a person inherits an allele to produce melanin (A) and another allele that does not allow melanin production (a), this individual (Aa) is heterozygous for the trait of melanin production but will have normal pigmentation, because allele (A) is dominant to allele (a). However, this individual is carrying the allele for albinism.
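For readers who want to see the arithmetic behind the Aa example, the short Python sketch below enumerates a Punnett square for a cross between two heterozygous (Aa) parents, assuming the simple complete dominance of A over a described above.

```python
# Punnett square for a cross of two heterozygous parents (Aa x Aa),
# assuming complete dominance of the melanin-producing allele A over a.
from collections import Counter
from itertools import product

parent1 = ("A", "a")   # heterozygous parent
parent2 = ("A", "a")   # heterozygous parent

# By the law of segregation, each parent contributes one allele per gamete.
offspring = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))

for genotype, count in sorted(offspring.items()):
    phenotype = "normal pigmentation" if "A" in genotype else "albinism"
    print(f"{genotype}: {count}/4 of offspring, {phenotype}")
# Expected output: AA 1/4, Aa 2/4, aa 1/4 -> three-quarters normally pigmented.
```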


To summarize, the alleles an organism possesses comprise its genotype, which in turn determines its phenotype; the environment also plays a role in gene expression. Alleles are important because it is their combination within an organism that may help it to survive in a particular environment; if the organism is "fit," it will reproduce and perhaps pass those adaptations down to future offspring.

Sunday, December 28, 2014

What is diastrophic dysplasia?


Risk Factors

Those whose biological parents both carry a mutant copy of the SLC26A2 gene are at risk of inheriting diastrophic dysplasia. This disease is found in all populations and occurs equally in men and women. It is particularly prevalent in Finland.







Etiology and Genetics

Diastrophic dysplasia is an autosomal recessive genetic disease caused by mutations in the SLC26A2 gene. A person must possess two copies of the mutant form of this gene to have the disorder. The SLC26A2 gene is located on the long arm of chromosome 5, in band regions 32–33. If both parents carry one mutant copy of SLC26A2, then each child has a 25 percent chance of receiving two mutant copies and inheriting diastrophic dysplasia.
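The 25 percent figure follows from each carrier parent passing on either the normal or the mutant copy with equal probability. The small Python sketch below checks this by simulation; the allele labels N (normal) and m (mutant) are illustrative stand-ins, not standard notation for SLC26A2.

```python
# Simulation of the inheritance risk for a child of two carrier parents.
# Allele labels are illustrative: "N" for the normal copy, "m" for the mutant copy.
import random

random.seed(0)
carrier = ("N", "m")   # each parent has one normal and one mutant copy of SLC26A2
trials = 100_000

affected = sum(
    1
    for _ in range(trials)
    if random.choice(carrier) == "m" and random.choice(carrier) == "m"
)
print(f"Simulated chance of inheriting two mutant copies: {affected / trials:.2%}")
# Prints a value close to 25%.
```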


The SLC26A2 gene encodes the information for the synthesis of a protein called solute carrier family 26, member 2. This protein is embedded in the cell membrane. The cell membrane is composed of phosphate-containing lipids that border the cell and delimit the cell interior from the exterior. The structure of the cell membrane prevents charged and large polar molecules from entering or exiting the cell. If the cell needs such molecules, then specific transport proteins inserted into the membrane facilitate the import or export of particular molecules. Solute carrier family 26, member 2 is a transport protein that allows the entrance of sulfate ions into cells.


Sulfate ions are essential for the production of normal cartilage. A major component of cartilage is a group of proteins called proteoglycans. Proteoglycans are proteins with long sugar chains attached to them, but these sugar molecules also have sulfate ions linked to them. The cells that produce cartilage are called chondrocytes, and if these cells do not possess a normal solute carrier family 26, member 2, then they cannot properly import sulfate ions for the synthesis of proteoglycans and they will make abnormal cartilage.


Cartilage also establishes the pattern for bone development. The long bones of the arms, legs, and other structures initially develop as long cartilage rods that are replaced later by bone (endochondral ossification). Because the cartilage precursors act as templates for the bones, if the original cartilage rods are abnormal, then the bones that replace them will also be abnormal. Chondrocytes in individuals with diastrophic dysplasia lack the ability to import sulfate ions and make normal proteoglycans, and therefore, they make structurally abnormal cartilage that is replaced by abnormal bone. The ends of the cartilage rod make the joint-specific cartilage, which is also abnormal in diastrophic dysplasia.


In developing humans, not all cartilage is used to make bone. Many structures, like joints, the voice box (larynx), external ears, and the windpipe (trachea) are made, largely, from cartilage. These structures are also abnormal in individuals with diastrophic dysplasia and often do not function properly.




Symptoms

The main characteristics of this disease include short stature and short arms and legs. The joints show permanent shortening (contractures). The feet tend to turn downward and inward (club feet). The thumbs are placed farther back on the hand (hitchhiker thumbs). The spine is abnormally curved (kyphosis). About one-third of babies with diastrophic dysplasia are born with a hole in the roof of the mouth (cleft palate). Two-thirds of newborn children also show swollen ears.




Screening and Diagnosis

Ultrasound can detect clinical features such as shortened limbs with a normal-sized skull, a small chest, hitchhiker thumbs, and joint contractures as early as twelve weeks of gestation. X-ray analysis reveals poorly developed and malformed bones. Tissue (histopathological) analysis reveals abnormal cartilage that contains too few sulfate-containing proteoglycans. Molecular genetic testing can confirm the diagnosis.




Treatment and Therapy

In children, physical therapy and casting can help joint problems, and surgery can correct club feet. In young adults, a surgical technique called arthroplasty, which replaces abnormal joints with synthetic articulations made from cobalt-chromium alloy and high-molecular-weight polyethylene, relieves pain and increases hip and knee joint mobility. Spinal surgery can correct excessive curvature of the spine. One caveat with surgical therapies is that deformities tend to recur after orthopedic surgery.




Prevention and Outcomes

Rib and windpipe abnormalities can prevent proper breathing, which increases death rates among newborns with diastrophic dysplasia. If the baby survives, then surgical corrections will probably be required to allow the child to walk and to reduce the abnormal curvature of the spine. Spinal curvature and the health of the joints should be checked annually.



Obesity tends to place too great a load on the joints and should be avoided. If they survive early childhood, then children with diastrophic dysplasia have normal intelligence and can excel in academic, social, and artistic endeavors.




Bibliography


Anbazhagan, Arunkumar, and Asha Benakappa. "Not Just Cerebral Palsy: Diastrophic Dysplasia Presenting as Spastic Quadriparesis." Journal of Pediatrics 164.6 (June 2014): 1493–94. Print.



Hudgins, Louanne, et al., eds. Signs and Symptoms of Genetic Conditions: A Handbook. New York: Oxford UP, 2014. Print.



Jones, Kenneth Lyons, Marilyn Crandall Jones, and Miguel Del Campo Casanelles. Smith's Recognizable Patterns of Human Malformation. 7th ed. Philadelphia: Elsevier, 2013. Print.



McKay, Scott D., et al. "Review of Cervical Spine Anomalies in Genetic Syndromes." Spine 37.5 (Mar. 2012): E269–77. Print.



Moore, Keith L., and T. V. N. Persaud. Before We Are Born. 7th ed. Philadelphia: Saunders, 2008. Print.



Read, Andrew, and Dian Donnai. New Clinical Genetics. Bloxham, Oxfordshire, England: Scion, 2007. Print.



Schwartz, Nancy B. “Carbohydrate Metabolism II: Special Pathways and Glycoconjugates.” In Textbook of Biochemistry with Clinical Correlations, edited by Thomas M. Devlin. 5th ed. New York: Wiley-Liss, 2002. Print.

What is race?


Conflicting Definitions of Race

Few ideas have had as contentious a history as the use of the term “race.” Racial categorization has relied on salient traits such as skin color, body form, and hair texture to classify humans into distinct subcategories. The term “race” is currently believed to have little biological meaning, in great part because of advances in genetic research. Studies have revealed that a person’s genes cannot define their ethnic heritage and that no gene exists exclusively within one race or ethnocultural group. Biomedical scientists remain divided in their opinions about “race” and how it may be used in treating human genetic conditions.








For a racial or subspecies classification scheme to be objective and biologically meaningful, researchers must decide carefully which heritable characteristics (passed to future generations genetically) will define the groups. Several principles are considered. First, the unique traits must be discrete and not continually changing by small degrees between populations. Second, everyone placed within a specific race must possess the selected trait’s defining variant. All the selected characteristics are found consistently in each member of the group. For example, if blue eyes and brown hair are chosen as defining characteristics, everyone designated as belonging to that race must share both of those characteristics. Individuals placed in other races should not exhibit this particular combination. Third, individuals of the same race must have descended from a common ancestor, unique to those people. Many shared characteristics present in individuals of a race may be traced to that ancestor by heredity. Based on the preceding defining criteria (selection of discrete traits, agreement of traits, and common ancestry), pure representatives of each racial category should be detectable.


Most researchers maintain that traditional races do not conform to scientific principles of subspecies classification. For example, the traits used to define traditional human races are rarely discrete. Skin color, a prominent characteristic employed, is not a well-defined trait. Approximately eleven genes influence skin color significantly, but fifty or so are likely to contribute. Pigmentation in humans results from a complex series of biochemical pathways regulated by amounts of enzymes (molecules that control chemical reactions) and enzyme inhibitors, along with environmental factors. Moreover, the number of melanocytes (cells that produce melanin) does not differ from one person to another, while their level of melanin production does. Like most complex traits involving many genes, human skin color varies along a continuous gradation. From lightest to darkest, all intermediate pigmentations are represented. Color may vary widely even within the same family. The boundary between black and white is an arbitrary, human-made border, not one imposed by nature.
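The continuous gradation of a many-gene trait can be illustrated with a toy additive model. In the Python sketch below, each of eleven hypothetical loci contributes a small, equal amount to a pigmentation score; the population size and effect sizes are arbitrary assumptions and environmental factors are ignored, but the resulting spread shows why no natural cutoff exists.

```python
# Toy additive model: eleven loci, two alleles each, every "dark" allele adds
# one unit of pigmentation. Population size and effect sizes are arbitrary.
import random

random.seed(1)
NUM_GENES = 11
POPULATION = 10_000

def pigmentation_score():
    """Count of pigment-increasing alleles across two copies of each gene."""
    return sum(random.randint(0, 1) for _ in range(2 * NUM_GENES))

scores = [pigmentation_score() for _ in range(POPULATION)]

# Histogram: every intermediate value is represented, with no natural break
# that could serve as a boundary between discrete color categories.
for value in range(2 * NUM_GENES + 1):
    count = scores.count(value)
    print(f"{value:2d} {'#' * (count // 25)}")
```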


In addition, traditional defining racial characteristics, such as skin color and facial features, are not found in all members of a race. For example, many Melanesians, indigenous to Pacific islands, have pigmentation as dark as any human but are not classified as “black.” Another example is found in members of the Cherokee Nation who have Caucasoid facial features and very dark skin yet have no European ancestry. When traditional racial characteristics are examined closely, many individuals fit no conventional racial group. No “pure” genetic representatives of any traditional race exist.


Common ancestry must also be considered. Genetic studies have shown that Africans do not belong to a single “black” heritage. In fact, several lineages are found in Africa. An even greater variance is found in African Americans. Besides a diverse African ancestry, on average 13 percent of African American ancestry is Northern European. Yet all black Americans are consolidated into one race.


The true diversity found in humans is not patterned according to accepted standards of a subspecies. Only at extreme geographical distances are notable differences found. Human populations in close proximity have more genetic similarities than distant populations. Well-defined genetic borders between human populations are not observed, and racial boundaries in classification schemes are most often formed arbitrarily.




History of Racial Classifications

Efforts to classify humans into a number of distinct types date back at least to the 19th Dynasty of Ancient Egypt. The sacred text Book of Gates described four distinct groups: “Egyptians,” “Asiatics,” “Libyans,” and “Nubians” were defined using both physical and geographical characteristics. Applying scientific principles to divide people into distinct racial groups has been a goal for much of human history.


In 1758, the founder of biological classification, Swedish botanist Carolus Linnaeus, arranged humans into four principal races: Americanus, Europaeus, Asiaticus, and Afer. Although geographic location was his primary organizing factor, Linnaeus also described the races according to subjective traits such as temperament. Despite his use of archaic criteria, Linnaeus did not give superior status to any of the races.


Johann Friedrich Blumenbach, a German naturalist and admirer of Linnaeus, developed a classification with lasting influence. Blumenbach maintained that the original forms, which he named “Caucasian,” were those primarily of European ancestry. His final classification, published in 1795, consisted of five races: Caucasian, Malay, Ethiopian, American, and Mongolian. The fifth race, the Malay, was added to Linnaeus’s classification to show a step-by-step change from the original body type.


After Linnaeus and Blumenbach, many variations of their categories were formulated, chiefly by biologists and anthropologists. Classification “lumpers” combined people into only a few races (for example, black, white, and Asian). “Splitters” separated the traditional groups into many different races. One classification scheme divided all Europeans into Alpine, Nordic, and Mediterranean races. Others split Europeans into ten different races. No one scheme of racial classification came to be accepted throughout the scientific community.




Genetics and Theories of Human Evolution

Advances in DNA technology have greatly aided researchers in their quest to reconstruct the history of Homo sapiens and its diversification. Analysis of human DNA has been performed on both nuclear and mitochondrial DNA. The nucleus is the organelle that contains the majority of the cell’s genetic material. Mitochondria are organelles responsible for generating cellular energy; each mitochondrion contains a single, circular DNA molecule. Research suggests that Africa was the root of all humankind and that humans first arose there 100,000 to 200,000 years ago. Several lines of research, including DNA analysis of hominin fossils, provide further evidence for this theory.


Many scientists are using genetic markers to decipher the migrations that fashioned past and present human populations. For example, DNA comparisons revealed three Native American lineages. Some scientists believe one migration crossed the Bering Strait, most likely from Mongolia. Another theory states that three separate Asian migrations occurred, each bringing a different lineage.




Genetic Diversity Among Races

Three primary forces produce the genetic components of a population: natural selection, nonadaptive genetic change, and mating between neighboring populations. The first two factors may result in differences between populations, and reproductive isolation, whether voluntary or the result of geographic separation, perpetuates the distinctions. Natural selection refers to the persistence of genetic traits favorable in a specific environment. For example, a widely held assumption concerns skin color, primarily a result of the pigment melanin. Melanin offers some shielding from ultraviolet solar rays. According to this theory, people living in regions with concentrated ultraviolet exposure have increased melanin synthesis and, therefore, dark skin color that confers protection against skin cancer. Individuals with genes for increased melanin have enhanced survival rates and reproductive opportunities, and their offspring inherit those same genes. Over generations this process results in a higher percentage of the population carrying genes for elevated melanin production. Therefore, genes coding for melanin production are favorable and persist in these environments.
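As a rough illustration of this selective process, the sketch below iterates a simple haploid selection model in Python; the fitness advantage and starting frequency are arbitrary assumptions used only to show the direction of change, not measured values.

```python
# Haploid selection sketch: an allele with a small survival advantage rises in
# frequency over generations. Fitness values and the starting frequency are
# arbitrary assumptions chosen for illustration.

def next_frequency(p, w_favored=1.05, w_other=1.00):
    """Allele frequency after one generation of selection (haploid model)."""
    return p * w_favored / (p * w_favored + (1 - p) * w_other)

p = 0.10   # starting frequency of the melanin-increasing allele
for generation in range(0, 101, 20):
    print(f"generation {generation:3d}: frequency {p:.2f}")
    for _ in range(20):
        p = next_frequency(p)
```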


The second factor contributing to the genetic makeup of a population is nonadaptive genetic change. This process involves random genetic mutations (alterations). For example, certain genes are responsible for eye color. Individuals carry alternate forms of these genes, or alleles, which result in different eye colors. Because these traits are neutral with respect to environmental influences, they may endure from generation to generation. Different populations will spontaneously produce, sustain, and delete them.


The third factor, mating between individuals from neighboring groups, tends to merge traits from several populations. This genetic mixing often results in offspring with blended characteristics.


Several studies have compared the overall genetic complement of various human populations. On average, any two people, whether of the same or of different races, diverge genetically by a mere 0.1 percent. It is estimated that only 0.012 percent contributes to traditional racial variations. Hence, most of the genetic differences found between a person of African descent and a person of European descent can also be found between two individuals with the same ancestry. The genes themselves do not differ; it is the proportion of individuals carrying a specific allele that varies from population to population.


Upon closer examination, it was found that the continent of Africa is unequaled with respect to cumulative genetic diversity. Numerous lineages are found in Africa, with the Khoisan peoples of southern Africa being the most distinct. Therefore, two people of different ethnicities who do not have recent African ancestry (for example, Northern Europeans and Southeast Asians) have more similar genetics than members of two distinct African ethnic groups. This finding supports theories of early human migration in which humans first evolved in Africa and a subset left the continent, experienced a population bottleneck, and then established the human populations around the world.




Human Genome Diversity Project and Advances in Research

Many scientists are attempting to address the negative history associated with racial studies. The Human Genome Diversity Project (HGDP) was initiated by Stanford University in 1993 and functions independently of the Human Genome Project. The HGDP aims to collect and store DNA from ethnically diverse populations around the world, creating a library of samples to represent global human diversity. Results of future studies may aid in gene therapy treatments and greater success with organ transplantation. As a result, a more thorough understanding of the genetic diversity and unity of the species Homo sapiens will be possible.


At the population level, human diversity is greatest within racial/cultural groups rather than between them. Originally, geneticists who studied the genetic diversity of human populations were limited to data from very few genetic loci (locations of interest in the genome); however, recent studies are able to analyze hundreds to thousands of loci simultaneously. It is currently estimated that 90 percent of genetic variation in human beings is found within each purported racial group, while differences between the groups account for only the remaining 10 percent.
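The within-group versus between-group partition can be made concrete with a small numerical sketch. The Python example below uses made-up measurements for three hypothetical populations (not real genetic data), chosen so that most of the spread lies within groups, mirroring the pattern described above.

```python
# Within- versus between-group share of variance for three hypothetical
# populations. The measurements are invented for illustration only.
from statistics import mean, pvariance

groups = {
    "population_1": [3.6, 4.4, 3.9, 4.3, 3.8],
    "population_2": [3.7, 4.5, 4.0, 4.4, 3.9],
    "population_3": [3.8, 4.6, 4.1, 4.5, 4.0],
}

all_values = [v for values in groups.values() for v in values]
total_variance = pvariance(all_values)

# With equal group sizes, the pooled within-group variance is the mean of the
# individual group variances.
within_variance = mean(pvariance(values) for values in groups.values())

print(f"share of variance within groups:  {within_variance / total_variance:.0%}")
print(f"share of variance between groups: {1 - within_variance / total_variance:.0%}")
```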


A second method of studying human genetic diversity is to compare ethnically diverse individuals and search for similarities and differences in their genomes. Early studies involved only a few dozen genetic loci and as a result did not find that individuals clustered (grouped together) based on their geographic origin. Recent studies, however, were able to analyze substantially more genetic loci, yielding data with stronger statistical power. These studies focused on individuals from three distinct geographic areas: Europe, sub-Saharan Africa, and East Asia. Indeed, individuals clustered with, or shared more genetic similarities with, others of the same geographic region. Participants from Africa were found to have the greatest diversity, which is in agreement with the population studies. Another cluster consisted exclusively of Europeans, and a third comprised the Asian individuals. However, when individuals from neighboring regions were also analyzed, such as South Indians, the analysis showed similarities to both East Asians and Europeans. This finding may be explained by the numerous migrations between Europe and India during the past ten thousand years. Many individuals did not cluster with their geographic cohorts, demonstrating that although people tend to share more genetic similarities with others from their region, they cannot be neatly sorted into discrete racial groups.


Race or an individual’s ancestry can sometimes provide useful information in medical decision making, much as gender or age do. Certain genetic conditions are more common among particular ethnocultural groups. For example, hemochromatosis is more prevalent among Northern Europeans and Caucasians, whereas sickle-cell disease is more often found in Africans and African Americans. Meanwhile, other genetic diseases, such as spinal muscular atrophy (SMA), are equally prevalent across racial groups. If a disease-causing gene is common, then it is likely to be relatively ancient and thus shared across ethnicities. Moreover, some genetic conditions remain prevalent in populations because they provide an adaptive advantage, as seen in sickle-cell carriers being protected against malarial infection. Likewise, an individual’s response to drugs may be mediated by his or her genetic makeup. A gene called CYP2D6 is involved in the metabolism, or breakdown, of many important drugs, such as codeine and morphine. Some individuals have no working copy of this gene, whereas others have one or two copies that function properly. Individuals with no working copy are found most often among those of European heritage (26 percent), whereas fewer Asian (6 percent) and African (7 percent) individuals fall into this category. Thus it may be tempting to make medical decisions based on a patient’s ethnic heritage; however, doing so may lead to inaccurate diagnoses (missing sickle-cell disease in an Asian individual) or inappropriate drug administration (withholding codeine from a Caucasian patient). Ideally, medical decisions should be based on each individual’s own genetic makeup rather than his or her ethnic heritage. Future patients may be able to first request an analysis of their genome, which would aid their physicians in making genetically appropriate medical decisions.




Sociopolitical Implications

Race is often portrayed as a natural, biological division, the result of geographic isolation and adaptation to local environments. However, confusion between biological and cultural classification obscures perceptions of race. When individuals describe themselves as “black,” “white,” or “Hispanic,” for example, they are usually describing cultural heritage as well as biological similarities. The relative importance of perceived cultural affiliation or genetics varies depending on the circumstances. Examples illustrating the ambiguities are abundant. Nearly all people with African American ancestry are labeled black, even if they have a white parent. In addition, dark skin color designates one as belonging to the black race, lumping together Africans and aboriginal Australians, who share no recent common genetic lineage. State laws, some on the books until the late 1960s, required a “Negro” designation for anyone with one-eighth black heritage (one black great-grandparent).


Unlike biological boundaries, cultural boundaries are sharp, repeatedly motivating discrimination, genocide, and war. In the early and mid-twentieth century, the eugenics movement, advocating the genetic improvement of the human species, translated into laws against interracial marriage, sterilization programs, and mass murder. Harmful effects include accusations of deficiencies in intelligence or moral character based on traditional racial classification.


The frequent use of biology to devalue certain races and excuse bigotry has profound implications for individuals and society. Blumenbach selected Caucasians (who inhabit regions near the Caucasus Mountains, a Russian and Georgian mountain range) as the original form of humans because in his opinion they were the most beautiful. All other races deviated from this ideal and were, therefore, less beautiful. Despite Blumenbach’s efforts not to demean other groups based on intelligence or moral character, the act of ranking in any form left an ill-fated legacy.


In conclusion, race remains a contentious issue both in many fields of science and within the greater society. Recent genomic studies at both the individual and the population level have shown that the majority of human genetic composition is universal and shared across all ethnocultural groups, and that shared genetics is most common among individuals who originate from the same geographic region. There is, however, no scientific support for the concept of distinct, “pure,” and nonoverlapping races. Unfortunately, throughout human history the term “race” has been used and abused for sociopolitical gain or to justify bigotry toward, and abuse of, individuals. It is now known that human genetic diversity is a continuum, with natural selection, nonadaptive genetic change, and mating patterns as its true driving forces.




Key terms



eugenics: a movement concerned with the improvement of human genetic traits, predominantly by the regulation of mating


Human Genome Diversity Project: a project, organized independently of the Human Genome Project, in which DNA from native populations around the world is collected for study


population: a group of geographically localized, interbreeding individuals


race: a collection of geographically localized populations with well-defined genetic traits





Bibliography


Cavalli-Sforza, Luigi L. The Great Human Diasporas: A History of Diversity and Evolution. Translated by Sarah Thorne. Reading, Mass.: Addison-Wesley, 1995. Argues that humans around the world are more similar than different.



_______, et al. The History and Geography of Human Genes. Princeton, N.J.: Princeton University Press, 1996. Often referred to as a “genetic atlas,” this volume contains fifty years of research comparing heritable traits, such as blood groups, from more than one thousand human populations.



Fish, Jefferson M., ed. Race and Intelligence: Separating Science from Myth. Mahwah, N.J.: Lawrence Erlbaum, 2002. An interdisciplinary collection disputing race as a biological category and arguing that there is no general or single intelligence and that cognitive ability is shaped through education.



Garcia, Jorge J. E. Race or Ethnicity? On Black or Latino Identity. Ithaca, N.Y.: Cornell University Press, 2007. Essays discuss whether racial identity matters and consider issues associated with assimilation, racism, and public policy.



Gates, E. Nathaniel, ed. The Concept of “Race” in Natural and Social Science. New York: Garland, 1997. Argues that the concept of race, as a form of classification based on physical characteristics, was arbitrarily conceived during the Enlightenment and is without scientific merit.



Gibbons, A. “Africans’ Deep Genetic Roots Reveal Their Evolutionary Story.” Science 324 (2009): 575. Describes the largest study ever conducted of African genetic diversity, which reveals that Africans are descended from 14 distinct ancestral groups that often correlate with language and cultural groups.



Gould, Stephen Jay. The Mismeasure of Man. Rev. ed. New York: W. W. Norton, 1996. Presents a historical commentary on racial categorization and a refutation of theories espousing a single measure of genetically fixed intelligence.



Graves, Joseph L., Jr. The Emperor’s New Clothes: Biological Theories of Race at the Millennium. New Brunswick, N.J.: Rutgers University Press, 2001. Argues for a more scientific approach to debates about race, one that takes human genetic diversity into account.



Herrnstein, Richard J., and Charles Murray. The Bell Curve: Intelligence and Class Structure in America. New York: Free Press, 1994. The authors maintain that IQ is a valid measure of intelligence, that intelligence is largely a product of genetic background, and that differences in intelligence among social classes play a major part in shaping American society.



Jorde, L. B., and S. P. Wooding. “Genetic Variation, Classification, and ‘Race.’” Nature Genetics 36, no. 11 (2004): S28. A review article that provides an overview of human variation and discusses whether current data support historic ideas of race, and what these findings imply for biomedical research and medicine.



Kevles, Daniel J. In the Name of Eugenics: Genetics and the Uses of Human Heredity. Cambridge, Mass.: Harvard University Press, 1995. Discusses genetics both as a science and as a social and political perspective, and how the two often collide to muddy the boundaries of science and opinion.



Royal, C., and G. Dunston. “Changing the Paradigm from ‘Race’ to Human Genome Variation.” Nature Genetics 36 (2004): S5–S7. A commentary suggesting that researchers move beyond traditional categories and view ethnic groups in terms of genomic diversity rather than as distinct races.



Valencia, Richard R., and Lisa A. Suzuki. Intelligence Testing and Minority Students: Foundations, Performance Factors, and Assessment Issues. Thousand Oaks, Calif.: Sage Publications, 2000. Historical and multicultural perspective on intelligence and its often assumed relation with socioeconomic status, home environment, test bias, and heredity.

What is incompetency?


Introduction

The terms “incompetency” and “incompetence” refer to a state of diminished mental functioning that, for example, precludes an individual from giving informed consent to undergo a particular medical procedure. Other contexts in which the issue of incompetency—or its obverse, competency—might arise include competence to consent voluntarily to psychiatric hospitalization, competence to give informed consent to participate in research, and competence in right-to-die issues, such as requesting physician-assisted suicide. In legal contexts, situations requiring determination of whether an individual has the requisite mental abilities for competent decision making include competence to execute a will, competence to enter a plea, competence to stand trial, competence to waive Miranda and other constitutional rights, competence to waive death sentence appeals, and competence to be executed.






Psychologists and other mental health professionals rarely use the terms “incompetency” and “incompetent” because they imply a global deficit that has little practical meaning or application. Alzheimer’s disease patients, for example, are at risk for autonomy-restricting interventions, including institutionalization and guardianship. As greater attention is paid to preserving individual rights, increased emphasis is placed on identifying, in functional terms, specific mental tasks and skills that people retain and lose. “Mental capacity” is the term used to describe the cluster of mental skills that people use in their everyday lives. It includes memory, logic and reasoning, the ability to calculate, and the mental flexibility to shift attention from one task to another. Describing a person’s ability or mental capacity to perform particular tasks, such as remembering to pay bills or calculating how much change is owed, enables professionals to assess vulnerability more effectively and develop suitable treatment and service plans.


Medical decision-making capacity is defined as the ability to give informed consent to undergo a particular medical test or intervention, or the ability to refuse such intervention. Capacity may also extend to questions of disclosing sensitive confidential information or of an individual’s permission to participate in research. When a legal determination of competency has not been made, the term “decisional capacity” is used to describe an individual’s ability to make a health care decision.




Conditions and Factors Affecting Mental Capacity

Among the conditions that affect mental capacity are mental retardation, mental illness, and the progressive dementias seen in older adults. A variety of other medical conditions can also interfere with mental capacity, such as the encephalopathy seen in patients with advanced liver disease. It should be noted that capacity can and will vary over time, either as a disease process progresses, as in a dementing illness, or as clinical conditions wax and wane, as in a patient with bipolar disorder. In the case of delirium, for example, a patient may be capable of participating in treatment decisions in the morning but incapable of making a decision later that day. A variety of factors, some of which are treatable, may contribute to mental decline; these include poor nutrition, depression, and interactions among medications.


Mental, or decision-making, capacity is also treatment- or situation-specific. For example, a person may have the ability to agree to a diagnostic procedure, yet be unable to comprehend fully the consequences of accepting a particular medication or surgical intervention. When assessing capacity, one is determining whether an individual is capable of deciding about a specific treatment or class of treatments rather than making a global determination of incompetence. A person will sometimes be found to have the capacity to make some decisions and yet not others, especially when more complex information must be presented and understood.




The Role of the Mental Health Practitioner

Psychologists and other mental health practitioners are often asked to evaluate a person’s cognitive functioning to determine whether the individual is competent, for example, to execute a will or to assign power of attorney. For individuals obviously in mental decline, the role of the psychologist is to determine whether it is appropriate to appoint a guardian to manage the person’s financial affairs and to make medical decisions regarding long-term care. To determine treatment decision-making capacity, the psychologist assesses the individual’s understanding of the disorder and the recommended treatment, appreciation of the situation and treatment choices, ability to reason and evaluate options and consequences of choices, and ability to express a choice.


Given the lack of consensus on both the exact criteria and the method for assessing mental competency, it is important that psychologists undertake a broad examination to determine an individual’s ability to make decisions affecting his or her own welfare. A competency evaluation begins with a review of the person’s medical and psychological records and history, with information being supplied by family members as well as the individual being evaluated. Following the record review, the individual is seen for a clinical interview, which includes several measures of orientation, short-term memory, and reasoning ability.


Following the clinical interview, psychological testing is administered for an objective assessment of cognitive functioning. The psychologist then offers an expert opinion, based on the evaluation, regarding whether the person is capable of making decisions regarding his or her welfare and finances.


This process is also done retrospectively at times, for example, to determine if an individual was competent when the person’s last will was executed. The competency evaluation has been characterized as an iterative process in which the clinician is both assessing and attempting to maximize the individual’s capacity and autonomy.




Instruments for Making Assessments

Although there is no definitive assessment instrument or battery of instruments for determining competency, a variety of commonly used instruments are available to the psychologist. Personality tests such as the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Personality Assessment Inventory (PAI) can be used to assess both abnormal and normal personality traits. The Wechsler Adult Intelligence Scale-IV (WAIS-IV) can be used to assess overall intelligence. If organic brain damage is suspected, the Luria-Nebraska, a neuropsychological test, can be administered.


A variety of instruments are specifically designed to address competency issues, including the Capacity to Consent to Treatment Instrument (CCTI), the MacArthur Competency Assessment Tools, the Competency Screening Test (CST), the Competency to Stand Trial Assessment Instrument (CAI), the Interdisciplinary Fitness Interview-Revised (IFI-R), the Georgia Court Competency Test (GCCT), the Evaluation of Competency to Stand Trial-Revised (ECST-R), the Aid to Capacity Evaluation, and the Competence Questionnaire (CQ), for which versions for pediatric medical patients and children in the juvenile justice system have been developed. The reliability and validity of these instruments vary, and many were developed for very specific populations, which must be kept in mind when they are employed in an evaluation.




Competency in the Legal Context

Often, forensic mental health professionals are called on to make determinations regarding a person’s competency in a legal context. Psychologists may be asked to determine a person’s competency to stand trial, to waive rights, to enter a plea, or to be executed. Such evaluations deal specifically with whether the person was cognizant of his or her actions and the results of those actions, a question that may concern present-time actions or actions in the past; assessing competency in the past can be particularly difficult. The psychologist’s report would include psychiatric, medical, and substance abuse history, as well as results from mental status and psychological testing. A key part of the evaluation is the determination of the defendant’s ability to understand the proceedings. Within the legal system, the competency of adults is assumed, and the burden of proof rests on the party questioning competency. Competency evaluations are often required for children in the juvenile justice system, and young children are assumed not to be competent.


The courts recognize a difference between competency and credibility in a court of law. An individual may be deemed to be incompetent, meaning that they have diminished capacity to understand; however, they may still be a credible witness, meaning that they can still relate (often to a judge or jury) events that have occurred. An individual may not be aware of what it means to waive rights, yet may still be able to tell the jury his or her whereabouts at the time a crime was committed.




Bibliography


Arias, Jalayne J. "A Time to Step In: Legal Mechanisms for Protecting Those with Declining Capacity." American Journal of Law & Medicine 39.1 (2013): 134–59. Print.



Grisso, T., and P. S. Applebaum. Assessing Competence to Consent to Treatment: A Guide for Physicians and Other Health Professionals. New York: Oxford UP, 1998. Print.



Grisso, T., G. Vincent, and D. Seagrave, eds. Handbook of Mental Health Screening and Assessment in Juvenile Justice. New York: Guilford, 2005. Print.



Grisso, T., et al. “Juveniles’ Competence to Stand Trial: A Comparison of Adolescents’ and Adults’ Capacities as Trial Defendants.” Law and Human Behavior 27 (2003): 333–63. Print.



Looman, Mary. "Establishing Parental Capability as a Legal Competency in Child Maltreatment Cases." Annals of the American Psychotherapy Association 13.2 (2010): 45–52. Print.



O'Donnell, Philip C., and Bruce Gross. "Developmental Incompetence to Stand Trial in Juvenile Courts." Journal of Forensic Sciences 57.4 (2012): 989–96. Print.



Sachs, G. A. “Assessing Decision-Making Capacity.” Topics in Stroke Rehabilitation 7.1 (2000): 62–64. Print.



Schopp, Robert F. Competence, Condemnation, and Commitment: An Integrated Theory of Mental Health Law. Washington: APA, 2001. Print.

Saturday, December 27, 2014

In To Kill A Mockingbird, does Mayella Ewell know the consequences of kissing Tom Robinson?

Harper Lee never explicitly tells us that Mayella Ewell knows the consequences of kissing Tom Robinson, but we can assume that she is, in fact, fully aware of the repercussions of her actions. Indeed, she claims that Tom Robinson raped her because she knows that, if the truth comes out, she'll face not only the hatred of the Maycomb community, but abusive treatment at the hands of her father.


Mayella Ewell's life is pretty grim. Living in abject poverty and isolation, Mayella has almost no positive human contact. Her father is abusive in more ways than one, and she has no friends, so the only positive connection she has is with Tom Robinson, which explains why she tries to kiss him. A white woman having intimate relations with a black man would have been extremely taboo in the South at the time the novel takes place, and anyone involved in such a relationship would have faced harsh treatment at the hands of the community. Moreover, Bob Ewell is already prone to abuse, so the vileness of his behavior would only intensify if the truth about Mayella's actions came to light. We know that Mayella is aware of these consequences because she claims that Tom tried to rape her; clearly, her fear of those consequences matters more to her than protecting an innocent man.

In Of Mice and Men, George says to Lennie: "When I think of the swell time I could have without you, I go nuts. I never get no peace." How does...

George shows his loyalty to Lennie by sticking with him, coaching him as to what to say in job interviews, planning for both of them, and taking care of him in small ways such as making dinner when they are on the road. George does sometimes lash out at Lennie, as in this quote, because Lennie can get on his nerves and because he wants to keep Lennie under his control. However, George knows he is as emotionally dependent on Lennie as Lennie is on him. George doesn't really want to have "swell" times drinking and partying: what sustains him is the friendship and innocent companionship Lennie provides. As Lennie says, George will stick by Lennie and Lennie by him and the two will be stronger together. George also shows his loyalty by constantly retelling Lennie their sustaining myth: that they will buy a farm together, settle down on it and live off "the fat of the land." 

Friday, December 26, 2014

What is the most significant episode/event in "All Summer in a Day"?

The most significant episode in the story is when the children lock Margot, the only child in the class who remembers the sun and the one most pining for sunshine, into a windowless closet right before the sun comes out on Venus for the single hour it appears every seven years. Bradbury describes the closet as being like a dark tunnel, shows us the door trembling as Margot bangs on it and throws herself against it, crying, and then shows the children "smiling" in the triumph of their cruelty as they head out into the emerging sun. Bradbury needs only that single word, smiling, to convey the children's sinister delight at thwarting Margot's deepest desire.


Bradbury then juxtaposes the horror of Margot in the dark closet against the joy of the children experiencing the sun; they forget her in their delight. Bradbury is a master of description, and we see and feel with the children the flaming bronze sun, the sky like a giant blue tile, and the heat of the sun like an iron on their skin. As readers, however, we keep Margot in the back of our minds, wondering whether the children will remember her in time.

Wednesday, December 24, 2014

How do Sammy's actions in "A & P" reveal his character? In what ways are his thoughts and actions at odds with each other?

At the end of the story, Sammy quits his job after his manager, Lengel, tells three girls, who have been shopping in their bathing suits, that they need to dress decently when they come into the A & P. Sammy says to Lengel, "You didn't have to embarrass them." It seems that Sammy is quitting out of chivalry, to stand up for the girls whom Lengel has judged and disrespected. Yet Sammy's action seems misguided, and his words to Lengel ring hollow, since the reader has seen the way Sammy judges other customers and has heard the objectifying thoughts he has about the girls.


From the beginning of the story, Sammy has thought of the customers in the store as "sheep," and he has humorously called one lady in his checkout line a witch.



...if she'd been born at the right time they would have burned her over in Salem.



Sammy has also been judging the girls and objectifying their bodies from the moment they enter the store. Furthermore, Sammy has little respect for the girls:



You never know for sure how girls' minds work (do you really think it's a mind in there or just a little buzz like a bee in a glass jar?)



After quitting, Sammy walks outside to see the girls, calling them "my girls." They are gone. It seems Sammy thought his gesture would win the girls over, but they do not even notice his "heroic" act. Calling them "my girls" further shows his objectification of them. It could be argued that Sammy has disrespected the girls far more than Lengel did.

Tuesday, December 23, 2014

In the novel Fahrenheit 451, what are Montag's two childhood memories in "Part Two: The Sieve and the Sand"?

Montag recalls two childhood events in "Part Two: The Sieve and the Sand." At the beginning of the section, Montag attempts to read and memorize Bible verses while he rides the subway, but he is continually distracted by an advertisement for Denham's Dentifrice blaring through the train's loudspeakers. Montag compares his failure to retain and remember the Bible verses to a time when, as a boy, he went to the beach with his cousin, who bet him a dime that he could not fill a sieve with sand. The faster Montag poured the sand into the sieve, the faster it sifted through. In the analogy, Montag's mind is the sieve, and the information he is attempting to retain is the sand.


Montag recalls the second childhood memory while he stares at Mildred's friends and listens to them discuss their superficial, immoral lives. He says that the women's faces remind him of the faces of the saints he looked at as a child when he went to church. The saints' faces meant nothing to him then, as he tried to get a sense of what religion was and to understand its meaning; he felt numb looking at the porcelain statues, much as he now feels in his parlor while Mildred's friends talk about nonsense.

What are some Italian Renaissance elements found in "My Last Duchess"?

The Duke of Ferrara, who narrates "My Last Duchess," is what Browning would consider a prototypical Renaissance Italian character. He may be based on Alfonso II d'Este (22 November 1533 – 27 October 1597), a Renaissance Italian nobleman.


The Duke is a collector of art, both of his own period and of works imitating the classical past. He is extremely wealthy, but he does not see aesthetic or spiritual value in the works he collects, nor does he value people for their inner selves. Instead, he is purely worldly, both in the sense of being sophisticated and cosmopolitan and in the sense of valuing only the tangible material goods of this world. Browning regards this habit of collecting as characteristic of the Renaissance.


The most typical Renaissance element of the poem is the portrait of the Duchess by Frà Pandolf. The portrait is in a distinctly Italian Renaissance style, which emphasizes realism (in contrast to the more spiritual focus of medieval art) and delicate coloring. Frà Pandolf may be based on an earlier painter about whom Browning wrote elsewhere, Frà Filippo Lippi (1406–1469), who painted delicately beautiful portraits but was also sexually quite active despite the putative celibacy of his monastic vocation.

Monday, December 22, 2014

What is chorionic villus sampling?


Indications and Procedures


Chorionic villus sampling can be performed between the ninth and thirteenth weeks of
pregnancy to detect genetic and chromosomal abnormalities. The procedure is recommended when there is an increased risk of genetic disorders in the fetus, such as Down syndrome, sickle cell disease, and muscular dystrophy.



Chorionic villus sampling involves collecting a small sample of the chorionic villi, the finger-like projections on the developing placenta, which delivers food and oxygen to the fetus. A sample of chorionic villi can be obtained from the point at which the placenta attaches to the uterine wall, either by inserting a needle through the abdomen or by entering the cervix with a small flexible catheter through the vagina. The choice of approach depends on the position of the placenta. An antiseptic cleansing solution is applied to the area prior to sampling, and ultrasound is used to locate the fetus and the
placenta and its villi.


A 10- to 25-milligram sample is collected with a syringe; the tissue is then purified and sometimes cultured. Since the chorionic villi originate from the same fertilized egg as the fetus, they normally share the fetus's genetic makeup. Results are available within one to two weeks.




Uses and Complications

Along with detecting genetic and chromosomal disorders, chorionic villus sampling can be used to determine the sex of the embryo, but it should never be used for this purpose alone because of the risks involved. Because testing can be done early in the pregnancy, should the woman choose to terminate it, an easier first-trimester abortion can be performed. If the results of the test are favorable, the parents have early peace of mind.


Possible complications of chorionic villus sampling include vaginal bleeding, cramping, and uterine infection. More serious risks involve Rh incompatibility between maternal and fetal blood, spontaneous abortion (miscarriage), and even possible fetal injury. A 2003 intervention review published in the Cochrane Database of Systematic Reviews found that the rate of miscarriage is significantly higher with chorionic villus sampling than with amniocentesis, which is performed after sixteen weeks and yields the same information.


Some studies suggest that chorionic villus sampling itself may cause some birth defects; others do not. Also, the procedure can be inaccurate. Abnormalities may occur in some placental cells but not in the fetus. This might lead to aborting a healthy fetus. With the guidance of a physician, the risks and benefits should be compared with other available procedures.




Bibliography


A.D.A.M. Medical Encyclopedia. "Chorionic Villus Sampling." MedlinePlus, August 7, 2012.



Caughey, Aaron B., Linda M. Hopkins, and Mary E. Norton. “Chorionic Villus Sampling Compared with Amniocentesis and the Difference in the Rate of Pregnancy Loss.” Obstetrics and Gynecology 108, no. 3 (September, 2006): 612–616.



"Chorionic Villus Sampling." Mayo Foundation for Medical Education and Research, October 10, 2012.



Ettorre, Elizabeth, ed. Before Birth: Understanding Prenatal Screening. Brookfield, Vt.: Ashgate, 2001.



Filkins, Karen, and Joseph F. Russo, eds. Human Prenatal Diagnosis. 2d rev. ed. New York: Marcel Dekker, 1990.



Harper, Peter S. Practical Genetic Counselling. 6th ed. New York: Oxford University Press, 2004.



Lichtman, Ronnie, Lynn Louise Simpson, and Allan Rosenfield. Dr. Guttmacher’s Pregnancy, Birth, and Family Planning. Rev. ed. New York: New American Library, 2003.



Montemayor-Quellenberg, Marjorie, and Andrea Chisholm. "Chorionic Villus Sampling—Transabdominal." Health Library, March 15, 2013.



Montemayor-Quellenberg, Marjorie, and Andrea Chisholm. "Chorionic Villus Sampling—Transcervical." Health Library, March 15, 2013.



Moore, Keith L., and T. V. N. Persaud. The Developing Human: Clinically Oriented Embryology. 9th ed. Philadelphia: Saunders/Elsevier, 2013.



Pierce, Benjamin A. The Family Genetic Sourcebook. New York: John Wiley & Sons, 1990.

How does the choice of details set the tone of the sermon?

Edwards is remembered for his choice of details, particularly in this classic sermon. His goal was not to tell people about his beliefs; he ...