The Social History of the American Family: An Encyclopedia (ISBN 9781452286167; 9781452286143)

The American family has come a long way from the days of the idealized family portrayed in iconic television shows of th


Language: English. Pages: 2111. Year: 2014.





Table of Contents:
Cover......Page 1
Volume 1......Page 2
Copyright......Page 5
Contents......Page 6
List of Articles......Page 8
Reader's Guide......Page 16
About the Editors......Page 23
List of Contributors......Page 24
Introduction......Page 32
Chronology......Page 38
A Chapter......Page 50
B Chapter......Page 144
C Chapter......Page 206
D Chapter......Page 364
E Chapter......Page 446
Volume 2......Page 512
F Chapter......Page 513
G Chapter......Page 647
H Chapter......Page 705
I Chapter......Page 757
J Chapter......Page 827
K Chapter......Page 835
L Chapter......Page 845
M Chapter......Page 877
N Chapter......Page 979
Volume 3......Page 1017
O Chapter......Page 1018
P Chapter......Page 1038
Q Chapter......Page 1146
R Chapter......Page 1150
S Chapter......Page 1202
T Chapter......Page 1368
U Chapter......Page 1450
V Chapter......Page 1462
W Chapter......Page 1476
Y Chapter......Page 1524
Volume 4......Page 1532
Volume 4 Contents......Page 1533
Primary Documents......Page 1536
Glossary......Page 1968
Resource Guide......Page 1978
Appendix......Page 1984
Index......Page 2020
Photo Credits......Page 2111


The Social History of the American Family

An Encyclopedia

Volume 1

EDITORS

Marilyn J. Coleman
Lawrence H. Ganong
University of Missouri

FOR INFORMATION:

SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: [email protected]

SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India

SAGE Publications Ltd.
1 Oliver’s Yard
55 City Road
London EC1Y 1SP
United Kingdom

SAGE Publications Asia-Pacific Pte. Ltd.
3 Church Street
#10-04 Samsung Hub
Singapore 049483

Copyright © 2014 by SAGE Publications, Inc.

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Library of Congress Cataloging-in-Publication Data

The social history of the American family : an encyclopedia / Marilyn Coleman, Lawrence H. Ganong, editors.
volumes cm
Includes index.
ISBN 978-1-4522-8616-7 (hardcover : alk. paper)
1. Families--United States--History--Encyclopedias. I. Coleman, Marilyn. II. Ganong, Lawrence H.
HQ535.S63 2014
306.850973--dc23
2014018725

Executive Editor: Jim Brace-Thompson
Cover Designer: Gail Buschman
Reference Systems Manager: Leticia Gutierrez
Reference Systems Coordinators: Laura Notton, Anna Villasenor
Reference Production Manager: Eric Garner
Marketing Manager: Carmel Schrire

Golson Media
President and Editor: J. Geoffrey Golson
Production Director: Mary Jo Scibetta
Senior Author Manager: Joseph Golson
Layout Editors: Kenneth W. Heller, Tammy Loverdos, Paul Streeto, Amy Weiss
Copyeditors: Theresa Kay, Barbara Paris, Kathy Wilson Peacock
Production Editor: TLK Editing Services
Proofreaders: Michele Chesley, Mary Le Rouge, Barbara Paris
Indexer: J S Editorial

14 15 16 17 18 10 9 8 7 6 5 4 3 2 1

Contents

Volume 1
List of Articles vii
Reader’s Guide xv
About the Editors xxii
List of Contributors xxiii
Introduction xxxi
Chronology xxxvii
Articles
A 1
B 95
C 157
D 315
E 397

Volume 2
List of Articles vii
Articles
F 463
G 597
H 655
I 707
J 777
K 785
L 795
M 827
N 929

Volume 3
List of Articles vii
Articles
O 967
P 987
Q 1095
R 1099
S 1151
T 1317
U 1399
V 1411
W 1425
Y 1473

Volume 4
List of Primary Documents v
Primary Documents 1481
Glossary 1913
Resource Guide 1923
Appendix. America's Families and Living Arrangements: 2012 1929
Index 1965
Photo Credits 2056

List of Articles A AARP Abortion Acculturation ADC/AFDC Addams, Jane Adler, Alfred Adolescence Adolescent and Teen Rebellion Adolescent Pregnancy Adoption, Closed Adoption, Grandparents and Adoption, International Adoption, Lesbian, Gay, Bisexual, and Transgender People and Adoption, Mixed-Race Adoption, Open Adoption, Second Parents and Adoption, Single People and Adoption Laws Advertising and Commercials, Families in Advice Columnists African American Families Agnostics Alan Guttmacher Institute Alcoholism and Addiction Alimony and Child Support Almshouses AMBER Alert

American Association for Marriage and Family Therapy American Family Association American Family Therapy Academy American Home Economics Association “Anchor Babies” Annie E. Casey Foundation Anorexia Arranged Marriage Artificial Insemination Asian American Families Assimilation Assisted Living Assisted Reproduction Technology Association of Family and Conciliation Courts Atheists Attachment Parenting Attachment Theories Automobiles B Baby Boom Generation Baby M Baby Showers Bandura, Albert Baptism Bar Mitzvahs and Bat Mitzvahs Barbie Dolls “Best Interests of the Child” Doctrine Bettelheim, Bruno


Birth Control Pills Birth Order Birthday Parties Blogs Books, Adult Fiction Books, Adult Nonfiction Books, Children’s Boomerang Generation Bowen, Murray Bowlby, John Boy Scouts Brazelton, T. Berry Breadwinner-Homemaker Families Breadwinners Breastfeeding Bronfenbrenner, Urie Brown v. Board of Education Budgeting Bulimia Bullying C Camp Fire Girls Caregiver Burden Caring for the Elderly Catholicism CD-ROMs Cell Phones Center for Missing and Exploited Children Central and South American Immigrant Families Child Abuse Child Advocate Child Care Child Custody Child Health Insurance Child Labor Child Safety Child Study Association of America Child Support Child Support Enforcement Childhood in America Childless Couples Child-Rearing Experts Child-Rearing Manuals Child-Rearing Practices Children’s Aid Society Children’s Beauty Pageants Children’s Bureau Children’s Defense Fund

Children’s Online Privacy Protection Act Children’s Rights Movement Children’s Television Act Chinese Immigrant Families Christening Christianity Christmas Church of Jesus Christ of Latter-day Saints Circumcision Civil Rights Act of 1964 Civil Rights Movement Civil Unions Cocooning Cohabitation Collectivism Comic Strips Commercialization and Advertising Aimed at Children Common Law Marriage Communes Community Property Companionate Marriage Conflict Theory Constitution, U.S. Contraception: IUDs Contraception: Morning After Pills Contraception and the Sexual Revolution Constructionist and Poststructuralist Theories Cooperative Extension System Coparenting Council on Contemporary Families Courtship Covenant Marriage Credit Cards C-Sections Cult of Domesticity Cults Cultural Stereotypes in Media Curfews Custody and Guardianship D Date Nights Dating Dating Web Sites Day Care Deadbeat Dads Death and Dying Defense of Marriage Act Delinquency




Demographic Changes: Age at First Marriage Demographic Changes: Aging of America Demographic Changes: Cohabitation Rates Demographic Changes: Divorce Rates Demographic Changes: Zero Population Growth/Birthrates Department Stores Desegregation in the Military Digital Divide Direct Home Sales Disability (Children) Disability (Parents) Discipline Disney/Disneyland/Amusement Parks Divorce and Religion Divorce and Separation Domestic Ideology Domestic Masculinity Domestic Partner Benefits Domestic Violence Dowries Dr. Phil Dr. Ruth DREAM Act Dreikurs, Rudolf Drive-Ins Dual-Income Couples/Dual-Earner Families E Earned Income Tax Credit Easter eBay Ecological Theory Education, College/University Education, Elementary Education, High School Education, Middle School Education, Postgrad Education, Preschool Education/Play Balance Egalitarian Marriages Elder Abuse E-Mail Emerging Adulthood Empty Nest Syndrome Engagement Parties Engagement Rings Equal Rights Amendment Erectile Dysfunction Pills Estate Planning

Estate Taxes Ethnic Enclaves Ethnic Food Evangelicals Every Child Matters Evolutionary Theories Extended Families F Facebook Fair Labor Standards Act Families and Health Family and Medical Leave Act Family Businesses Family Consumption Family Counseling Family Development Theory Family Farms Family Housing Family Life Education Family Mediation/Divorce Mediation Family Medicine Family Planning Family Research Council Family Reunions Family Service Association of America Family Stress Theories Family Therapy Family Values Fatherhood, Responsible Father’s Day Fathers’ Rights Feminism Feminist Theory Fertility Film, 1930s Film, 1940s Film, 1950s Film, 1960s Film, 1970s Film, 1980s Film, 1990s Film, 2000s Film, 2010s Film, Silent First Generation Flickr Focus on the Family Food Shortages and Hunger Food Stamps


Foster Care Foster Families Fragile Families Freud, Sigmund Frontier Families Frozen Food Functionalist Theory Funerals

Homelessness Homemaker Homestead Act Hooking Up Household Appliances Housing Crisis Housing Policy Human Genome Project

G Games and Play Gated Communities Gatekeeping Gay and Lesbian Marriage Laws Gender Roles Gender Roles in Mass Media Genealogy and Family Trees Generation Gap Generation X Generation Y Genetics and Heredity German Immigrant Families Gesell, Arnold Lucius Girl Scouts Godparents Grandparenting Grandparents Day Grandparents’ Rights Great Awakening Great Society Social Programs Green Card Marriages Groves Conference on Marriage and the Family

I Immigrant Families Immigration Policy Incest Indian (Asian) Immigrant Families Individualism Industrial Revolution Families Infertility Information Age Inheritance Inheritance Tax/Death Tax In-Laws Intensive Mothering Interfaith Marriage Intergenerational Transmission Internet Internet Pornography, Child Interracial Marriage Intersex Marriage Interventions Irish Immigrant Families Islam It Takes a Village Proverb Italian Immigrant Families

H Half-Siblings Hall, G. Stanley Hanukkah Head Start Health Care Power of Attorney Health of American Families Healthy Marriage Initiative Higher Education Act Hite Report HIV/AIDS Hochschild, Arlie Holt, Luther Home Economics Home Health Care Home Mortgage Deduction

J Japanese Immigrant Families Judaism and Orthodox Judaism K Kindergarten Kinsey, Alfred (Kinsey Institute) Korean Immigrant Families Kwanzaa L Language Brokers Later-Life Families Latino Families Learning Disorders Leisure Electronics



Leisure Time Levittown Life Course Perspective Living Apart Together Living Together Apart Living Wage Love, Types of M MADD Magazines, Children’s Magazines, Women’s “Mama’s Boy” and “Daddy’s Girl” Marital Division of Labor Marketing to and Data Collection on Families/Children Maslow, Abraham Masters and Johnson Maternity Leaves McDonald’s Me Decade Mead, Margaret Mealtime and Family Meals Medicaid Medicare Melting Pot Metaphor Mental Disorders Merger Doctrine Mexican Immigrant Families Middle East Immigrant Families Middle-Class Families Midlife Crisis Midwestern Families Military Families Million Man March Minimum Wage Minuchin, Salvador Miscegenation Mommy Wars Montessori Mother’s Day Mothers in the Workforce Moynihan Report Multigenerational Households Multilingualism Multiple Partner Fertility Multiracial Families Music in the Family Myspace Myth of Motherhood


N National Affordable Housing Act National Center on Child Abuse and Neglect National Center on Elder Abuse National Child Labor Committee National Council on Family Relations National Partnership for Women and Families Native American Families Natural Disasters Natural Families Nature Versus Nurture Debate New Deal New Fatherhood No-Fault Divorce Nuclear Family Nursing Homes O Obesity Office of Child Support Enforcement One Percent, The Online Shopping Open Marriages Orphan Trains Other Mothers Overmothering P Palimony Parent Education Parent Effectiveness Training Parental Controls Parental Supervision Parenting Parenting Plans Parenting Styles Parents as Teachers Parents Without Partners Passover Paternity Leaves Paternity Testing Patriarchal Terrorism Personal Computers Personal Computers in the Home Persons of Opposite Sex Sharing Living Quarters Pets PFLAG Planned Parenthood Polio Polish Immigrant Families


Polyamory Polygamy Postpartum Depression Poverty and Poor Families Poverty Line Power of Attorney Preferential Treatment Prenatal Care and Pregnancy Prenuptial Agreements PREP Program PREPARE/ENRICH Programs Problem Child Prohibition Promise Keepers Promise Rings Protestants Psychoanalytic Theories Pure Food and Drug Act of 1906 Q Quinceañera Ceremonies R Radio: 1920 to 1930 Radio: 1931 to 1950 Radio: 1951 to 1970 Rape Rational Choice Theory Reading to Children Reality Television Relational Dialectics Religious Holidays Religiously Affiliated Schools Remarriage Retirement Richards, Ellen Rituals Rockwell, Norman Roe v. Wade Rogers, Carl Runaways and Homeless Youth Rural Families S Saints Days Same-Sex Marriage Satir, Virginia School Shootings/Mass Shootings Segregation Self-Help, Culture of

Separate Sphere Ideology Sex Information and Education Council of the United States Shakers Shared Custody Sharia Law Shelters Sheppard-Towner Maternity and Infancy Protection Act of 1921 Shopping Centers and Malls Sibling Rivalry Single-Parent Families Skinner, B. F. Skype Slave Families Smart Marriages Conferences Soccer Moms Social Exchange Theory Social Fatherhood Social History of American Families: Colonial Era to 1776 Social History of American Families: 1777 to 1789 Social History of American Families: 1790 to 1850 Social History of American Families: 1851 to 1900 Social History of American Families: 1901 to 1920 Social History of American Families: 1921 to 1940 Social History of American Families: 1941 to 1960 Social History of American Families: 1961 to 1980 Social History of American Families: 1981 to 2000 Social History of American Families: 2001 to the Present Social Mobility Social Security Society for Research in Child Development Society for the Prevention of Cruelty to Children Southern Families Southwestern Families Speed Dating Spock, Benjamin Sports Standard North American Family




Standard of Living Stay-at-Home Fathers Stepchildren Stepfamilies Stepfamily Association of America Stepparenting Stepsiblings Student Loans/College Aid Suburban Families Suicide Sun City and Retirement Communities Sunday School Supermarkets Surrogacy Swaddling Sweet Sixteen Swinging Symbolic Interaction Theory Systems Theory T Tabula Rasa TANF Technology Teen Alcohol and Drug Abuse Teen Pregnancy. See Adolescent Pregnancy Telephones Television Television, 1940s Television, 1950s Television, 1960s Television, 1970s Television, 1980s Television, 1990s Television, 2000s Television, 2010s Television for Children Temperance Tender Years Doctrine Texting Thanksgiving Theater Third Wave Feminism Tossed Salad Metaphor Toys Trailer Parks Transgender Marriage Trusts


Truth in Lending Act of 1968 Twenty-Four-Hour News Reporting and Effect on Families/Children Twitter U Uniform Parentage Act Union Families Urban Families Utopian Experiments and Communities V Vacations Valentine’s Day Video Games Vietnam War Vietnamese Immigrant Families W War on Poverty War on Terror Watson, John B. Wealthy Families Wedding Showers Weddings Weekends Welfare Welfare Reform Westward Expansion Wet Nursing White Flight White House Conference on Families “White Trash” Widowhood Wife Battering Wii Wills Work and Family Working Mothers. See Mothers in the Workforce Working-Class Families/Working Poor WPA Y YMCA YouTube Yuppies YWCA

Reader’s Guide Families, Family Life, Social Identities Birth Order Birthday Parties Child-Rearing Practices Coparenting Discipline Divorce and Separation Egalitarian Marriages Ethnic Food Family Values Foster Families Fragile Families Frozen Food Games and Play Genealogy and Family Trees Generation Gap Generation X Generation Y Genetics and Heredity Half-Siblings Infertility Intergenerational Transmission Love, Types of Me Decade Multigenerational Households Multilingualism Multiracial Families Nuclear Family Open Marriages Persons of Opposite Sex Sharing Living Quarters

Retirement Shared Custody Standard North American Family Stepsiblings Toys Families and Culture Acculturation Almshouses Anorexia Arranged Marriage Assimilation Automobiles Baby Showers Bar Mitzvahs and Bat Mitzvahs Barbie Dolls Books, Adult Fiction Books, Adult Nonfiction Books, Children’s Bulimia Child-Rearing Experts Child-Rearing Manuals Child-Rearing Practices Children’s Beauty Pageants Collectivism Credit Cards C-Sections Dating Death and Dying Department Stores


Direct Home Sales Dowries Drive-Ins Engagement Parties Engagement Rings Ethnic Enclaves Extended Families Family Reunions Father’s Day First Generation Funerals Gated Communities Gatekeeping Grandparents’ Day HIV/AIDS Home Economics Hooking Up Immigrant Families Individualism It Takes a Village Proverb Kindergarten Kwanzaa Language Brokers Levittown Marital Division of Labor McDonald’s Midlife Crisis Mother’s Day Music in the Family Online Shopping Orphan Trains Pets Polio Promise Rings Quinceañera Ceremonies Rape Reading to Children Rockwell, Norman Shakers Shopping Centers and Malls Speed Dating Sports Supermarkets Swaddling Sweet Sixteen Thanksgiving Theater Utopian Experiments and Communities Vacations

Valentine’s Day Wedding Showers Weddings Weekends Wet Nursing WPA Families and Experts Addams, Jane Adler, Alfred Advice Columnists Bandura, Albert Bettelheim, Bruno Bowen, Murray Bowlby, John Brazelton, T. Berry Bronfenbrenner, Urie Dr. Phil Dr. Ruth Dreikurs, Rudolf Family Life Education Family Mediation/Divorce Mediation Family Therapy Freud, Sigmund Gesell, Arnold Lucius Hall, G. Stanley Hite Report Hochschild, Arlie Holt, Luther Kinsey, Alfred (Kinsey Institute) Maslow, Abraham Masters and Johnson Mead, Margaret Minuchin, Salvador Montessori Richards, Ellen Rogers, Carl Satir, Virginia Skinner, B. F. Spock, Benjamin Watson, John B. Families and Religion Agnostics Atheists Baptism Catholicism Christening Christianity Christmas



Church of Jesus Christ of Latter-day Saints Circumcision Cults Divorce and Religion Easter Evangelicals Godparents Great Awakening Hanukkah Islam Judaism and Orthodox Judaism Natural Families Passover Polygamy Promise Keepers Protestants Religious Holidays Religiously Affiliated Schools Rituals Saints Days Sharia Law Sunday School Temperance Families and Social Change Abortion Adolescent Pregnancy Baby Boom Generation Birth Control Pills Breadwinner-Homemaker Families Civil Rights Movement Cocooning Commercialization and Advertising Aimed at Children Communes Contraception: IUDs Contraception: Morning After Pills Contraception and the Sexual Revolution Cult of Domesticity Day Care Demographic Changes: Age at First Marriage Demographic Changes: Aging of America Demographic Changes: Cohabitation Rates Demographic Changes: Divorce Rates Demographic Changes: Zero Population Growth/Birthrates Desegregation in the Military Disney/Disneyland/Amusement Parks Domestic Masculinity Erectile Dysfunction Pills

Fatherhood, Responsible Feminism Great Society Social Programs Healthy Marriage Initiative Information Age Intensive Mothering Internet Interracial Marriage Leisure Electronics Leisure Time Marketing to and Data Collection on Families/Children Military Families Mommy Wars Myth of Motherhood Natural Disasters New Deal New Fatherhood Other Mothers Overmothering Parent Education Polyamory School Shootings/Mass Shootings Segregation Social Fatherhood Sun City and Retirement Communities Swinging Technology Third Wave Feminism Urban Families Vietnam War War on Terror Westward Expansion White Flight Yuppies Families and Social Issues, Problems, and Crises Alcoholism and Addiction Bullying Caregiver Burden Caring for the Elderly Child Abuse Child Custody Child Support Enforcement Delinquency Disability (Children) Disability (Parents) Domestic Violence Elder Abuse


Family Counseling Food Shortages and Hunger Homelessness Incest Interventions Learning Disorders Mental Disorders Multiple Partner Fertility Obesity Postpartum Depression Runaways and Homeless Youth Shelters Single-Parent Families Suicide Teen Alcohol and Drug Abuse Widowhood Wife Battering Families and Social Media Blogs Digital Divide E-Mail Facebook Internet Pornography, Child Myspace Personal Computers in the Home Texting Twitter YouTube Families and Social Stratification/Social Class African American Families “Anchor Babies” Asian American Families Central and South American Immigrant Families Chinese Immigrant Families German Immigrant Families Indian (Asian) Immigrant Families Inheritance Irish Immigrant Families Italian Immigrant Families Japanese Immigrant Families Korean Immigrant Families Latino Families Melting Pot Metaphor Mexican Immigrant Families Middle East Immigrant Families Middle-Class Families Military Families

Miscegenation One Percent, The Polish Immigrant Families Poverty and Poor Families Tossed Salad Metaphor Trailer Parks Vietnamese Immigrant Families Wealthy Families “White Trash” Working-Class Families/Working Poor Families and Technology Artificial Insemination CD-ROMs Cell Phones eBay Flickr Household Appliances Human Genome Project Personal Computers in the Home Skype Telephones Television Video Games Wii Families and the Economy Boomerang Generation Budgeting Child Labor Dual-Income Couples/Dual-Earner Families Family Businesses Family Consumption Family Farms Housing Crisis Living Wage Minimum Wage Social Mobility Social Security Standard of Living Welfare Families in America Adolescence Adolescent and Teen Rebellion Assisted Living Attachment Parenting Breadwinner-Homemaker Families Breadwinners




Breastfeeding Childhood in America Childless Couples Cohabitation Courtship Date Nights Dating Web Sites Divorce and Separation Domestic Ideology Education, College/University Education, Elementary Education, High School Education, Middle School Education, Postgrad Education, Preschool Education/Play Balance Emerging Adulthood Empty Nest Syndrome Fertility Foster Care Gender Roles Grandparenting Home Health Care Homemaker In-Laws Later-Life Families Living Apart Together Living Together Apart “Mama’s Boy” and “Daddy’s Girl” Miscegenation Mothers in the Workforce Nature Versus Nurture Debate Nursing Homes Parental Controls Parental Supervision Parenting Parenting Styles Patriarchal Terrorism Preferential Treatment Prenatal Care and Pregnancy Problem Child Remarriage Same-Sex Marriage Self-Help, Culture of Separate Sphere Ideology Sibling Rivalry Soccer Moms Stay-at-Home Fathers Stepchildren Stepfamilies


Stepparenting Surrogacy Tabula Rasa Transgender Marriage Families in Mass Media Advertising and Commercials, Families in Cultural Stereotypes in Media Film, 1930s Film, 1940s Film, 1950s Film, 1960s Film, 1970s Film, 1980s Film, 1990s Film, 2000s Film, 2010s Film, Silent Gender Roles in Mass Media Magazines, Children’s Magazines, Women’s Radio: 1920 to 1930 Radio: 1931 to 1950 Radio: 1951 to 1970 Reality Television Television, 1940s Television, 1950s Television, 1960s Television, 1970s Television, 1980s Television, 1990s Television, 2000s Television, 2010s Television for Children Twenty-Four-Hour News Reporting and Effect on Families/Children Family Advocates and Organizations AARP Alan Guttmacher Institute American Association for Marriage and Family Therapy American Family Association American Family Therapy Academy American Home Economics Association Annie E. Casey Foundation Association of Family and Conciliation Courts Boy Scouts Camp Fire Girls


Center for Missing and Exploited Children Child Study Association of America Children’s Aid Society Children’s Bureau Children’s Defense Fund Children’s Rights Movement Cooperative Extension System Council on Contemporary Families Every Child Matters Family Medicine Family Research Council Family Service Association of America Focus on the Family Girl Scouts Groves Conference on Marriage and the Family Head Start MADD National Center on Child Abuse and Neglect National Center on Elder Abuse National Child Labor Committee National Council on Family Relations National Partnership for Women and Families Parent Effectiveness Training Parents as Teachers Parents Without Partners PFLAG Planned Parenthood PREP Program PREPARE/ENRICH Programs Sex Information and Education Council of the United States Smart Marriages Conferences Society for Research in Child Development Society for the Prevention of Cruelty to Children Stepfamily Association of America White House Conference on Families YMCA Family Law and Family Policy ADC/AFDC Adoption, Closed Adoption, Grandparents and Adoption, International Adoption, Lesbian, Gay, Bisexual, and Transgender People and Adoption, Mixed-Race Adoption, Open Adoption, Second Parents and Adoption, Single People and Adoption Laws

Alimony and Child Support AMBER Alert Assisted Reproduction Technology Baby M “Best Interests of the Child” Doctrine Brown v. Board of Education Child Advocate Child Care Child Health Insurance Child Safety Child Support Children’s Online Privacy Protection Act Children’s Television Act Civil Rights Act of 1964 Civil Unions Common Law Marriage Community Property Constitution, U.S. Covenant Marriage Curfews Custody and Guardianship Deadbeat Dads Defense of Marriage Act Domestic Partner Benefits DREAM Act Earned Income Tax Credit Equal Rights Amendment Estate Planning Estate Taxes Fair Labor Standards Act Family and Medical Leave Act Family Planning Fathers’ Rights Food Stamps Gay and Lesbian Marriage Laws Grandparents’ Rights Green Card Marriages Health Care Power of Attorney Health of American Families Higher Education Act Home Mortgage Deduction Homestead Act Housing Policy Immigration Policy Inheritance Tax/Death Tax Maternity Leaves Medicaid Medicare Merger Doctrine Million Man March



Moynihan Report National Affordable Housing Act No-Fault Divorce Office of Child Support Enforcement Palimony Parenting Plans Paternity Leaves Paternity Testing Poverty Line Power of Attorney Prenuptial Agreements Prohibition Pure Food and Drug Act of 1906 Roe v. Wade Sheppard-Towner Maternity and Infancy Protection Act of 1921 Student Loans/College Aid TANF Tender Years Doctrine Trusts Truth in Lending Act of 1968 Uniform Parentage Act War on Poverty Welfare Reform Wills Family Theories Attachment Theories Conflict Theory Constructionist and Poststructuralist Theories Ecological Theory Evolutionary Theories Family Development Theory Family Stress Theories Feminist Theory Functionalist Theory Life Course Perspective Psychoanalytic Theories Rational Choice Theory

Relational Dialectics Social Exchange Theory Symbolic Interaction Theory Systems Theory History of American Families Frontier Families Immigrant Families Industrial Revolution Families Midwestern Families Native American Families Rural Families Slave Families Social History of American Families: Colonial Era to 1776 Social History of American Families: 1777 to 1789 Social History of American Families: 1790 to 1850 Social History of American Families: 1851 to 1900 Social History of American Families: 1901 to 1920 Social History of American Families: 1921 to 1940 Social History of American Families: 1941 to 1960 Social History of American Families: 1961 to 1980 Social History of American Families: 1981 to 2000 Social History of American Families: 2001 to the Present Southern Families Southwestern Families Suburban Families Union Families Urban Families


About the Editors Marilyn J. Coleman, Ed.D., is a Curators’ Professor Emerita of human development and family studies at the University of Missouri (MU). Her research interests are primarily postdivorce relationships, especially remarriage and stepfamily relationships. She has coauthored eight books and has published well over 175 journal articles and book chapters. Coleman has won numerous national and campus awards for teaching, research, and service, including the First Annual MU Graduate Faculty Mentor Award, MU Alumnae Anniversary Award for Outstanding Contributions to the Education of Women, the UMC Faculty/Alumni Award, Lifetime Contribution Award by the Stepfamily Association of America, the Kansas State University Distinguished Service Award, Fellow of the National Council on Family Relations (NCFR), and the NCFR Felix Berardo Mentoring Award.


Lawrence H. Ganong, Ph.D., is a professor and co-chair of human development and family studies and a professor in the Sinclair School of Nursing at the University of Missouri. He has coauthored over 200 articles and book chapters as well as seven books, including Stepfamily Relationships (2004), Handbook of Contemporary Families (2004) with Marilyn Coleman, and Family Life in 20th Century America (2007), with Coleman and Kelly Warzinik. His primary research program has focused on postdivorce families, especially stepfamilies, and he is particularly interested in understanding how family members develop satisfying and effective relationships after structural transitions. Ganong earned a B.A. from Washburn University, master's degrees from Kansas State University and the University of Missouri, and a Ph.D. from the University of Missouri.

List of Contributors Jenna Stephenson Abetz University of Nebraska, Lincoln Ann-Marie Adams Fairfield University St. Clair P. Alexander Loma Linda University Zahra Alghafli Louisiana State University Katherine R. (Russell) Allen Virginia Tech University Kawika Allen Brigham Young University Carter Anderson Western Washington University Hanne Odlund Andersen Independent Scholars Y. Gavriel Ansara University of Surrey Joanne Ardovini Metropolitan College of New York Stepanie Armes University of Kentucky Veronica I. Arreola University of Illinois at Chicago Tiffany Ashton American University Chris Babits Teachers College, Columbia University Deborah Bailey Central Michigan University

Chasity Bailey-Fakhoury Grand Valley State University John Barnhill Independent Scholar Rebecca Barrett-Fox Arkansas State University Katie Marie Barrow Virginia Polytechnic Institute and State University Juandrea Bates University of Texas at Austin Deborah L. Bauer University of Central Florida Suzanne K. Becking Fort Hays State University Jayne R. Beilke Ball State University Rachel T. Beldner University of Wisconsin–Madison Marcia Malone Bell University of Kentucky Jacquelyn J. Benson University of Missouri Mark J. Benson Virginia Polytechnic Institute and State University Israel Berger University of Sydney James J. Berry University of Evansville



Amber Blair Georgia Southern University M. Blake Berryhill Kansas State University Kristyn Blackburn University of Kentucky Daniel Blaeuer Florida International University Sarah Jane Blithe University of Nevada, Reno Hannah B. Bloyd-Peshkin Knox College Christopher J. Blythe Florida State University Derek M. Bolen Angelo State University Stephanie E. Bor University of Utah Sarah E. Boslaugh Kennesaw State University Ronda L. Bowen Independent Scholar Jill R. Bowers University of Illinois at Urbana-Champaign Odette Boya Resta Johns Hopkins University Kay Bradford Utah State University Dawn O. Braithwaite University of Nebraska–Lincoln Shannon Brenneman Michigan State University Melanie E. Brewster Teachers College, Columbia University Bob Britten West Virginia University Greg Brooks  University of Missouri Edna Brown University of Connecticut Carol J. Bruess University of St. Thomas Maysa Budri Texas Woman’s University Kelly Campbell California State University, San Bernardino Gustavo Carlo University of Missouri–Columbia Bret E. Carroll California State University, Stanislaus

Alexandra Carter University of California, Los Angeles J. A. Carter University of Cincinnati Shannon Casey Alliant International University Kimberly Eberhardt Casteline Fordham University Raúl Medina Centeno University of Guadalajara Edward Chamberlain University of Washington, Tacoma Yiting Chang University of Vermont Ashton Chapman University of Missouri–Columbia Charles Cheesebrough National Council on Family Relations Gaowei Chen University of Hong Kong Emily R. Cheney Independent Scholar Laura Chilberg Black Hills State University Ming Ming Chiu State University of New York, Buffalo Amy M. Claridge Florida State University Pamela Clark University of Southern Mississippi Crystal Renee Clarke Loma Linda University Beverly Ann G. Clemons Loma Linda University Susan Cody-Rydzewski Georgia Perimeter College Amanda Coggeshall University of Missouri Jessica A. Cohen St. Mary’s University Aaron Samuel Cohn Saint Louis University Colleen Colaner University of Missouri Danielle Colborn Stanford University Lynn Comerford California State University, East Bay Luis Diego Conejo University of Missouri



Stacy Conner Kansas State University Stephen A. Conrad Indiana University Morgan E. Cooley Florida State University Bruce Covey Central Michigan University Carolyn Cowan University of California, Berkeley Philip Cowan University of California, Berkeley John Crouch Independent Scholar Annamaria Csizmadia University of Connecticut Ming Cui Florida State University Sarah Curtiss University of Illinois at Urbana-Champaign Kathy DeOrnellas Texas Woman’s University James I. Deutsch Smithsonian Institution David J. Diamond Alliant International University Evan Emmett Diehnelt University of Wisconsin–Madison Heather Dillaway Wayne State University Diana C. Direiter Lesley University David C. Dollahite Brigham Young University Karen L. Doneker Mancini Towson University Brigitte Dooley University of Kentucky Marina Dorian Alliant International University Allyson Drinkard Kent State University at Stark Len Drinkard U.S. Department of Labor Donna Duffy University of North Carolina, Greensboro Melanie L. Duncan University of Florida Jillian M. Duquaine-Watson University of Texas at Dallas

Benedetta Duramy Golden Gate University School of Law Meredith Eliassen San Francisco State University Kathleen L. Endres University of Akron Ashley Ermer University of Missouri Caitlin Faas Mount St. Mary’s University Raúl Fernández-Calienes St. Thomas University School of Law Isabella Ferrari University of Modena and Reggio Emilia Andrea M. Ferraro University of Akron Anthony Ferraro Florida State University Jessica Fish Florida State University Joel Fishman Duquesne University Jacki Fitzpatrick Texas Tech University Ana G. Flores Our Lady of the Lake University David Frederick Chapman University Laura M. Frey University of Kentucky Caren J. Frost University of Utah Dixie Gabalis Central Michigan University Kathleen M. Galvin Northwestern University Cayo Gamber George Washington University Chelsea L. Garneau Florida State University Stephen M. Gavazzi Ohio State University Rebecca L. Geiger California University of Pennsylvania Deborah Barnes Gentry Heartland Community College Michael D. Gillespie Eastern Illinois University Nerissa Gillum Texas Woman’s University


Betty J. Glass University of Nevada, Reno Abbie E. Goldberg Clark University Judith G. Gonyea Boston University Mellissa S. Gordon University of Delaware Loranel M. Graham Our Lady of the Lake University Heath A. Grames University of Southern Mississippi Helena Danielle Green University of Connecticut Glenda Griffin Sam Houston State University Hagai Gringarten St. Thomas University Brenda J. Guerrero Our Lady of the Lake University Linda Halgunseth University of Connecticut Kristin Haltinner University of Idaho Robert L. Hampel University of Delaware Jason D. Hans University of Kentucky Myrna A. Hant University of California, Los Angeles Brent Harger Albright College Victor Harris University of Florida Joy L. Hart University of Louisville Jaimee Hartenstein Kansas State University Nicholas Daniel Hartlep Illinois State University Ralph Hartsock University of North Texas Trevan G. Hatch Louisiana State University Cynthia Hawkins DeBose Stetson University College of Law Francis Frederick Hawley Western Carolina University Amber Nichole Hearn Loma Linda University

Lauren Heiman Texas Woman’s University Keri L. Heitner University of the Rockies Jason A. Helfer Knox College Jennifer C. Helgren University of the Pacific Jacqueline Henke Arkansas State University Kelsey Henke University of Pittsburgh Rosanna Hertz Wellesley College W. Jeff Hinton University of Southern Mississippi Donna Hancock Hoskins Bridgewater College Claire Houston Harvard University Robert Hughes, Jr. University of Illinois at Urbana-Champaign Andrea N. Hunt University of North Alabama Shann Hwa Hwang Texas Woman’s University Masako Ishii-Kuntz Ochanomizu University Anthony G. James Miami University of Ohio Juyoung Jang University of Minnesota J. Jacob Jenkins California State University, Channel Islands Michael Johnson Washington State University Glenda Jones Sam Houston State University Janice Elizabeth Jones Cardinal Stritch University Mark S. Joy University of Jamestown Michael Kalinowski University of New Hampshire Nazneen Kane College of Mount St. Joseph Debra Kawahara Alliant International University Spencer D. C. Keralis University of North Texas



Charissa Keup Independent Scholar Shenila Khoja-Moolji Teachers College, Columbia University Timothy S. Killian University of Arkansas Claire Kimberly University of Southern Mississippi Lori A. Kinkler Clark University Christopher Kline Westmoreland County Community College Nicholas Koberstein University of Connecticut Patrick Koetzle Georgetown University Law Center Erin Kostina-Ritchey Texas Tech University Jonathan M. Kremser Kutztown University of Pennsylvania Bill Kte’pi Independent Scholar Arielle Kuperberg University of North Carolina at Greensboro Cornelia C. Lambert University of Oklahoma Katherine Landry Sam Houston State University John J. Laukaitis North Park University Marcie Lechtenberg Kansas State University Andrew M. Ledbetter Texas Christian University Jessica Marie Lemke Niagara University Melinda A. Lemke University of Texas at Austin Lara Lengel Bowling Green State University Ashlie Lester University of Missouri Xiaohui Li University of Minnesota, Twin Cities Jennie Lightweis-Goff Tulane University Theresa Nicole Lindsay Texas Woman’s University Hui Liu Michigan State University

Sally A. Lloyd Miami University, Ohio Kim Lorber Ramapo College of New Jersey Gordon E. MacKinnon Rochester College Flor Leos Madero Angelo State University Sarah E. Malik University of Evansville Marie L Mallet Harvard University Louis Manfra University of Missouri Melinda Stafford Markham Kansas State University Salina Loren D. Marks Louisiana State University Michelle Martinez Sam Houston State University Erynn Masi de Casanova University of Cincinnati Chalandra Matrice Bryant University of Georgia Greg Matthews Washington State University Mahshid Mayar Bielefeld University Graham McCaulley University of Missouri Marta McClintock-Comeaux California University of Pennsylvania Melina McConatha Rosle West Chester University of Pennsylvania Samira Mehta Emory University Kelly Melekis University of Vermont Dixie Meyer Saint Louis University Monika Myers Arkansas State University Katharina Miko Vienna University of Economics Douglas Milford University of Illinois at Chicago Michelle Millard Wayne State University Margaret Miller Independent Scholar


Monica Miller-Smith University of Connecticut Cory Mills-Dick Goddard Riverside Community Center, New York Elissa Thomann Mitchell University of Illinois at Urbana-Champaign Sarah Mitchell University of Missouri Kelly Monaghan University of Florida Julia Moore University of Nebraska-Lincoln Mel Moore University of Northern Colorado Martha L. Morgan Alliant International University Danai S. Mupotsa University of the Witwatersrand Felicia Murray Texas Woman’s University Lorenda A. Naylor University of Baltimore Margaret Nelson Middlebury College Tara Newman Stephen F. Austin State University Tim Oblad Texas Tech University D. Lynn O’Brien Hallstein Boston University C. Rebecca Oldham Texas Tech University Winetta A. Oloo Loma Linda University Yok-Fong Paat University of Texas at El Paso Shari Paige Chapman University Kay Pasley Florida State University Michael Pawlikowski State University of New York, Buffalo Kelley J. Perkins University of Delaware Raymond E. Petren Florida State University Sarah L. Pierotti  University of Missouri

Elizabeth M. Pippert Independent Scholar Jennifer Burkett Pittman Ouachita Baptist University Mari Plikuhn University of Evansville Tyler Plogher University of Evansville Scott W. Plunkett California State University Northridge Pedro R. Portes University of Georgia Danielle Poynter University of Missouri Amber M. Preston California University of Pennsylvania Daniel J. Puhlman Florida State University Elizabeth Rholetter Purdy Independent Scholar Janice Kay Purk Mansfield University Karen D. Pyke University of California, Riverside Helénè Quanquin Université Sorbonne Nouvelle Mark R. Rank Washington University, St. Louis Alan Reifman Texas Tech University Jennifer S. Reinke University of Wisconsin–Stout Jon Reyhner Northern Arizona University Gabriella Reznowski Washington State University Wylene Rholetter Auburn University Jason Ribner Alliant International University Neil Ribner Alliant International University Amanda J. Rich York College of Pennsylvania Michele Hinton Riley St. Joseph’s College of Maine Barbara J. Risman University of Illinois at Chicago Rebecca Ruitto University of Connecticut



Amanda Rivas Our Lady of the Lake University Andrea Roach University of Missouri Daelynn R. Roach California University of Pennsylvania Kelly M. Roberts Oklahoma State University Dianna Rodriguez Rutgers University David J. Roof Ball State University Joy Rose Museum of Motherhood Lisa H. Rosen Texas Woman’s University Ariella Rotramel Connecticut College Brian Rouleau Texas A&M University Elisabetta Ruspini University of Milano-Bicocca Luke T. Russell University of Missouri Elizabeth Ryznar Harvard Medical School Margaret Ryznar Indiana University Robin C. Sager University of Evansville Erin Sahlstein Parcell University of Wisconsin, Milwaukee Stephanie Salerno Bowling Green State University Karin Sardadvar FORBA–Working Life Research Centre, Vienna Megha Sardana Columbia University Antoinette W. Satterfield U.S. Naval Academy Julia Sattler Technical University of Dortmund Jacob Sawyer Teachers College, Columbia University Hans C. Schmidt Pennsylvania State University–Brandywine Maria K. Schmidt Indiana University, Bloomington

Sarah Schmitt-Wilson Montana State University David G. Schramm University of Missouri Stephen T. Schroth Knox College Michaela Schulze Universität Siegen Shannon Scott Texas Woman’s University Kelli Shapiro Texas State University Constance L. Shehan University of Florida Karen Shephard University of Pittsburgh Aya Shigeto Nova Southeastern University Morgan Shipley Michigan State University Sara Denise Shreve University of Iowa Julie Ahmad Siddique William Paterson University Leslie Gordon Simons Arizona State University Deborah M. Sims University of Southern California Christina A. Simmons University of Georgia Skultip Sirikantraporn Alliant International University Brent C. Sleasman Gannon University Kristy L. Slominski University of California, Santa Barbara Malcolm Smith University of New Hampshire Christy Jo Snider Berry College Catherine Solheim University of Minnesota at Twin Cities Christina Squires University of Missouri Wade Stewart Utah State University Sandra Stith Kansas State University Jason Stohler University of California, Santa Barbara


Lisa Strohschein University of Alberta Katherine Scott Sturdevant Pikes Peak Community College Omar Swartz University of Colorado, Denver Sarah L. Swedberg Colorado Mesa University Marilyn E. Swisher University of Florida Whitney Szmodis Lehigh University Aileen Tareg Yap Comprehensive Cancer Program Ken B. Taylor New Orleans Baptist Theological Seminary Jay Teachman Western Washington University Lucky Tedrow Western Washington University Alice K. Thomas Howard University Joel Touchet University of Louisiana Juliana Maria D. Trammel Savannah State University Bahira Sherif Trask University of Delaware Elizabeth Trejos-Castillo Texas Tech University Jessica Troilo West Virginia University Kristin Turney University of California, Irvine Kourtney T. Vaillancourt New Mexico State University Zach Valdes Sam Houston State University Kristen Van Ness University of Connecticut Chris Vanderwees Carleton University H. Luis Vargas University of the Rockies Esperanza Vargas Jiménez University of Guadalajara Michael Voltaire Nova Southeastern University Kimberly Voss University of Central Florida

John Walsh Shinawatra University Yuanxin Wang Temple University Kelly A. Warzinik University of Missouri Shannon E. Weaver University of Connecticut Lynne M. Webb Florida International University Kip A. Wedel Bethel College Adele Weiner Metropolitan College of New York Robert S. Weisskirch California State University, Monterey Bay Brenda Wilhelm Colorado Mesa University Keira Williams Texas Tech University Samantha Williams California State University, Stanislaus Bethany Willis Hepp Towson University Michael Wilson Arkansas State University Laura Winn Florida Atlantic University Rachel Winslow Westmont College Cindy Winter National Council on Family Relations Armeda Wojciak Florida State University Rachel Lee Wright Eastern Washington University Deniz Yucel William Paterson University of New Jersey Hye-Jung Yun Florida State University James M. Zubatsky University of Minnesota Andrew Zumwalt University of Missouri Anisa Zvonkovic Virginia Polytechnic Institute and State University

Introduction

Over the past few years there have been rousing debates among social critics and cultural commentators about the status of the American family and its future. On one side of this debate are those claiming the American family is in decline. These critics point to demographic statistics regarding increased rates of unmarried parenthood, divorce, cohabitation, and lowered rates of marriage and births to married parents as evidence that families are in trouble. Technological changes that allow infertile individuals and couples to rear children, policy changes that permit gay and lesbian couples to legally marry, and societal shifts in gender role expectations for mothers and fathers are also decried by some social commentators as signs of family decline and deviance. On the other side of the debate are those who argue that these transitions do not mean that American families are troubled and headed for a bleak future. Instead, they assert that families are doing relatively well and that it is a narrow and static view of family life that is in decline and not families themselves. These observers see family diversity rather than deviance, and family adaptation rather than decay. Twenty-first-century families are more diverse than families in the past, the argument goes, because the world is more complex, and family members have had to adapt to their changing social environments. Moreover, families have always been more diverse than

cultural ideologies have portrayed; the Standard North American Family (SNAF), a self-contained nuclear family consisting of a mother who takes care of the children and the home, a father who works outside the home as the major or only breadwinner, and one or more children sharing a household, has always been part of American social history, but this family form has not, in contrast to some cultural stereotypes, been the only American family form. The SNAF is both heterosexual and patriarchal in nature and produces stigma and marginalization of those in other family types—and yet other family types are multiplying rapidly and flourishing. It seems like a good time to examine the social history of the family: Where have we been? Where are we now? In fact, the nuclear family has not always been the dominant family structure in U.S. history. Prior to the 20th century, households were not only places to live—they were places in which the family “business” was conducted. Households often contained more than one generation, and it was not uncommon for unrelated individuals such as household servants, other workers, apprentices, and boarders to live with families. Living quarters were cramped and privacy was limited. It has only been in the past 100 years or so that most family households became private enclaves of related individuals. During the same period, family members’ employment increasingly moved


outside the household, and homes became centers of family life, with work, leisure, and other activities conducted elsewhere. At the root of the debate about the present and future status of American families is poor understanding about how families have functioned in the past. It is hard to determine whether families are declining, holding their own, or thriving without an understanding of what families and family life was like in the past. Families have changed, but how have they changed, and why? The purpose of this encyclopedia, The Social History of the American Family: An Encyclopedia, is to address these questions. In this volume, a diverse array of authors examine how families and family life have changed, paying attention not only to changes within families but also examining the social and historical contexts for those changes. Families do not exist in a vacuum, and historical and social phenomena have influenced families just as they have been affected by families. As editors of this encyclopedia, we have attempted to include aspects of the social history of American families that have the greatest relevance for understanding how families and family members have been shaped over time. American families have been affected by (a) demographic and population shifts; (b) changes in the economy, work, and leisure; (c) educational, cultural, and social movements; (d) advances in technology and science; (e) “great” events such as wars; and (f) evolving societal norms. In this encyclopedia we have tried to be exhaustively inclusive about all of these contextual factors as they have related to family functioning. Families in turn have affected American society and its many social institutions. As a fundamental unit of society, families exert powerful influences on virtually every aspect of the culture. Consequently, we have tried to comprehensively include encyclopedia entries that reflect these family impacts on society. 
In this volume, we have attempted to help readers get a sense of the profound societal and familial changes that have occurred in the nearly 400 years that an American culture has existed. For instance, if an American family from 1700 could be transported in a time machine to the present, they would be amazed at the differences in how family members spend their days now compared to the colonial era. They

would be stunned to see everyone leave the family home to go to work or school because in their day, work, school, and home were in the same place. The technology available to present-day families would shock colonial Americans. They might envy the ease with which meals are prepared and be fascinated by the media links to the rest of the world via television, personal computers, smartphones, and other gizmos that would likely seem magical to them. On the other hand, they might be surprised at how little time present-day family members spend together, with work, school, and even electronic gadgets separating family members from each other. Colonial Americans might be surprised to find many household chores (e.g., cooking, cleaning, making household repairs) being outsourced to professionals. These visitors from 1700 might quickly learn to like the ease of modern life, but they might also wonder why family members seem to be so stressed. These visitors from another time would need to know about the many things that have happened since their era to be able to understand why families in the 21st century function as they do. This encyclopedia would supply that information.

Families Are Hard to Study

Families are hard to study, whether one is studying how families are in the present or how families were in the past. One reason that families are difficult to study is because nearly everyone has personal experience in a family unit or two (e.g., the one they grew up in and perhaps a family they formed as an adult), and nearly everyone has ideas about how other families live from personal observations (e.g., hanging out at the neighbor’s house) or from media portrayals (e.g., families on television, in movies). In short, because families and images of families are common, it strikes people that there is little to be learned by the formal study of family life.
Arlene Skolnick referred to this stance as “pluralistic ignorance”—we know a lot about one or two families and erroneously assume this makes us an expert on families in general. Scientists call this overgeneralizing from a nonrepresentative sample—as “naïve scientists,” most people “gather data” from their families of origin and perhaps other family units and draw conclusions about all families. Often people think that the way their families thought and felt and



the ways their family members interacted with each other and with outsiders were how all families functioned. Consequently, students often see little point in studying families—what is there to learn that we don’t already know? Another challenging aspect of studying families is that virtually every dimension of family life is value laden, and people tend to hold very strong values and beliefs about families. These personal values interfere with being able to examine family life thoroughly and clearly. Put another way, our “should’s get in the way of the is’s.” That is, our personal beliefs about what we think mothers, fathers, spouses, children, and other family members should be doing influence the methods by which evidence regarding these behaviors is gathered, analyzed, and interpreted. As the philosopher Ashleigh Brilliant once wrote, “Seeing is believing; I wouldn’t have seen it if I didn’t believe it.” Scientists refer to this as a bias toward hypothesis-confirming evidence—meaning that individuals see what they expect to see more easily than they see evidence that does not confirm preexisting biases, beliefs, or hypotheses. For instance, if a scholar strongly believed that married-couple families were the only effective type of families in which to raise children, he or she might seek to find evidence to support that belief and ignore, or at least downplay, other information that did not fit with this value stance. We do not suggest that most family scholars purposefully let their values shade or shape their conclusions. However, we are saying (this is a strongly held value of ours) that personal values affect (a) what gets studied in family scholarship, (b) the questions that are raised, (c) the data that are gathered, and (d) the interpretation of the data. Consequently, the study of families has a different emotional valence for most people than does, say, studying frog behavior or the history of transportation.
We do not suggest that frog scholars and transportation historians do not hold intense passion and strong beliefs about their work, but we do suggest that the emotional values related to family study hit closer to home (pun intended) for most people, and most individuals seem to have vested interests in topics related to family life. One family value shared by many in the United States is the belief that family behavior is private. Because this value is widely held in our culture,


this, too, makes the study of families challenging and difficult. Making family business off-limits to outsiders means that scholars have had a hard time getting at the unvarnished reality of what sociologist Erving Goffman termed “backstage behavior” in family life. Some subject matter is nearly impossible to observe directly, such as sexual behavior, and so scholars must rely on self-reports or other types of evidence, often from secondary sources. Other aspects of family life that are nearly impossible to investigate in vivo, such as marital decision making, force scientists to devise laboratory settings and self-report methods to try to capture as closely as possible what goes on in the privacy of family households. Because we value family privacy, there are limits to what can be asked of people because a lot of areas of family life are “none of your business” if you are an outsider—family members either lie or refuse to answer questions that are too personal and private. It is not hard to see how family privacy feeds into pluralistic ignorance—the tendency to overrely on our own family experiences to generalize about all families. However, it should be noted that there is also a privacy value within families—children rarely know all of the backstage behavior of their parents’ relationship together, for instance, and parents are unaware of all that transpires between their children when they are not around. This also makes family study complicated. In addition, each family member experiences family life uniquely. Decades ago Jessie Bernard wrote a classic treatise in which she argued for the perspective that each marriage actually consisted of two marriages: his marriage and her marriage. Her point could be broadened to all of family life: every relationship in a family is viewed and experienced differently by each participant in that relationship.
This makes family study a tricky endeavor because we must pay careful attention to who is being studied as we draw conclusions— mothers might share different truths about child rearing than fathers, and they both will likely see things in a different light than children. This may seem like common sense, but this point has often been ignored by researchers, who often have relied on women as family informants without recognizing that mothers and wives might have


divergent views from fathers and husbands about issues such as marital power or child rearing. Not surprisingly, outsiders see a family through a dissimilar lens than family members do. Family members obviously have access to a lot more family information than outsiders do, but sometimes outsiders notice things that members of the family cannot because they are not members of the family rule system that dictates family behaviors and interactions that family members can and cannot attend to or even know about consciously. The divergence of insiders’ and outsiders’ views presents scholars with several dilemmas, such as what do the divergent views mean and how does one sort out the truth from multiple “truths.” All of these challenges to studying family life apply to examining the history of families as well as of current family dynamics. In addition, there is a belief, related to the ubiquitous nature of families and the often strongly held values about how families should function, that families are timeless. This perspective renders obsolete the historical analyses of families—what would be the point? Americans have been criticized as being uninterested in history in general. When families are the topic, the lack of interest by Americans may be due to the widespread belief that there are essential aspects of families that are unchanged over time. For example, the roles of fathers as primary breadwinners and mothers as primary parents are seen by many as inherent in the biology and psychology of men and women, and, as a result, are essentially unaltered across history. This “essentialist” position leads people to ignore the effects of economics, historical events, and sociocultural proceedings on families. It also ignores the ways in which family members react to external events (e.g., creating solutions to social and environmental problems) over time. Such external phenomena are not relevant because families and family roles are enduring and unchanging. 
This ignoring of social and historical contexts for families allows people to talk about “traditional” families and “traditional” family values without irony and without recognition of what is left out of this picture; such language implicitly insinuates that families are timeless, and that what we observe and believe to be true about families today is the way it always has been.

This perspective of a timeless family life also ignores the diversity of family experiences, even within the same historical time period. When context is seen as irrelevant, then the racial, ethnic, and economic diversities of family experiences are overlooked. Unfortunately, for a variety of reasons this context-free orientation to the study of families has been widespread over the years, so much of what we know about families in the United States has been based on white, middle-class families. This is a problem for a multicultural society in which there is a great deal of racial and ethnic diversity. Knowing about one segment of society does not necessarily mean that you can generalize findings to society as a whole. Capturing the diversity of families is a constant challenge to family researchers and is particularly acute when looking at families in the past because there may have been little interest in earlier periods in preserving evidence about certain minority groups. A final reason why family study is so daunting has to do with how family scholarship is evaluated by the public. If family scholars report findings that reflect what a layperson believes to be true about families, they often are greeted with this response: “Of course you found this result—everyone knows that is true. You are just confirming commonsense wisdom.” On the other hand, if the results of family scholarship tweak conventional wisdom or refute what is widely believed to be true, different responses are likely. The most common reaction is disbelief: “No way! That is not how we did things in my family. This can’t be correct.” Another frequent response when findings are at odds with an individual’s firmly held values is “Those researchers are ideologues who have bent their evidence to show what they want to find.
I don’t believe these results because these researchers are [fill in the blank—liberals, conservatives, atheists].” We have seen college students and adults attending public lectures about families become visibly anxious and even angry when they are told the most simple, straightforward facts about families and family life (e.g., demographic data collected by the Census Bureau or other federal agencies) when those facts threaten their beliefs in some way. Once we were accused of “supporting” divorce because we shared divorce statistics with an audience as part of a talk on



remarriage. It is easy to irritate people by studying families. In summary, families are hard to study because researchers have to (1) examine phenomena that are at once both extremely familiar and unknown and private; (2) explore topics about which people feel so strongly they are moved to emotional reactions with little provocation and yet are reluctant to share what they do with outsiders; (3) sort out often conflicting evidence from multiple family members; (4) make sense of divergent views between themselves and family members; (5) include external influences on family phenomena; (6) recognize the diversity of family experiences due to race, ethnicity, social class, and other social statuses; and (7) be cognizant of how changes over time in society affect family functioning. Studying families is complicated. Nuanced results are often unappreciated because they diverge from perceived truths and personal values.

Families Are Important to Study

Family study is fascinating, partly due to the challenges families present as a scholarly phenomenon. Approached with an open and questioning mind, it is easy to be surprised by what families really do (versus what we think they do or think they should do). If individuals can set aside the notion that we know all we need to know about families, and that families are all the same (except that maybe our own family is better or worse than most), then family scholarship can be eye-opening. Are family feelings, behaviors, and interactions timeless and unchanging? Or have some dimensions of family life changed drastically over time while other dimensions of family life remain constant? What have been the major societal influences on families? How have families changed society? What can we learn about families of the present and future by studying the families of the past? We can think of many questions about changes and continuities in American families.
We think the answers can help us to understand contemporary families better and potentially to aid in addressing future family problems. Given that this is a book about the history of American families, it should be clear that we


are not going to ignore the historical context in which families live and work. In fact, changes and continuities in American families will be at the forefront of these encyclopedia entries. We also have attempted to include articles related to what is known about other contexts that affect families—racial, ethnic, social class, geographic, and other diversities are presented. We also have taken care not to present families as systems that only react to social and historical forces. Instead, entries examine the ways in which family members and families have proactively and strategically attempted to survive and thrive as they encountered such social factors as changes in the economy, wars, and shifting norms regarding men and women.

We tried to keep writers’ values as muted as possible. We tried to include entries that explain changes and continuities in families over the century. No doubt what we chose to include reflects in part our interests and values; other editors might have selected different family issues to explore, or they might have asked writers to cover the same issues in different ways.

We are family social scientists with acute interests in and appreciation of history, but we are not historians. The entries in this encyclopedia were written by individuals from many academic disciplines, including, of course, history. It is likely that our interpretations of historians’ scholarship are not the same as they would be if we had shared their academic training—each discipline has its own peculiar epistemic values about research and the acquisition of knowledge. No doubt we share some of the scholarly values of family historians, but we also doubtlessly have been educated to sift through evidence using methods that historians would not employ, and we see things through the eyes of scholars who have spent most of their lives studying contemporary families, not families of the past. The reader will have to judge whether this is a strength or a weakness.

Marilyn J. Coleman
Lawrence H. Ganong
University of Missouri

Chronology

1631: A Massachusetts law prescribes the death penalty for adultery, which is defined as a man and a married woman engaging in sexual relations.

1639: The first divorce is granted in colonial America, on the grounds of bigamy, at a time when divorce is available in England only through an act of Parliament.

1660: The common punishments for adultery in New England include fines, public whippings, and the requirement to wear initials proclaiming oneself as an adulterer (as immortalized in Nathaniel Hawthorne’s 1850 novel The Scarlet Letter).

1764: The Pennsylvania Supreme Court decision in Davey v. Turner affirms the joint deed sale system of conveyance, which requires that the property a married woman brings into a marriage cannot be sold without her consent.

1773: Massachusetts expands the grounds for divorce to include male, as well as female, adultery.

1774: Mother Ann Lee, founder of the United Society of Believers in Christ’s Second Appearing, better known as the Shakers, moves to America. The Shakers, who practice celibacy and communal living, establish several colonies in the United States; they acquire new members by adopting orphans and by adult converts.

1785: Pennsylvania passes a law allowing divorce on grounds that include desertion, adultery, and bigamy, and also allowing women to apply for separation on the grounds of misconduct and cruelty.

1800: According to U.S. census records, the birthrate for white Americans is 55 per 1,000 population, a rate that will fall steadily throughout the century to 31.5 by 1890 and 30.1 by 1900.

1800: The average family in the United States has 7 children, a number that will decline to 3.5 children by 1900.

1800: Every New England state, as well as New York, New Jersey, and Tennessee, has a law allowing divorce, but the southern states do not.

1803: The first divorce is granted in the state of Virginia, based on a wife’s infidelity with a slave.

1804: Ohio allows divorce on the grounds of desertion, bigamy, extreme cruelty, and adultery; by mid-century, the possible grounds are broadened to include drunkenness, gross neglect, and fraudulent contract.


1821: Connecticut becomes the first U.S. state to pass legislation restricting abortion.

1824: Indiana allows divorce on any grounds that a court finds reasonable and just; this relaxed standard makes Indiana a popular destination for people unable to get a divorce in their home state.

ca. 1840: U.S. states begin overturning aspects of the feme covert principle and allow married women some rights (e.g., controlling their own property) previously denied them.

1846: Founding of the Oneida Community in New York State, where complex marriage is practiced from 1846 to 1879; women and men belonging to the community are married not to an individual but to the entire group, and can change sexual partners at will.

1848: At the Seneca Falls Convention in New York State, Elizabeth Cady Stanton and Lucretia Mott present their Declaration of Sentiments and Resolutions, stating their demands for women’s equality in language modeled on the Declaration of Independence.

1850: Life expectancy at birth in the United States is 40.4 years for white males and 42.9 years for white females; by 1890, this increases only slightly, to 42.5 years for white males and 44.5 years for white females.

1850: Founding of the Female Medical College of Pennsylvania, later the Women’s Medical College of Pennsylvania, the first medical school founded specifically to train women as physicians.

1851: Massachusetts passes the Adoption of Children Act, the first modern adoption law that prioritizes the interests of the child over those of adults.

1852: The first day nursery is opened in New York City to care for the children of women who need to work to support their families.

1853: Elizabeth Blackwell, the first woman in the United States to receive a medical degree, founds the New York Dispensary for Poor Women and Children.

1854: The first of the orphan trains departs; the trains, which continue until 1929, transport a quarter of a million children from eastern states to the Midwest, the western United States, Canada, and Mexico. The orphan trains are initially organized by the New York Children’s Aid Society and are intended to remove orphans and immigrant children from urban environments and place them with farming families.

1860: Twenty U.S. states and territories have laws restricting abortion.

1860: The U.S. census finds that 33,149 Chinese men live in the United States, but only 1,784 Chinese women. This gender imbalance encourages the development of systems of prostitution, and in 1860 an estimated 85 percent of Chinese women living in San Francisco are indentured servants. Many are coerced into prostitution.

1861: The Woman’s Hospital of Philadelphia begins accepting patients; it treats women and children, and also provides care through a dispensary and home visits.

1862: The Homestead Act promotes migration to the western United States by allowing individuals and families to take possession of 160 acres of land upon payment of a small filing fee.

1862: The Morrill Anti-Bigamy Act outlaws plural marriage in U.S. territories, a law clearly aimed at Mormons then settling in the Utah Territory.

1865: A woman’s right to maintain ownership and control of her property after marriage is recognized in 29 states.

1868: Massachusetts begins “placing out” children, that is, paying for families to take care of orphan or foster children, with regular visits from a state official.

1870: The rate of divorce in the United States is 1.5 per 1,000 marriages, a statistic that will rise to 4 in 1,000 by 1900.



1870s: Feminists in the United States begin to advocate “voluntary motherhood,” including female control over both sexual activity and motherhood.

1872: Formation of the New York State Charities Aid Association, one of the first child placement programs in the United States.

1873: Passage of the Comstock Law, named after Anthony Comstock, which prohibits sending obscene materials, including information about birth control, through the U.S. mail.

1874: The New York Society for the Prevention of Cruelty to Children is founded in New York City by Henry Bergh, Elbridge Gerry, and John D. Wright; it is the first child protective agency in the world.

1876: The National Woman Suffrage Association presents the Declaration of Rights of Women on July 4, the U.S. centennial, in Philadelphia.

ca. 1880: The term date comes to be used in American English in the modern sense, as a social meeting between a man and a woman in a public place, with at least overtones of courtship; dating in this sense does not become common among the middle classes for several more decades.

1882: Polygamy becomes a felony following passage of the Edmunds Act.

1883: The British scientist Sir Francis Galton coins the term eugenics, meaning selective breeding to improve the human race; Galton is interested in applying the principle of natural selection, discussed in Charles Darwin’s On the Origin of Species, to human beings.

1890: The median age at first marriage in the United States is 22 years for females and 26 for males.

1892: The International Kindergarten Union (IKU) is founded by Sarah Stewart to prepare an exhibit for the 1893 World’s Columbian Exposition in Chicago and to promote kindergarten in the United States.


1893: Lillian Wald and Mary Brewster found the Henry Street Settlement in New York City to provide home nursing care and improve living conditions for the poor; they later become involved in promoting educational and cultural opportunities as well.

1898: In New York City, the St. Vincent de Paul Society establishes the Catholic Home Bureau to place children in homes rather than orphanages; other cities soon adopt this model as well.

1900: According to U.S. census records, the birthrate for white Americans is 30.1 per 1,000 population; this rate will fall steadily across the decades to 18.6 per 1,000 in 1940, then increase to 23 per 1,000 in the baby boom following World War II.

1900–02: Life expectancy at birth in the United States is 32.5 years for African American males and 35 years for African American females, much lower than for white males (48.2 years) and white females (44.5 years).

1904: John Harvey Kellogg creates the Race Betterment Foundation in Michigan to promote the ideas of eugenics; just six years later, Charles Benedict Davenport creates the Eugenics Record Office at Cold Spring Harbor in New York State.

1907: Indiana becomes the first U.S. state to pass an involuntary sterilization law, aimed at “undesirables” such as the mentally retarded, insane, and sex offenders; by 1935, 26 states will pass similar laws.

1909: Ellen Key publishes The Century of the Child, arguing that women have a particular gift for working with children and also popularizing modern child-rearing practices.

1912: Congress establishes the U.S. Children’s Bureau, which plays a key role in developing adoption regulations, as well as conducting campaigns against child labor and to reduce infant mortality; it is also the first federal agency to be headed by a woman, Julia Lathrop.

1915: The maternal mortality rate in the United States is 607.9 maternal deaths per 100,000 live


births, a rate that will be reduced to 12.7 deaths per 100,000 live births by 2007.

1917: Margaret Sanger founds the Birth Control Review, a publication promoting the use of birth control; it continues publication until 1940.

1917: Minnesota passes a law mandating that adoption records be kept confidential; most other states also adopt this practice by the 1940s.

1921: In New York City, Margaret Sanger founds the American Birth Control League (ABCL) during the first American Birth Control Conference; the ABCL quickly becomes the largest birth control organization in the United States and works to make birth control available to women who want it.

1921: The Child Welfare League of America is founded as a federation of about 70 organizations providing services to children.

1923: Margaret Sanger founds the Birth Control Clinical Research Bureau (BCCRB) in New York City, a clinic run by physicians and providing a wide range of services, including marriage counseling, birth control, and gynecological exams.

1925: The diaphragm, a barrier method of birth control, begins to be manufactured in the United States.

1932: Nevada law allows an individual to qualify as a state resident after six weeks, at which time he/she becomes eligible to file for divorce under the relatively liberal laws of the state. Because divorce is still difficult to obtain in many U.S. states, “divorce tourism” becomes popular, as many temporarily move to Nevada specifically for the purpose of gaining a divorce.

1935: Passage of the federal Social Security Act; Title V of this act includes a program of block grants from the federal government to the states to improve maternal and child health.

1935: Infant mortality in the United States is 55.7 per 1,000 live births, a rate that will decrease to 6.8 per 1,000 by 2007. However, African American infants have a higher rate of mortality, and a slower decline, than white infants: in 1935, the infant mortality rate for African Americans is 81.9 per 1,000, which declines to 13.2 per 1,000 by 2007, an average annual decrease of 2.6 percent; in contrast, the white infant mortality rate is 51.9 per 1,000 in 1935 and 5.6 per 1,000 in 2007, an average decline of 3.2 percent per year.

1937: The American Medical Association endorses the use of birth control, and North Carolina becomes the first U.S. state to provide birth control through a public health program.

1938: In Philadelphia, Marian Stubbs Thomas and a group of African American women found Jack & Jill of America, a social organization intended to provide a way for middle-class African American children to socialize with each other at a time when they are not allowed to socialize with white children of similar social standing.

1939: Sophie van Senden Theis, who previously published the first major outcome study in adoption, publishes The Chosen Baby, a book intended to help adoptive parents explain adoption to their children.

1939: E. Franklin Frazier, an African American sociologist, publishes The Negro Family in the United States, arguing that the heritage of slavery is a cause of what he sees as the current disordered state of African American families (e.g., poverty, absent fatherhood).

1942: The federal government establishes the Lanham Day Care Centers in 42 states in order to care for the children of women working in war industries; the centers are closed in 1946.

1946: A group of African American women in Philadelphia found The Links, Inc., a social and civil rights organization focused on providing opportunities for African American families and young people, and cooperating with other civil rights organizations.

1946: Dr. Benjamin Spock publishes Baby and Child Care, heavily influencing norms of



mothering and child rearing in the United States. One of Spock’s opinions, as expressed in this book, is the belief that women should organize their lives around their children.

1948: Alfred Kinsey and colleagues publish Sexual Behavior in the Human Male, the first “Kinsey Report,” based on extensive interviews with American men; it reveals that the actual sexual behavior of men, including married men, is far different from the ideal of monogamy, and that a surprising proportion of men who consider themselves heterosexual have also had homosexual experiences in adulthood.

1952–53: Television star Lucille Ball continues acting in I Love Lucy while pregnant; though not the first television story line to include a pregnancy (which is referred to on the show only by euphemisms), it is notable because of the popularity of I Love Lucy.

1953: Alfred Kinsey and colleagues publish Sexual Behavior in the Human Female, the second “Kinsey Report,” based on extensive interviews with American women; among the revelations is that large proportions of women have had adulterous affairs, and nearly 20 percent have had lesbian relationships.

1953–58: The National Urban League Foster Care and Adoptions Project conducts a national effort to find adoptive homes for African American children.

1957: The peak year of the post–World War II baby boom in the United States; 4.3 million children are born this year in the United States.

1958: First publication of Standards for Adoption Service by the Child Welfare League of America, with recommendations for legal and social work practice on issues such as confidentiality and matching.

1958–67: The Indian Adoption Project, conducted by the Child Welfare League of America and funded by the federal government, places almost 400 Native American children with white families at a time when the principle of matching


(placing adoptive children with families similar in religion, race, and so forth, to their birth parents) dominates adoption practice.

1960: Two new methods of birth control, the IUD (intrauterine device) and the birth control pill, are both approved by the Food and Drug Administration.

1960: According to the U.S. Census Bureau, a female head of household is the primary or sole source of income in just 11 percent of households with children under the age of 18; by 2011, this will increase to 40 percent.

1960: Psychiatrist Marshall Schechter publishes “Observations on Adopted Children,” claiming that adopted children are far more likely than children raised by their parents to suffer from a variety of emotional problems; his research is challenged on the basis that it relies entirely on patients in his practice rather than a nationally representative sample.

1963: Betty Friedan publishes The Feminine Mystique, bringing attention to the limitations imposed on college-educated women, who are expected to marry and focus their attention on their homes and families, leaving their intellectual and career interests behind.

1965: Daniel Patrick Moynihan, Paul Barton, and Ellen Broderick publish The Negro Family: The Case for National Action, a book assessing African American households by the normative standards of white families and finding them wanting; the heritage of slavery is one reason offered by the authors for this state of affairs.

1965: All U.S. states have laws restricting abortion, although some allow therapeutic abortions (i.e., to save the life of the mother).

1965: Title XIX of the federal Social Security Act of 1965 creates the Medicaid program, a federal–state partnership providing health insurance for low-income individuals, including many children.

1965: The Los Angeles Bureau of Adoptions begins an outreach program to encourage single


parents to adopt children; the initial focus is finding African American adoptive parents for African American children, and over the next two years, 40 children are placed with single parents.

1965: In Griswold v. Connecticut, the U.S. Supreme Court declares unconstitutional a Connecticut law prohibiting married couples from using contraception.

1965: The Immigration and Naturalization Act Amendments of 1965, also known as the Hart-Celler Act, abolishes the national origins formula for immigration to the United States and favors immigrants with family ties or valuable skills.

1966: Historian Barbara Welter publishes “The Cult of True Womanhood: 1820–1860” in the American Quarterly, analyzing the image of women as presented in religious literature and women’s magazines in the first half of the 19th century. Welter argues that these cultural forces created a sort of social control in which women in this period were expected to marry and remain at home, to submit to their husbands, and to act as moral guardians of their children.

1967: The U.S. Supreme Court decision in Loving v. Virginia overturns state laws barring interracial marriage; the case is brought by Mildred and Richard Loving, an interracial married couple convicted of violating Virginia’s 1924 Racial Integrity Act.

1968: Theologian Mary Daly publishes The Church and the Second Sex, arguing that the Catholic Church is a patriarchal institution that systematically kept women from being able to be full participants in society; this book is so controversial that it almost keeps her from gaining tenure at Boston College.

1968: Promulgation of the Uniform Child Custody Jurisdiction Act, which is adopted by all 50 states by 1980; among other provisions, it requires that a state must honor a custody order issued in another state except under specific circumstances.

1970: According to the U.S. Census Bureau, 40 percent of households in the United States consist

of a married couple with their own children under the age of 18; this percentage will decline to 31 percent by 1980, and 26 percent by 1990.

1970: About 175,000 children are adopted in the United States, the most since accurate records began being kept after World War II.

1970: The Family Planning Services and Population Research Act provides federal funding for family planning services; in 1972, Medicaid is authorized to provide family planning services as well.

1970: The marriage rate in the United States is 10.6 per 1,000, and the divorce rate is 3.5 per 1,000.

1972: The sociologist Robert B. Hill publishes The Strengths of Black Families, arguing that differences between African American and normative white families are not necessarily inferiorities but could be strengths; among the examples he offers are religious commitment, extended kinship ties, and adaptive family roles.

1972–78: The television series Maude, created by Norman Lear and starring Bea Arthur and Bill Macy, airs on CBS; the series includes a story line about abortion as well as a lead character who has been divorced multiple times.

1973: The U.S. Supreme Court, in Roe v. Wade, strikes down all state laws restricting abortion during the first three months (first trimester) of pregnancy, and limits the states’ right to restrict abortion between the first trimester and the time that a fetus becomes viable.

1973: The National Center for Health Statistics conducts the first National Survey of Family Growth, interviewing a national sample of women ages 15 to 44 years and gathering information on topics such as maternal and infant health, contraceptive use, infertility, marriage, and divorce.

1973: Joseph Goldstein, Anna Freud, and Albert J. Solnit publish Beyond the Best Interests of the Child, arguing for the importance to children of continuity in nurturing relationships and




permanent decisions regarding custody in the case of divorce.

1974: Enactment of the Child Abuse Prevention and Treatment Act (CAPTA), the key federal legislation regarding child abuse and neglect in the United States; CAPTA was most recently reauthorized and amended in 2010.

1977: The television miniseries Roots, based on a book by Alex Haley, dramatizes the story of African Americans in the United States through the ancestry of one African slave captured in the 1700s and running up to the current day. The series draws attention to the continuity of the African American experience and the importance of the family; it also motivates many Americans, African American or otherwise, to begin researching their own genealogy.

1977: British psychologist Penelope Leach publishes Your Baby and Child: From Birth to Age Five; it becomes a best seller, reassuring mothers that their own feelings and their observations of their own child will help them make the right parenting decisions.

1978: A survey conducted with a national probability sample of white women born between 1901 and 1910 who had been married at least once finds that 71 percent report using contraception, with the most popular methods being condoms (54 percent), contraceptive douche (47 percent), withdrawal (45 percent), and rhythm (24 percent).

1978: The Indian Child Welfare Act of 1978 prohibits the unnecessary removal of Native American children from their families and requires that those removed be placed in homes recognizing Indian cultural values.

1979–80: The first administration of the National Incidence Study (NIS), a congressionally mandated survey of the incidence of child abuse and neglect in the United States.

1980: Ronald Reagan is elected president of the United States. A member of the Republican Party, his campaign emphasizes “family values,” despite the fact that Reagan is the first (and still the only) president to be divorced.

1982: Scott Thorson, a former live-in partner of Liberace, sues Liberace for palimony; Thorson receives a relatively small settlement of $75,000.

1983: The U.S. Supreme Court, in City of Akron v. Akron Center for Reproductive Health, declares unconstitutional legislation passed in Akron, Ohio, that places several restrictions on a woman’s ability to obtain an abortion, including requiring a 24-hour waiting period and requiring that abortions be performed in hospitals.

1983: No-fault divorce is available in every U.S. state except New York and South Dakota.

1985: Just under 50 percent of U.S. women ages 18–24 and just over 60 percent of U.S. men in that age group live with their parents, in both cases a substantial increase from 1960, when about 35 percent of women and 52 percent of men in that age group lived with their parents.

1986: The Immigration Reform and Control Act, signed into law by President Reagan, offers legal residency to most illegal immigrants who have lived continuously in the United States since December 31, 1981, or earlier.

1987: Elizabeth Pleck publishes Domestic Tyranny: The Making of Social Policy Against Family Violence From Colonial Times to the Present, arguing that societal concern about domestic violence varies according to attitudes about the family. For instance, there is little interest in criminalizing domestic violence in periods when families are idealized and patriarchal roles are dominant, whereas in periods when more concern is paid to the welfare of women and children, the law has more interest in intervening in cases of domestic violence.

1988–98: The television series Murphy Brown, starring Candice Bergen, airs on CBS; the show achieves particular notoriety in 1992, when Vice President Dan Quayle denounces it for promoting single motherhood when Murphy Brown has a baby.


1990: The divorce rate in the United States is 20.9 per 1,000.

1991: Tennis star Martina Navratilova is sued by her former partner, Judy Nelson, for palimony; the case is settled out of court in 1992.

1993: The Family and Medical Leave Act is the first federal law to require some employers (meeting certain requirements) to allow workers to take up to 12 weeks off, without pay, for reasons including the birth or adoption of a child, recovery from a serious health condition, or caring for a family member with a serious health condition.

1993: The Convention on Protection of Children and Co-operation in Respect of Intercountry Adoption, also known as the Hague Convention on Intercountry Adoption, sets out a number of procedures intended to prevent international trafficking of children and protect the interests of everyone involved in international adoptions.

1994: The California ballot initiative Proposition 187, which would ban illegal immigrants from using state social services such as education or health care, is passed by voters, but a federal court rules that it is unconstitutional.

1994: The Howard M. Metzenbaum Multiethnic Placement Act prohibits agencies receiving federal assistance from discriminating against adoptive and foster parents based on race, national origin, or skin color, a change in policy from the previous norm of trying to match a child with an adoptive family based on those factors.

1995: According to the Centers for Disease Control and Prevention, the total fertility rate for women in the United States is 1.98 children per woman, with some variability by race and ethnicity: for Hispanic women the rate is 2.8, for non-Hispanic whites 1.78, for non-Hispanic blacks 2.19, and for Asians 1.8.

1996: The U.S. Congress passes the Defense of Marriage Act (DOMA), which is signed into law by President Bill Clinton. DOMA prohibits same-sex married couples from receiving marriage benefits and allows states to refuse to recognize same-sex marriages.

1996: Bastard Nation is founded by members of the Usenet newsgroup alt.adoption to militate for the right of adopted children to gain access to their original birth certificates.

1997: The Children’s Health Insurance Program (CHIP) is created, providing federal funds to states to provide insurance coverage to children who are not eligible for Medicaid but whose families cannot afford to purchase private insurance.

1998: A British physician, Andrew Wakefield, presents research purporting to show that autism is linked to childhood vaccines; although the research is later discredited, many parents choose not to have their children vaccinated, a choice that has been implicated in later outbreaks of vaccine-preventable childhood diseases such as measles.

1998: Adult adoptees in Oregon are granted access to their original birth certificates under Ballot Measure 58.

1998: An article in Nature includes the results of DNA tests establishing that Thomas Jefferson, the third U.S. president, fathered at least one child, and possibly six children, with one of his slaves, Sally Hemings.

1999: All U.S. states have amended their legal codes to recognize rape within marriage as a crime.

1999–2001: Life expectancy at birth in the United States is 76.83 years, but gender and race remain associated with different life expectancies. For white males, life expectancy at birth is 74.4 years; for white females, 79.45 years; for African American males, 68.08 years; and for African American females, 75.12 years.

2000: The U.S. census reports that 77 percent of African American families are headed by a married couple, a decline from the 87 percent of families so headed in 1960.

2000: Foreign-born adopted children become American citizens as soon as they enter the United




States under the Child Citizenship Act of 2000, rather than having to go through the naturalization process.

100,000 for American Indians and Alaska Natives, 11 per 100,000 for Asians and Pacific Islanders, and 9.6 per 100,000 for Hispanics.

2000: The U.S. census includes the category of adopted son/daughter for the first time.

2006: Brokeback Mountain, a 2005 film directed by Ang Lee and featuring a love story between two men, wins Academy Awards for Best Director, Best Original Score, and Best Adapted Screenplay.

2000: Vermont begins allowing same-sex couples to enter into civil unions, which offer the same protections and benefits as marriage. 2001: A review of 21 studies, published in the American Sociological Review, finds no evidence of any notable differences between children raised by gay or lesbian parents and children raised by heterosexual parents. 2002: The National Survey of Family Growth includes men for the first time; interviews are conducted with 7,643 females and 4,928 males ages 15 to 44 years, selected to be nationally representative of the United States. 2002: According to the U.S. Census Bureau, 69 percent of American children live with two parents, 23 percent with their mother only, 5 percent with their father only, and 4 percent in households with neither parent present; of those living in households without a parent present, 44 percent live in the household of a grandparent. 2002: According to the National Survey of Family Growth, women ages 15 to 44 years in the United States expect to have an average of 2.3 children over their lifetime; the same number is found in the 2006 to 2010 cycle of surveys. 2003–07: The Centers for Disease Control and Prevention finds wide differences in maternal mortality by geographic region, from a high of 19 maternal deaths per 100,000 live births to a low of 6.2 maternal deaths per 100,000 in New England. 2005–07: The Centers for Disease Control and Prevention reports wide discrepancies in maternal mortality by race and ethnicity in the United States: for non-Hispanic whites, the maternal mortality rate is 10.4 per 100,000 live births, compared to 34 per 100,000 for non-Hispanic blacks, 16.9 per

2006: Infant mortality in the United States is 6.7 per 1,000 live births, a relatively high rate compared to countries of similar economic and social development, such as France (3.8 per 1,000), Norway (3.2 per 1,000), and Japan (2.6 per 1,000).

2006–10: According to the National Survey of Family Growth, 54.4 percent of women ages 15 to 44 years report being married when their first child was born, 21.9 percent were cohabiting, and 23.6 percent were neither married nor cohabiting; by way of comparison, in 2002, 62.3 percent reported being married when their first child was born, 12.4 percent cohabiting, and 25.3 percent neither married nor cohabiting.

2006–10: According to the National Survey of Family Growth, 62 percent of U.S. women ages 15 to 44 report using contraception, with the most popular methods being the birth control pill (17 percent), female sterilization (17 percent), and condoms (10 percent). The pill is the most common method among women ages 20 to 24 (47 percent) and 25 to 29 (33 percent); for older age groups, female sterilization is the most common method, with 30 percent of women ages 30 to 34, 37 percent of women ages 35 to 39, and 51 percent of women ages 40 to 44 reporting female sterilization as their method of contraception.

2007: According to a survey conducted by the Pew Research Center, 70 percent of U.S. adults say that it is more difficult to be a mother today than it was 20 or 30 years ago, and 60 percent say it is more difficult to be a father; 38 percent of respondents say the influence of societal factors (television, peer pressure, drugs and alcohol, etc.) is the biggest challenge parents face in raising children.


Chronology

2008: According to the Centers for Disease Control and Prevention, 41 percent of births in the United States are to unmarried women.

2008: Annette Gordon-Reed wins the National Book Award for nonfiction for The Hemingses of Monticello: An American Family, a history examining the relationship between Thomas Jefferson, third president of the United States, and his slave Sally Hemings, in the context of slavery in Virginia at the time.

2008: According to the Pew Research Center, over half (52 percent) of U.S. adults are married in 2008, with marriage more common among college graduates (64 percent) than among those with a high school diploma or less (48 percent).

2008: According to the Centers for Disease Control and Prevention, about 1 in 88 U.S. children are identified as having an autism spectrum disorder as of 2008, up from 1 in 150 in 2000.

2008: California voters pass Proposition 8, following a campaign funded largely by out-of-state sources, banning same-sex marriage; it is ruled unconstitutional in 2010 in U.S. District Court, a decision left standing by the U.S. Supreme Court in 2013.

2009: According to the U.S. Census Bureau, about 2.4 million men in the United States are custodial fathers (raising their children while the mother lives elsewhere), and about 11.2 million women are custodial mothers.

2009: The Children’s Health Insurance Program Reauthorization Act reauthorizes and provides additional funds for CHIP (Children’s Health Insurance Program), which provides federal funds to states to subsidize insurance for children ineligible for Medicaid but whose families are too poor to purchase private insurance.

2009: According to the Centers for Disease Control and Prevention, the U.S. marriage rate is 6.8 per 1,000 population, and the divorce rate 3.6 per 1,000.

2010: According to a survey conducted by the Pew Research Center, 39 percent of respondents believe marriage is becoming obsolete, versus 28 percent who gave that response in a Time magazine poll of registered voters in 1978.

2010: The ABC television program Modern Family wins the Primetime Emmy Award for Outstanding Comedy Series, a victory it repeats in 2011 and 2012; the show’s story lines involve several nontraditional families, including one consisting of two gay men and an adopted child, and another headed by a single mother.

2010: New York becomes the final U.S. state to allow no-fault divorce, so that divorce can be granted on grounds such as irreconcilable differences rather than needing to establish that one party is at fault (e.g., for committing adultery).

2010: The Affordable Care Act extends CHIP funding through 2015 and maintains CHIP standards through 2019.

2010: According to the U.S. Census Bureau, about 5.4 million married couples in the United States are interracial or interethnic; among these couples, the most common combinations are a white non-Hispanic married to a Hispanic, and a white non-Hispanic married to an Asian non-Hispanic.

2010: According to the National Center for Health Statistics, the average age of first-time U.S. mothers is 25.4 years; by comparison, in 1980 the average age of first-time U.S. mothers was 22.7.

2010: According to the March 2010 Current Population Survey, conducted by the U.S. Census Bureau, 46 percent of adult unauthorized immigrants to the United States are parents of minor children, compared to 29 percent of U.S. natives and 38 percent of authorized immigrants.

2010: A study published in the scholarly journal Demography concludes that children raised by same-sex couples have the same level of educational achievement as those raised by married opposite-sex couples.

2011: According to the Pew Research Center, U.S. mothers with at least one child under age 18 spend an average of 14 hours per week in child care, and fathers spend an average of 7 hours per week; by way of comparison, in 1965 mothers spent an average of 10 hours per week, and fathers 2.5 hours per week.

2011: According to the Centers for Disease Control and Prevention, 11.7 percent of children born in the United States are preterm and 8.1 percent have low birth weight (below 2,500 grams).

2011: According to the Pew Research Center, almost two-thirds of new mothers have at least some college education, while 34 percent have just a high school diploma or less; in 1960, just 18 percent of new mothers had at least some college education, while 82 percent had a high school education or less.

2011: According to the Centers for Disease Control and Prevention, the twin birthrate in the United States is 33.2 per 1,000 live births, and the triplet or higher-order birthrate is 137 per 100,000 live births.

2011: According to a Pew Research Center report, fathers in the United States spend an average of 7.3 hours per week with their children, a substantial increase from the 2.5 hours per week reported in 1965.

2011: According to the Centers for Disease Control and Prevention, the teen birthrate (live births per 1,000 females ages 15 to 19 years) declined 25 percent between 2007 and 2011.

2011: According to the National Vital Statistics System, the general fertility rate in the United States is the lowest yet reported, at 63.2 births per 1,000 women ages 15 to 44; this represents a 1 percent decline from 2010.


2012: The “Deferred Action for Childhood Arrivals” memorandum states that prosecutorial discretion should be exercised toward illegal immigrants who were brought to the United States as children, and allows individuals who meet certain requirements to apply for deferred action on their immigration status. These requirements include having come to the United States before age 16, having continually resided in the United States for at least five years, and being in school, having graduated from high school, or having been discharged from the military.

2012: According to the Centers for Disease Control and Prevention, 85 percent of children ages 19 to 35 months in the United States are immunized for diphtheria, tetanus, and pertussis (DPT), 94 percent for polio, 92 percent for measles, and 91 percent for chicken pox (varicella).

2013: According to a survey by the Pew Research Center, 8 percent of households with minor children are headed by a single father, a substantial increase from 1960, when just over 1 percent of households with minor children were headed by a single father.

2013: Thirteen U.S. states recognize marriage equality for same-sex couples—California, Connecticut, Delaware, Iowa, Maine, Maryland, Massachusetts, Minnesota, New Hampshire, New York, Rhode Island, Vermont, and Washington—as does Washington, D.C.

2013: According to a study of Michigan birth certificates from 1993 to 2006, conducted by Douglas Almond and Maya Rossin-Slater, unmarried men are slightly (4 percent) more likely to acknowledge paternity following the birth of a boy than the birth of a girl.

2013: According to a report released in February by Gary Gates of the Williams Institute at the University of California, Los Angeles (UCLA), based on the American Community Surveys from 2005 through 2011, there are almost 650,000 same-sex couples in the United States, with about 20 percent of these couples raising children.

2013: On June 26, the U.S. Supreme Court rules that Section 3 of the 1996 Defense of Marriage Act is unconstitutional; this ruling allows same-sex married couples whose marriage is recognized by the state in which they live to receive federal marriage benefits (e.g., insurance, pensions, protection from the federal estate tax).

2013: According to a study released in June by the Pew Research Center, 23 percent of gay or bisexual men in the United States are fathers, and 48 percent of lesbian or bisexual women are mothers.

2013: According to a report released in July by the Center for American Progress, 16.6 million people in the United States are in “mixed-status” immigrant families that include both documented and undocumented family members.

2013: According to data from the National Survey of Family Growth (conducted from 2006 to 2010), reported on August 14, 6 percent of married women ages 15 to 44 in the United States are infertile, a drop from the 8.5 percent infertility rate reported for this age group in 1984.

2013: According to a report released on September 27 by the Centers for Disease Control and Prevention, about half (50.5 percent) of U.S. women who were or would be pregnant during the 2012 to 2013 flu season had received a flu vaccine before or during the pregnancy; rates were higher (70.5 percent) among women whose physician both recommended and offered the flu vaccine to them.

2014: In January, Professor Stephen Jenkins of the Institute for Social and Economic Research releases a report, Marital Splits and Income Changes Over the Longer Term, revealing that in Great Britain, men’s income increases by about one-third following divorce, while women’s income falls by more than 20 percent.

2014: On June 4, Emily DeFranco and colleagues publish research in the International Journal of Obstetrics and Gynaecology, based on data from the Ohio Department of Health, indicating that women should ideally wait at least 18 months between giving birth and becoming pregnant again, with shorter birth intervals associated with higher rates of prematurity.

2014: As of June 15, over 52,000 unaccompanied child migrants have arrived at the U.S.–Mexican border since October 2013, almost twice as many as in the same period a year earlier. Many of these child migrants are from Central American countries, such as Honduras and El Salvador, and have been transported through Mexico by human smugglers.

2014: According to a report released by the Pew Research Center on July 17, 18.1 percent of Americans (57 million people) were living in multigenerational households in 2012, up from 17.8 percent in 2010 and 12.1 percent in 1980.

2014: As of July 25, 19 U.S. states plus Washington, D.C., have adopted full marriage equality for same-sex couples, with three more states allowing domestic partnerships or civil unions for same-sex couples.

Sarah E. Boslaugh
Kennesaw State University

A

AARP
Ethel Percy Andrus founded the American Association of Retired Persons (AARP) in 1958 as an offshoot of the National Retired Teachers Association (NRTA). Andrus, a retired high school principal, established the NRTA in 1947 to aid retired teachers in their quest for affordable health insurance, because at the time health insurance was virtually unavailable to older individuals. Although the NRTA was effective, Andrus received thousands of inquiries over the years from older citizens who wanted to know how they could obtain insurance and other benefits, even though they had retired from professions other than teaching. In 1982, the NRTA and AARP merged, and the age of membership was lowered from 55 to 50. The NRTA is still a division of AARP. In 1963, Andrus established the Association of Retired Persons International (ARPI), with offices in Lausanne, Switzerland, and Washington, D.C. The ARPI disbanded in 1969, but AARP continues to address the worldwide aging population through education and service to that segment of society.

Champions of Aging
Since its inception, AARP has recognized 10 champions of aging for their contributions to society as a whole and for their service to the population over 50. As AARP’s founder, Andrus is credited

with recognizing the need for the NRTA and eventually AARP, organizations that have benefitted all of society. Edwin E. Witte, commonly referred to as the father of Social Security, chaired the committee that drafted the Social Security Act of 1935 and ensured its speedy passage through Congress due to his testimony. Robert M. Ball is credited with safeguarding Social Security throughout his years of service with the Social Security Administration. He also instituted cost-of-living increases and staved off drastic cutbacks in 1983, which ensured Social Security’s solvency for decades. U.S. Senator Patrick V. McNamara is recognized for making older Americans a priority in Congress and for his role in establishing the Senate Special Committee on Aging. President Lyndon B. Johnson was largely responsible for the passage of the Older Americans Act of 1965, which established the Administration on Aging as part of his Great Society reforms. The goal of the Administration on Aging was to focus on medical care, nutrition, transportation and legal assistance for senior citizens. President Johnson’s original proposal focused on hospital care; however, a competing Republican proposal in Congress included the coverage of physicians’ fees, a feature that most Americans supported. As a result, President Johnson and Congressman Wilbur Mills, the Democratic chair of the powerful House Ways and 1

Means Committee, included the fees and the idea that health care should be made available for the poor and disabled in society, which resulted in the Medicaid program. In 1987, champion of aging Congressman Claude Pepper, commonly referred to as “Mr. Senior Citizen,” learned of a secret meeting of select lawmakers to postpone the annual cost-of-living increase and other money-saving measures affecting senior citizens. He threatened to force a floor vote so that older Americans would know which lawmakers were cutting their benefits. As a result, the meeting and proposed legislative change were cancelled. Robert N. Butler was recognized as the individual who gave aging a good name. Raised by his grandparents, he was one of the first citizens to attempt to change society’s perception of seniors from “geezers” or “codgers” to vital members of the community who have much to offer. He recognized society’s dismissive attitude toward the elderly as a psychosocial disease that harmed everyone, not just older citizens. As a medical and psychiatric researcher in the 1960s, he was instrumental in establishing the field of gerontology. After Congress established the National Institute on Aging in 1974, Butler became its first director. Elma Holder was a champion of nursing home residents, helping them gain a greater voice in their care and well-being. She worked for activist Ralph Nader’s Retired Professional Action Group and was the coordinator of the Long-Term Care Task Force. In 1975, she organized a group of reform advocates that took their demands for change directly to the nursing homes; she also coauthored Nursing Homes: A Citizens’ Action Guide. Warren Blaney championed the fitness and health aspect of aging by founding Senior Sports International, a nonprofit organization, in 1969.
Blaney promoted his idea for the Senior Olympics movement with slogans such as “Creating the New Adult Image.” By 1979, over 750 athletes over the age of 65 participated in the Senior Olympics. Maggie Kuhn challenged the injustices of the 1960s, which for senior citizens also included ageism in employment. Already in her 60s and weighing less than 95 pounds, Kuhn led 1,000 protesters around the White House, demanding that they be included in a conference on aging. Mounted police confronted the protesters and Kuhn was knocked to the ground. She got back up and continued the protest. Kuhn founded the Gray Panthers, and thanks to her leadership and a membership roll of 100,000 members throughout the United States and five countries, she lobbied against a mandatory retirement age, and won. Congress raised the mandatory retirement age to 70 in 1978 and eliminated it completely in 1986. Additionally, she formed a coalition called the Consultation of Older and Younger Adults for Social Change, as well as the Shared Housing Resources Center, which promoted group housing integrated by age.

Members of AARP are able to access networks that help them stay more socially active. The organization has volunteers offering advice on long-term care, banking, consumer advocacy, legal matters, and medical care.

Programs and Benefits of AARP Membership
AARP asserts that it is the best way to make seniors’ voices heard. Frequent Webinars are held for members of AARP, with experts speaking on topics of importance to members. Those topics include Medicare, Social Security, long-term care and caregiving, and the integration of senior citizens with extended families and communities. AARP’s areas of focus include the following:




• Housing and mobility
• How to avoid the risk of the Medicare Part D coverage gap
• Health care law as it affects seniors and other family members
• Driver resources and safety

The reality today is that few adult children are emotionally or financially prepared to care for an older member of their family. Through the benefits of AARP membership, the senior family member is able to prepare for gradual age-related changes or to make sure that he or she can survive the crisis of a heart attack or stroke. Questions such as which adult child is in charge of the elder’s finances, health care decisions, cooking and cleaning, the eventual disposition of property, and the funeral preparations can be answered with the assistance of AARP resources. AARP members may visit the organization’s Web site to learn how to make their home a safer environment. They may learn of the various short-term and long-term facilities in their area and explore the advantages of each. Members can also take advantage of AARP’s Advocacy program, which allows them to designate the responsible adult of their own choice to act as durable power of attorney on their behalf. Members may also be guided to legal options in their community. Taking care of these matters while a senior is still healthy will prevent misunderstandings with and between adult children and allow the senior to clarify his or her wishes. In addition, AARP membership allows for social networking that helps seniors stay socially active in the years when they can find themselves home alone more often than not. The American Journal of Public Health reports that older individuals left on their own often find it difficult to manage simple physical and mental chores, and are unable to carry out normal daily tasks such as eating and bathing. Through the many programs AARP provides, older adults gain tools for caring for themselves in a healthier, safer, and more financially responsible manner over a longer life, which in turn relieves a burden on the extended family.

AARP in the Twenty-First Century
AARP is operated by a 22-member volunteer Board of Directors that approves all policies, programs,

3

activities, and services for the association’s 37 million members. These volunteers not only possess a wide range of backgrounds, from long-term care consultants and nursing and social welfare professors to physicians, bank regulators, judges, and consumer advocates, but they have also sat on various Senate and congressional committees.

Christopher J. Kline
Westmoreland County Community College
Margaret J. Miller
Independent Scholar

See Also: Assisted Living; Caregiver Burden; Caring for the Elderly; Death and Dying; Demographic Changes: Aging of America; Elder Abuse; Estate Planning; Estate Taxes; Extended Families; Funerals; Grandparenting; Inheritance Tax/Death Tax; Medicaid; Medicare; Nursing Homes.

Further Readings
Gleckman, Howard, Mike McNamee, and David Henry. “By Raising Its Voice, AARP Raises Questions.” BusinessWeek (March 13, 2005). http://www.businessweek.com/stories/2005-03-13/by-raising-its-voice-aarp-raises-questions (Accessed November 2013).
Goyer, Amy. AARP Guide to Caregiving. Washington, DC: AARP, 2012.
Jayson, Sharon. “AARP to Coach Aging Boomers Reimagining Their Lives.” USA Today (May 28, 2013).
Krugman, Paul. “Demographics and Destiny.” New York Times Book Review (October 20, 1996).

Abortion
In May 2013, the U.S. House of Representatives passed a bill prohibiting women from terminating a pregnancy at or beyond 20 weeks, on the premise that a fetus at this stage can experience pain. This bill was only one piece of a vast and ever-changing body of legislation related to the issue of abortion. The level of controversy surrounding the abortion debate in the 20th and 21st centuries is perhaps unsurpassed by any other social issue of the day. However, a closer look reveals that throughout much of U.S. history, abortion was legal and routinely tolerated.

Prior to the 19th century, the practice of abortion in the United States was largely governed by English common law, which allowed abortion up until the moment of quickening, the time at which a pregnant woman could first detect fetal movement. While the onset of quickening varies, it generally occurs around the midpoint of pregnancy, often during the middle of the second trimester. Abortions performed between conception and quickening were accepted by doctors, the general public, and some major religious organizations. Quickening, though not medically verifiable, was also accepted by the legal and medical communities as the key defining moment in the moral acceptability of abortion.

Abortion in the 19th Century
Terminations of pregnancy were commonly practiced prior to this time, although references to abortion, whether by physicians or patients, were often couched in vague or ambiguous terms. During the 19th century, missing a period was often referred to as “having a cold.” Many women sought a doctor’s advice for the ailment known as “obstructed menses,” treatments for which included consumption of potent herbs or poisons, vigorous physical activity, and/or bloodletting, and which typically resulted in termination of a pregnancy. Doctors and midwives argued that bloodletting might stimulate a woman’s menstrual flow. Furthermore, physical jolts to the body were thought to induce miscarriage. For this reason, it was not uncommon for doctors to extract a tooth as part of the abortion procedure. Women’s reasons for seeking abortion in the 18th and 19th centuries were, in many instances, similar to the reasons cited today. For instance, many women sought abortion because of the social stigma of an unplanned pregnancy or a pregnancy occurring outside of marriage. Many of the earliest court cases involved women who became pregnant before marriage and wished to avoid the shame associated with an illegitimate pregnancy.
During the mid-1800s, however, a demonstrable shift occurred whereby a growing number of married women—many of them white and Protestant—sought abortions. Publications alluding to abortion techniques were fairly common during this period. In 1810, The Female Medical Repository, a guide to women
and children’s health by Joseph Brevitt, was published. Some physicians prepared written materials, such as Thomas Ewell’s Letters to Ladies, published in 1817. It is believed, however, that many women performed self-induced abortions in the 18th and 19th centuries, either by ingesting chemicals believed to act as abortifacients or by using physical devices such as knitting needles, hairpins, or scissors. Reportedly, women also relied on a number of plant extracts to abort, including savin, ergot, seneca snakeroot, and cottonroot. The first legislation aimed at abortion came about during the 1820s. Technically, these laws did not address the practice of abortion per se, but rather the use of abortifacient substances; they were classified as poison control laws and were not explicitly designed to regulate abortion. In the 1840s, shortly after such laws were passed, abortion rates surged. Attention to the issue became more pervasive, and abortion became a profitable industry. Geographically, abortion spread from the eastern seaboard toward the Midwest and the Southeast. Public schools began to educate students about sexual anatomy and physiology during the mid-19th century, and exposing young women to such information may have prompted the rise in abortions at this time. Ann Trow, better known as “Madame Restell,” became the most prominent abortion provider in the United States, advertising her services in major newspapers and distributing her “female monthly pills” in a number of major cities. Such advertisements remained ubiquitous until the passage of anti-obscenity laws in the 1870s that targeted materials related to abortion and contraception. The Comstock laws, as these anti-obscenity laws were known, were named for anti-abortion and anti-obscenity crusader Anthony Comstock.
The Rise of the Anti-Abortionists
During the 1850s, a movement against abortion gained ground. Initially prompted by the newly formed American Medical Association, the movement appears to have been motivated less by moral or medical concern with the procedure than by the need to monopolize it. Trained physicians sought control over this growing and highly profitable market. Increasingly, a cultural conflict developed between doctors and untrained medical personnel, mostly homeopathic laypersons and midwives. By influencing legislation that restricted abortion to physicians only, doctors and medical schools tightened their control over the practice and began to establish a unique legitimacy for their profession. Despite the public role the medical establishment played in eventually criminalizing abortion, a segment of doctors remained willing to provide the service on a private basis.

Pro-life and pro-choice supporters (including members of the Religious Coalition for Reproductive Choice) marched in St. Paul, Minnesota, to oppose or support Planned Parenthood in 2012.

In addition to the ascendance of the medical establishment, shifting demographics were another critical reason for the advent of restrictive abortion legislation. It appears that the largest group seeking abortions around this time was middle-class married white women. This led to fear among some people of an immigrant “takeover,” especially by eastern European immigrants, African Americans,
and Catholics, because of falling birth rates among white women. Compulsory motherhood among Protestant white women became popular rhetoric. The movement to criminalize abortion also had roots in antifeminism. Women seeking abortions were ridiculed for their selfishness and their unwillingness to embrace the idea that their biology was their destiny. Abortion was seen as a consequence of women prioritizing their selfish desires over natural law and the mandate of motherhood. Women were discouraged from pursuing power and prominence; this dictum was especially true for women wanting to attend medical school. Horatio Storer, one of the earliest advocates of anti-abortion campaigns, attacked the notion of quickening, arguing that it was only a fleeting sensation felt by a few, and therefore unworthy to serve as the basis for abortion legislation. Ironically, one of the earliest pro-feminist works, The Unwelcome Child, was published in the 1850s by abolitionist and pacifist Henry Clarke Wright. In it, he stated that the abortion trend was the result of men’s callousness and unwillingness to accept another child into the home. Early feminists, such as Elizabeth Cady Stanton, argued that while abortion was sometimes a necessity, it was also degrading and destructive to women’s well-being. Nevertheless, by the mid-1800s, legislation banning abortion except to save a woman’s life was passed; New York, Michigan, and Vermont passed their laws between 1845 and 1846, making abortion punishable by up to a year in jail or a $500 fine.

The Medicalization of Abortion
The late 19th and early 20th centuries were characterized by a rather hostile climate toward abortion and a fierce campaign to reinforce its criminalization. The American Medical Association focused its attention on re-educating women about the potential for life that occurs at conception and the myth of quickening as a valid indicator of fetal viability.
Some physicians adopted a moral stance, arguing that they had a Christian obligation to protect infants and safeguard motherhood. A rather strong campaign was enacted to persuade the American public to equate abortion with mutilation and death. Realizing that the movement to end abortion was futile as long as midwives continued to perform the procedure informally, the Chicago Health Department adopted “twelve rules” for midwives in 1896. These rules greatly limited the power of midwives to practice medicine, especially with regard to termination of pregnancy. A number of imposing and restrictive ordinances followed. Essentially, midwives were viewed with scorn and suspicion by the medical community, which was quickly gaining stature. An early effort to legalize abortion in the United States developed during the 1930s. The effort, albeit small when compared to that of Europe, provided a basis for the larger wave of resistance to restrictive abortion legislation that developed in the 1960s. The desperate living conditions of the Great Depression challenged the anti-abortionists’ notion of the benefits of cultivating a large family. Extreme poverty, starvation, and tuberculosis factored into the reasons why some women sought abortions, and a number of doctors began to perform abortions for “therapeutic reasons.” Overall, however, most abortions performed in the early to mid-20th century, whether on lower- or middle-income women, remained illegal. Contrary to conventional wisdom, most abortion providers of the day, even those practicing illegally, were technically competent; most incidents of abortions leading to maiming and death were attributed to self-induced abortions.

Backlash Against Women
Raids during the 1940s served to curtail even the practice of therapeutic abortions. Such raids targeted both doctors and their patients, and appeared to be a reaction against the growing independence of women during World War II. In the late 1950s and 1960s, mainstream society experienced a renewed interest in traditional femininity, marriage, and motherhood. As a result, verbal and physical attacks on abortion clinics grew increasingly aggressive, and the names of women seeking abortions were sometimes made public in an effort to shame them. The issue led to a public discussion about the ideal of traditional femininity.
Both abortionists (who were often women) and patients were cast in a negative light. The inquisition into women’s lives with regard to abortion was characteristic of McCarthyism and the era’s concern with ideas perceived to be anti-American. Fears of being found out, coupled with prohibitive costs, prompted an increase in self-induced abortions. Many of these methods proved dangerous or even fatal. Women were known to douche with bleach, drink lye, or pound on their stomachs with hammers or other objects.

The result of the criminalization of abortion became apparent. In the 1930s, Cook County Hospital in Chicago treated approximately 1,000 women for abortion-related complications. By the 1960s, the number had increased to over 5,000. In the 1920s, abortion-related deaths accounted for about 14 percent of maternal mortalities. By the 1960s, they accounted for over 40 percent. Furthermore, as a punitive measure, doctors performing legal, therapeutic abortions were known to forcibly sterilize women at the same time, often without their knowledge.

The 1960s provided a cultural climate conducive to social change. The civil rights and antiwar movements facilitated a strong momentum of social resistance. In reality, a small movement questioning existing abortion policy had begun in the 1950s. This initial effort started with psychiatrists who counseled many distressed women who either had had, or were considering, an abortion. In 1955, a conference to discuss the dangers of illegal abortion was arranged by Planned Parenthood, attended primarily by sympathetic doctors and social activists who advocated liberalizing the laws regarding therapeutic abortions. In the early 1960s, the Society for Humane Abortion was founded; as an early feminist organization, the group centered its discussion more on the question of women’s rights to abortion and less on physicians’ fears of legal reprisal. Women in various cities around the country began to organize around the issue. In 1966, the National Organization for Women was formed, and the idea of women’s rights gained serious traction. In the early 1970s, even before Roe v. Wade, opinion polls revealed that a majority of Americans favored the legalization of abortion. The 1971 decision in Abele v. Markle upheld the plaintiffs’ assertion that Connecticut’s abortion law was unnecessarily punitive.
A women’s rights organization based in New Haven brought the case against the state of Connecticut, and the outcome demonstrated that the state’s abortion law was, in fact, unconstitutional. This victory paved the way for states to relax their abortion laws.

Roe v. Wade
From the perspective of major abortion rights organizations, nationwide legal action was critical to the cause. Pro-choice attorneys Linda Coffee and Sarah Weddington, together with the National Abortion Rights Action League (NARAL), identified a suitable plaintiff who came to be known as Jane Roe. In January 1973, the U.S. Supreme Court issued the Roe v. Wade decision, which legalized abortion in the United States based on the right to privacy.

The years since Roe v. Wade have given rise to a significant amount of abortion-related legislation, most of it restrictive. Realizing that a complete overturn of the right to abortion is unlikely, anti-abortion groups have relied on a strategy of promoting and passing legislation to limit the reach of the Roe decision. In the early 1980s, Senator Jesse Helms proposed a bill declaring that life begins at the moment of conception. Various conservative organizations have mobilized to educate the public regarding the perceived dangers, both physical and psychological, of abortion and the need for radical change in laws. The National Right to Life Committee, Americans United for Life, and Operation Rescue are just a few of the more active pro-life groups. Fundamentalist Christians and Roman Catholics have found themselves aligned in the effort to stop abortion.

A major setback for the pro-choice movement, and for women’s rights in general, has been the passage of laws prohibiting the use of public funds for abortions for low-income women. In addition, laws now require a mandatory waiting period before a patient receives an abortion, parental notification for minors, and comprehensive fetal development information. A major focus of the anti-abortion movement has been to emphasize its belief that life begins at the moment of conception. Christian groups are at the forefront of this sanctity-of-life argument. In many ways, the abortion debate in the United States has come full circle.
The 2013 bill before Congress, while not explicitly alluding to quickening, identifies the midpoint of gestation as the time after which abortions are no longer permissible.

Susan Cody-Rydzewski
Georgia Perimeter College

See Also: Birth Control Pills; Contraception: IUDs; Contraception: Morning After Pill; Contraception and the Sexual Revolution; Family Planning; Fertility; Primary Document 1917; Roe v. Wade.


Further Readings
Mohr, James C. Abortion in America: The Origins and Evolution of National Policy, 1800–1900. New York: Oxford University Press, 1978.
Olasky, Marvin. Abortion Rites: A Social History of Abortion in America. Wheaton, IL: Crossway, 1992.
Reagan, Leslie J. When Abortion Was a Crime: Women, Medicine, and Law in the United States, 1867–1973. Berkeley: University of California Press, 1997.
Rovner, Julie. “House Passes Bill That Would Ban Abortions After Twenty Weeks.” National Public Radio (June 18, 2013). http://www.npr.org/blogs/health/2013/06/18/193197164/house-passes-bill-that-would-ban-late-abortions (Accessed March 2014).
Solinger, Rickie. Abortion Wars: A Half Century of Struggle, 1950–2000. Berkeley: University of California Press, 1998.

Acculturation

Acculturation is a concept that originated in the discipline of anthropology, and it is only one form of culture change resulting from contact with other cultures. The first person credited with its use in the English language is Powell, in the late 1800s; however, the roots of the topic date back to antiquity. In psychology, G. Stanley Hall is designated as the first psychologist to discuss it, in 1904, with increasing interest from psychologists over time. An early definition of acculturation by Redfield, Linton, and Herskovits (1936) holds that “acculturation comprehends those phenomena which result when groups of individuals having different cultures come into continuous first-hand contact, with subsequent changes in the original culture patterns of either or both groups.” This early definition centered on the group level of the phenomenon, but there was also recognition of the impact at the individual level. Because of psychology’s interest in the individual, the term “psychological acculturation” was created to distinguish individual-level changes from group (culture) level changes resulting from acculturation. A more recent definition by John Berry and David Sam states that acculturation is the process of cultural and psychological change that results when two cultures meet. Cultural change consists of changes in collective activities and social institutions, whereas changes


at the psychological level consist of changes in an individual’s beliefs, values, behaviors, and customs.

Mobility and Voluntariness
John Berry identified five acculturating groups based on two factors: mobility and voluntariness. Mobility refers to whether contact came about because a group migrated, or because the group was sedentary and another group descended upon it. Voluntariness of contact refers to whether the contact is voluntary or involuntary. Based on these factors, different acculturating groups are formed. Those who voluntarily migrated are considered immigrants if they intend to remain permanently in their new location, or sojourners if their stay is temporary, such as international students or persons placed overseas for a short period of time (e.g., military personnel or business executives). Refugees are individuals who are forced to leave their country of origin and migrate to another location to live. Among sedentary groups, indigenous persons are those who remained in their original location and had involuntary contact with persons from another culture, such as American Indians and Native Alaskans, whereas ethnic groups (those who are second generation and beyond) are sedentary in terms of location and voluntarily choose to remain living in the host culture. Research has found that the two risk factors for poorer psychological well-being are forced or involuntary contact and/or a temporary situation.

Acculturation Attitudes and Strategies
John Berry proposed a bilinear model of acculturation. His model examines two fundamental dimensions: (1) the maintenance of one’s culture of origin/ancestral heritage and (2) the acquisition or acceptance of the new host culture. The outcome is four acculturation strategies: separation, assimilation, marginalization, and integration.
The separation strategy is exhibited when an individual retains the beliefs and values of his or her culture of origin and avoids or withdraws from the host culture. Assimilation means relinquishing or rejecting one’s culture of origin and accepting and merging with the host culture. Marginalization involves someone who neither maintains his or her cultural identity nor participates in the host culture; there is a lack of belonging to either the culture of origin or

the host culture. The integration strategy is demonstrated by valuing and accepting both one’s cultural identity and the host culture, thereby achieving a bicultural orientation. Research relating acculturation strategies to adaptation has found that those who are integrated tend to be better adapted, whereas those who are marginalized are the least well adapted; those who are separated or assimilated show intermediate adaptation outcomes. In short, acculturating individuals and groups do better and show better adaptation when they are attached both to their culture of origin and to the host culture.

Acculturation in the Family
Acculturation has also been examined at the family level. Evelyn Lee and Matthew Mock categorized five acculturation types among Asian families, ranging from very traditional to more acculturated. The spectrum runs from traditional families, to culture conflict families, to bicultural families, to highly acculturated families, and finally to new millennium families. Traditional families are those whose members were born and raised in an Asian country and have limited contact with U.S. culture. Generally, these families include those who recently immigrated to the United States and have very limited exposure to U.S. society; families who live in ethnic communities (e.g., Chinatown or Little Italy); older adult immigrants or refugees; and those from agricultural backgrounds. These families tend to retain their culture of origin’s beliefs and values and to practice traditional customs. Culture conflict families are families in which differences in acculturation cause conflicts between members. Typically, the conflict is between the older generation of adults, who are more traditional, and the younger generation, which often adopts aspects of the host culture. Conflicts can manifest in terms of values, behaviors, gender roles and expectations, dating, religion, philosophy, and politics.
Bicultural families are those whose parents became acculturated through exposure to the host culture prior to immigration and through industrialization. These families tend to be bilingual and bicultural, making adjustment to the host culture much easier because family members are familiar with both Eastern and Western cultures.




Highly acculturated families are those in which both the parents and children were born and raised in the host culture. These families have adopted the host culture’s belief system and values, and tend to speak the host culture’s language. Interracial or multiracial families constitute the new millennium families. Within these families, negotiation among the various cultures may be necessary; if negotiation is not successful, conflicts over values, communication, childrearing, and in-laws may ensue.

In terms of acculturation, individuals may be at different stages or may use different strategies in different contexts. For instance, an individual may be more assimilated at work, more separated at home, and more integrated in social settings. Thus, acculturation can be fluid and complex. Family members may also be undergoing different acculturation processes, and conflicts may erupt within the family as children or young adults acculturate faster than parents and grandparents.

Stress and Acculturation
Stress can influence the acculturation process; the stress associated with the acculturation process is referred to as acculturative stress. Acculturative stress is “a response by individuals to life events (that are rooted in intercultural contact), when they exceed the capacity of individuals to deal with them,” according to J. W. Berry and colleagues. Factors found to moderate the relationship between acculturation and stress include the nature of the larger society (e.g., welcoming, hostile, or indifferent); the type of acculturating group; the mode of acculturation; the demographic and social characteristics of the individual and family (e.g., age, educational level, employment, and language proficiency); and the psychological characteristics of the individual. Having characteristics and values similar to those of U.S. culture in general, such as individualism, independence, and an achievement orientation, will likely lead to an easier and more effective adaptation to life in the United States.

Debra M. Kawahara
Alliant International University

See Also: Assimilation; Chinese Immigrant Families; DREAM Act; Extended Families; German Immigrant


Families; Immigrant Children; Immigrant Families; Indian (Asian) Immigrant Families; Irish Immigrant Families; Italian Immigrant Families; Japanese Immigrant Families; Korean Immigrant Families; Latino Families; Melting Pot Metaphor; Mexican Immigrant Families; Middle East Immigrant Families; Migrant Families; Migration; Multi-Generational Households; Polish Immigrant Families; Tossed Salad Metaphor; Vietnamese Immigrant Families.

Further Readings
Berry, J. W. “Acculturation as a Variety of Adaptation.” In Acculturation: Theory, Models and Some New Findings, A. Padilla, ed. Boulder, CO: Westview, 1980.
Berry, J. W., Y. H. Poortinga, M. H. Segall, and P. R. Dasen. Cross-Cultural Psychology: Research and Applications, 2nd ed. New York: Cambridge University Press, 2002.
Berry, J. W., U. Kim, T. Minde, and D. Mok. “Comparative Studies of Acculturative Stress.” International Migration Review, v.21 (1987).
Berry, J. W. and D. Sam. “Acculturation and Adaptation.” In Handbook of Cross-Cultural Psychology: Vol. 3, Social Behavior and Applications, J. W. Berry, M. H. Segall, and C. Kagitcibasi, eds. Boston: Allyn & Bacon.
Lee, E. and M. R. Mock. “Asian Families: An Overview.” In Ethnicity & Family Therapy, M. McGoldrick, J. Giordano, and N. Garcia-Preto, eds. New York: Guilford Press, 2005.
Ward, C. “Acculturation.” In Handbook of Intercultural Training, 2nd ed., D. Landis and R. Bhagat, eds. Thousand Oaks, CA: Sage, 1996.

ADC/AFDC

Between 1935 and 1996, the Aid to Dependent Children (ADC) and Aid to Families with Dependent Children (AFDC) programs provided a safety net for poor families. This categorical aid maintained the well-being of children, and later entire families, when the family breadwinner was unable to work or had abandoned the mother and her children. The original ADC cash payments were considered public pensions to provide proper caregiving and parental support for the children of deserving single mothers. By the time the program ended, the perception of a typical


recipient reflected socially and demographically marginalized populations within the United States. Over its lifetime, intertwined with social, economic, and political conditions, and growing to disproportionately serve nonwhite families, AFDC helped shape the future of welfare in the United States.

Origins
The Aid to Dependent Children (ADC) program was established as part of the New Deal’s Social Security Act of 1935 and its 1939 amendments. ADC nationalized existing mother’s pension programs, providing joint federal-state funding to aid the children of poor mothers whose male breadwinning partner was no longer employed, had become injured or died while working, or had deserted the family. Within federal regulations, the design of each state’s program, including eligibility tests and payment levels, was conditioned by state-specific circumstances such as labor market needs. States often excluded families with children born out of wedlock or with cohabitating adults, and the receipt of aid was focused on “suitable” or “deserving” homes. Consequently, ADC recipients were usually the white children of middle-class mothers.

As amended in 1939, a new program for survivors provided insurance for deserving widows, making ADC the program for fatherless families. Survivor’s insurance was available to employees of manufacturing and industrial companies, which discriminatorily employed white males; the agriculture and domestic service sectors, dominated by persons of color, were excluded. This change in ADC eligibility meant that the overwhelming majority of children receiving it had abandoned, separated, divorced, or never-married single mothers; prior to 1939, nonwhite families constituted less than 5 percent of recipients, but by the 1950s, they represented nearly one-third of all beneficiaries.

Challenges and Changes
ADC, publicly unpopular, grew more so as families of color sought assistance.
States tailored eligibility determination within federal guidelines, often with exclusions based on race, class, gender, or ability. Able-bodied mothers were often denied assistance when employment was available or when their family home was “unsuitable” by the state’s moral benchmarks. For the life of ADC, families of color, including African American and Hispanic families,

were largely underrepresented on relief rolls but overrepresented in low-wage employment. In 1950, amendments created caretaker benefits for deserving mothers, reinforcing the notion that a “suitable” caregiver working in the home was best for children, mirroring the old-age, survivors, and unemployment insurance programs. Still, the majority of recipients were white, but racialized sentiments painted ADC as a program that encouraged unemployable, immoral nonwhite families to have children; thus, ADC quickly became synonymous with “welfare,” a guaranteed level of cash assistance that many believed subsidized flawed behavior. Accordingly, amendments in 1956 offered states the option to provide “rehabilitation” services to support family maintenance, with incentives toward self-sufficiency. Further amendments in 1961 followed this theme with special assistance for unemployed married men to participate in work training programs. However, the training most poor breadwinners received was for domestic, agricultural, or service employment, the very type of jobs that had led them to seek assistance in the first place. Formally, these amendments appeased the public and lawmakers, but the decreasing popularity of cash welfare also reinforced informal moral and racial barriers.

From the War on Poverty to a War on the Poor
The 1960s and 1970s, underscored by movements for civil rights, a war on poverty, and welfare rights, saw the use of public assistance spike, even as public fears of dependency and immorality sparked a welfare crisis. The Public Welfare Amendments of 1962 renamed ADC as Aid to Families with Dependent Children (AFDC) and formally opened assistance to the entire family. Additionally, the Supreme Court struck down eligibility barriers such as discrimination in application determination and “suitable home” policies.
Therefore, nonwhite families began applying for and receiving assistance at higher rates, reinforcing the perception that welfare enabled “undesirable” populations. As more families were able to access AFDC, the costs of the program quickly grew; when states ran out of money, the federal program stepped in because eligible families could not be denied service. Self-sufficiency was reinforced through AFDC funds for states to design and implement work programs open to both employable men and women.




The 1967 amendments created the Work Incentive Program (WIN), which for the first time required mothers to register for work or education as a condition of aid; this was strengthened in 1971 by compelling mothers with children as young as 6 years old to enroll. Addressing fears of dependency, WIN offered work incentives so that employed mothers, while receiving aid, could deduct childcare expenses and keep a percentage of their wages. These changes appeased critics by linking welfare with work, and they conditioned the design of social welfare for the foreseeable future.

In the 1970s, the economy slowed, AFDC costs climbed, and multiple presidential proposals to expand “welfare” and strengthen work requirements failed to gain congressional approval. By 1980, AFDC costs and federal deficits had grown, and an omnibus budget cut targeted social welfare programs. In 1981, to reduce AFDC rolls and costs, shifts in program eligibility were balanced with incentives to develop work programs. States made once-optional work-related activities mandatory, restricted deductions for work expenses, and, to offset the costs of aid, placed poor mothers in unpaid community service. The 1988 Family Support Act replaced WIN with the Job Opportunities and Basic Skills Training Program (JOBS), requiring mothers with children as young as 3 years old to work or enroll in training; recipients could also access funds for childcare and Medicaid when transitioning from AFDC to employment. Other funds were provided for states to propose innovative projects to further develop methods for reducing relief rolls.

The End of AFDC
From 1935 onward, AFDC slowly shifted its focus from family maintenance for deserving recipients to a categorical but short-term job-readiness and self-sufficiency program. AFDC continually sustained and softened numerous attacks by balancing the needs of poor families against public fears of welfare’s unhealthy dependency.
This balancing act laid the groundwork for present-day welfare-to-work programs through the passage of the Personal Responsibility and Work Opportunity Reconciliation Act in 1996, which ended AFDC, until then the only guaranteed social safety net for millions of poor families.

Michael D. Gillespie
Eastern Illinois University


See Also: Earned Income Tax Credit; Food Stamps; Great Society Social Programs; New Deal; Poverty and Poor Families; Poverty Line; Social Security; TANF; Welfare Reform.

Further Readings
Katz, Michael. In the Shadow of the Poorhouse: A Social History of Welfare in America, 10th ed. New York: Basic Books, 1996.
Kornbluh, Felicia. The Battle for Welfare Rights: Politics and Poverty in Modern America. Philadelphia: University of Pennsylvania Press, 2007.
Mink, Gwendolyn and Rickie Solinger, eds. Welfare: A Documentary History of U.S. Policy and Politics. New York: New York University Press, 2003.
Nadasen, Premilla, Jennifer Mittelstadt, and Marisa Chappell. Welfare in the United States: A History With Documents, 1935–1996. New York: Routledge, 2009.

Addams, Jane

Jane Addams became an internationally recognized social reform leader during the Progressive Era. Her practical approach to solving urban problems influenced the emerging fields of sociology and social work. The tolerance and secular humanism that characterized her reform efforts carried over into her international peace initiatives.

Early Life
Laura Jane Addams was born on September 6, 1860, in Cedarville, Illinois, the daughter of State Senator John H. Addams. Graduating from Rockford Female Seminary in 1881, she was among the first American women to go to college. Her desire for meaningful work conflicted with the “family claim,” her term for the cult of domesticity. She thought that society’s expectation that unmarried daughters be sacrificial family caretakers was an obsolete tradition that stymied women’s self-fulfillment.

Settlement Movement
Addams read about England’s settlement movement and visited Toynbee Hall in 1888, which inspired her to establish a settlement house in the United States, then experiencing a massive influx of immigrants, rapid urbanization, and industrialization. These social upheavals often destabilized immigrant and poor families. She and Ellen Gates Starr found a suitable property in Chicago’s Nineteenth Ward and opened Hull House, the country’s first settlement house, in 1889 to provide cultural opportunities and educational programming in Chicago’s impoverished industrial district. As a settlement house, Hull House immersed its volunteer middle-class residents in poor neighborhoods, enabling them to gain a holistic understanding of poor people’s lives. A sociological survey of Hull House’s neighborhood was one of the earliest environmental studies of an immigrant community, contributing to the settlement’s reputation as a leader in social research. Responsible for Chicago’s first public playground, the settlement was credited with pioneering civic intervention to meet people’s need for open space. Its innovative Labor Museum functioned like an arts and crafts vocational school. Hull House was a refuge for battered wives and women escaping prostitution. Its nursery and kindergarten provided daycare for the children of working mothers. Its birth control clinic gave contraceptive advice as a means of combating the poverty associated with large families.

Because many women of Addams’ generation felt that they had to choose between marriage and career, the friendships that developed among settlement workers often substituted for traditional families. Receiving no encouragement from her family, Addams found emotional support among her peers at Hull House. She characterized her 39-year relationship with Mary Rozet Smith as a marriage.

[Photo caption: American social reformer Jane Addams was the first American woman to be awarded the Nobel Peace Prize, and is recognized as the founder of social work in the United States.]

Child Labor and Labor Unions
Addams opposed child labor as an unsafe practice that disrupted education, condemned children to lifelong poverty, and blighted their personal development. She was a member of the National Child Labor Committee. Hull House’s support for Illinois’ 1893 child labor law constituted the organization’s first lobbying experience. Many poor families, however, objected to such regulations because they needed their children’s earnings to survive. Recognizing living-wage reform as a way for workers to emerge from poverty, Addams supported trade unions. She was vice president of the Women’s Trade Union League’s national board, mediated the 1910 Garment Workers’ Strike in Chicago, and helped raise relief money for striking workers. While some businessmen labeled her a radical, she questioned society’s hypocrisy about the cult of domesticity, juxtaposed against the harsh living conditions of working-class and immigrant women.

Suffragist and Pacifist
Some people opposed women’s suffrage as a threat to the home, but Addams, as a social worker and early feminist, observed that education and public health issues directly affected the quality of families’ lives, and thus were natural areas for women’s civic involvement. Leading by example, she became Chicago’s first female garbage inspector. She served as the first vice president of the National American Woman Suffrage Association, believing that the United States could not achieve true democracy if half its population was disenfranchised.




Before World War I, Addams was one of the most well-known women in the country. Considering war useless for resolving international problems and bad for democracy, she joined the Chicago Peace Society in 1893 and cofounded the American League for the Limitation of Armaments in 1914. She was the first president of the Woman’s Peace Party and of the National Peace Federation. In 1915, she chaired the International Congress of Women at The Hague in the Netherlands, seeking an end to World War I. Following an antiwar speech at Carnegie Hall in New York City, however, public opinion turned against her. Her opposition to the military draft and support for conscientious objectors resulted in surveillance by the Bureau of Investigation, and she was listed as a possible national enemy by the War Department. In 1928, the Daughters of the American Revolution retracted her honorary membership. An ROTC publication denounced her as America’s most dangerous woman. Nonetheless, in 1931, she became the first American woman to receive the Nobel Peace Prize.

Legacy
Addams did not think of herself as a socialist, but she acknowledged that she sought a more radically egalitarian democratic ideal than most people. The Hull House experience taught her that poverty was caused by socioeconomic factors requiring government intervention. A pragmatic visionary in pursuit of social justice, she drew on her empathy for immigrants and the working poor in her efforts on their behalf. She died on May 21, 1935, in Chicago. Financial difficulties closed the Hull House Association in 2012, but the University of Illinois at Chicago’s unaffiliated Jane Addams Hull-House Museum remains open.

Betty J. Glass
University of Nevada, Reno

See Also: Assimilation; Child Labor; Immigrant Families; Industrial Revolution Families; Working-Class Families/Working Poor.

Further Readings
Jane Addams Hull-House Museum. http://www.uic.edu/jaddams/hull/hull_house.html (Accessed June 2013).
Knight, Louise W. Jane Addams: Spirit in Action. New York: Norton, 2010.


Polikoff, Barbara G. With One Bold Act: The Story of Jane Addams. Chicago: Boswell, 1999.

Adler, Alfred

Founder of the school of individual psychology, Alfred Adler was born in Austria on February 7, 1870, and trained as a medical doctor and psychotherapist. Initially a confidant of Sigmund Freud, Adler later broke from his mentor and left the field of psychoanalysis as practiced by Freud and his followers. Adler became a pioneer in the psychology of personality and a prominent expert on child rearing and family issues. In this capacity, he sought to prevent potential problems facing children, rather than merely responding to difficulties after they had occurred. Adler’s work led to a growing interest in promoting social interest and a sense of belonging, marking a shift in parenting styles that reduced both pampering and neglect in many American families.

Background
Born near Vienna, in the town of Rudolfsheim, Adler was the second child of a Hungarian grain merchant. After undergoing a series of childhood illnesses, Adler resolved to become a physician, and to that end he studied at the University of Vienna. Although trained as an ophthalmologist, he maintained a keen interest in psychology, philosophy, and sociology. After marrying Raissa Epstein in 1897, Adler began practicing as an ophthalmologist but soon shifted to a general practice. Opening an office in a low-income neighborhood near a permanent circus, Adler drew patients who included performers, musicians, and others from nontraditional walks of life. During this period, Adler was introduced to Sigmund Freud and joined his discussion group, the Mittwochgesellschaft (Wednesday Society), which regularly met at Freud’s home. The Mittwochgesellschaft introduced Adler to psychotherapy, and his preexisting interest in the subject grew. He was elected president of the Vienna Psychoanalytic Society in 1910, but the following year Adler and a few others broke with Freud; it was the first break with orthodox psychoanalysis. In 1912, Adler founded the Society for Individual Psychology,


which enabled him to share his clinical approach and theoretical works with others.

Adler's work deviated from Freud's in several ways with regard to perceptions of individuals and their interactions with the world. First, Adler asserted that the social realm (exteriority) is as central to an individual's development and adjustment as the internal realm (interiority) that Freud focused upon. Second, Adler believed that factors such as gender and politics were vital to the dynamics of power and compensation, moving beyond Freud's view that libido was the central factor explaining this relationship. Finally, Adler's socialist beliefs were very different from Freud's more traditional worldview, which affected how each of them viewed interactions between the individual, the family, and the world at large.

Influence
Adler's accomplishments after his break with Freud were comprehensive and compelling. The popularity of Adler's personality theory created demand for him as a speaker; a changing society was increasingly interested in his more socially oriented approach to the field. He also became well known for his work on birth order, in which he examined how one's position among siblings influenced one's psychological strengths, weaknesses, and lifestyle. In a three-child family, for example, Adler posited that the oldest child initially receives the full attention of his or her parents but later feels displaced when a younger sibling is born; the increased responsibilities for caring for younger children and the loss of favored status as an only child raise the risk of neuroticism and substance abuse. The youngest child has a greater risk of being overly pampered and spoiled, which reduces his or her social empathy for others. Adler believed that the middle child, spared parents' increased expectations, dethronement, or overindulgence, was likely to occasionally rebel or act out, yet was also most inclined to be successful.
Although Adler did not conduct research to support his theory of birth order, it has continued to be influential and has shaped a good deal of child-rearing advice.

Impact on Parenting
Adler's work led him to be increasingly interested in parental education, and he established a series of child guidance clinics during the 1920s. In his work with parents, he emphasized both treatment for

disorders and prevention that would often eliminate the need for later intervention. Adler emphasized the importance of childhood in the development of one's personality and believed that most forms of psychopathology had their roots in an individual's early upbringing. Adler believed that helping a child to be, and feel like, an equal part of the family was the best way to prevent later problems. The problems that could be inoculated against included personality disorders (which Adler termed "neurotic character") and various neurotic conditions such as anxiety, depression, and other related disorders.

Adler believed that the family needed to take on a more democratic nature so that children would become accustomed to exercising a certain degree of power. Continually vigilant against the pampering and neglect of children, Adler was an early opponent of corporal punishment. He also believed that a democratic home life was possible only if it was supported by teachers, social workers, physicians, and others charged with caring for children. To that end, Adler recommended that anyone who worked with children take part in parental education so that they could help the family structure acquire a more democratic nature.

Adler's Jewish heritage contributed to authorities closing his Austrian clinics during the early 1930s, and he immigrated to the United States to teach at the Long Island College of Medicine in New York. Adler's emphasis on cooperation and reasoned decision making and his theory of parental education were significant contributions to conceptions of the American family. His belief that the desires of the self-ideal were countered by social and ethical demands made many more aware of the influence that society has on the individual, and his work continues to be influential. Adler died on May 28, 1937, while visiting Aberdeen, Scotland.

Stephen T. Schroth
Knox College

See Also: Addams, Jane; Bettelheim, Bruno; Childhood in America; Child-Rearing Practices; Freud, Sigmund; Individualism; Parent Effectiveness Training; Parenting Styles.

Further Readings
Adler, A. Understanding Life. C. Brett, trans. Center City, MN: Hazelden Foundation, 1998.

Carlson, J. and M. P. Maniacci, eds. Alfred Adler Revisited. New York: Routledge, 2012.
Mosak, H. and M. P. Maniacci. A Primer of Adlerian Psychology: The Analytic/Behavioral/Cognitive Psychology of Alfred Adler. New York: Brunner-Routledge, 1999.

Adolescence

The many different definitions of adolescence in use today employ a variety of criteria for describing this developmental period. Chronological age can be employed, for instance, which specifies a focus on teenagers (those 13 to 19 years old). Other definitions are based on physical development and emphasize puberty, growth spurts, and the development of adult sex characteristics. Still other definitions rely on markers of psychological maturity (i.e., group identity and individual identity development) or social contexts, such as being a student in middle or high school. These definitions reflect differences of opinion regarding the duration of adolescence. The focus on chronological age makes the terms adolescent and teenager synonymous. Compare this to a focus on the development of group and individual identity, which in effect divides this developmental period into early adolescence and late adolescence. Another variation is to break down this developmental period into early, middle, and late adolescence, which tends to emphasize the school environment (middle school, high school, and college, respectively).

Adolescence and the Extended Family
Understanding adolescence requires understanding the family context in which adolescents grow and develop. Various works on the "families with adolescents" stage of the family life cycle center on the theme of increased family boundary flexibility that simultaneously focuses on the interacting needs and desires of three generations of family members: adolescents, parents, and grandparents. Within this period of development, parent-adolescent relationships are altered in order to allow the adolescent to move more freely out of and back into the family environment; parents free up time that
creates a renewed focus on the marital relationship and their career interests, and family members begin to take on more caregiving responsibilities for older family members. Some of the most common issues that arise out of this multigenerational theoretical focus are individuality and intimacy, with parents acting as the pivot point for these developmental concerns. For example, adolescents and parents engage in an almost constant renegotiation of issues that underscore the adolescent's autonomy claims at the same time that the parents are beginning to communicate about independent living decisions with their parents and other older family members. At the same time, adolescents are experiencing the awakening of their sexual desires and begin to pursue romantic relationships, whereas parents often are dealing with sexual issues of their own, either inside of a marriage or as dating partners.

[Photo caption: The period of adolescence is most closely associated with the teenage years, though its physical, psychological, and cultural expressions may begin earlier and end later.]

Parenting Styles
The fact that parents are seen as the pivot points for most adolescent development concerns has translated into a great deal of attention being paid to parenting by theorists and researchers. Perhaps most importantly, literature on parenting styles has developed over the years, with particular attention paid to variables associated with parental responsiveness (such as warmth and affection) and parental demandingness (such as rule setting and discipline). In combination, responsiveness and demandingness yield four types of parenting styles—authoritarian, authoritative, permissive, and indifferent—that have been directly associated with variables related to adolescent well-being.

Authoritative parenting represents the combination of high responsiveness and high demandingness that is most often associated with positive adolescent outcomes. Authoritative parents retain relationships with their adolescent sons and daughters that are warm, supportive, affectionate, and nurturing, while they maintain a great deal of structure and control. Authoritarian parents are also high in demandingness, but this is combined with low responsiveness. Here, rules and regulations are kept within a much less warm emotional environment. While the authoritarian style of parenting has been portrayed as suboptimal for white youth, there is some research indicating that authoritarian parenting can lead to positive outcomes for minority youth. Permissive or indulgent parents are low on demandingness but high on responsiveness. Permissive parents have relatively few behavioral expectations that are placed on their adolescent sons and daughters.
Instead, the emphasis is on the creation of a warm and accommodating emotional environment. Indifferent or neglectful parents are low in both demandingness and responsiveness. There is no structure or control, nor is there any sense of emotional closeness or connection. The permissive/indulgent and indifferent/neglectful parenting styles have been associated with poor
outcomes in most studies of adolescent development and well-being. While the vast majority of studies underscore the link between healthy adolescent development and an authoritative style of parenting, newer studies have extended this work by examining potential differences between mothers and fathers in terms of their parenting styles, as well as how consistently parenting styles are displayed. Additionally, the literature on parenting styles with adolescent family members has been expanded to include such variables as psychological control, monitoring, and parental knowledge.

Family Structure
Beyond the parent-adolescent dyad, other efforts have viewed the families of adolescents through a systems lens. The family system comprises many subsystems, including the parent-adolescent dyad. However, a systems-oriented approach to studying families with adolescent members emphasizes the family as a whole. Scholarship on family processes includes concepts such as family differentiation, boundary maintenance, expressed emotion, and triangulation. Family flexibility is another area of research on family processes, including variables such as family adaptability, problem solving, and coping strategies.

Still other research on families with adolescents focuses on family structure, which largely has to do with the marital status of parents and their biological relationships with youth in the household. These studies rather uniformly have portrayed adolescents coming from two-parent households (and especially married biological parents) as having inherent advantages in comparison to adolescents who reside in single-parent households. There is also consensus that any type of disturbance to the family's structure (e.g., separation, divorce, or remarriage) tends to affect youth well-being negatively in proportion to the degree to which family processes are disrupted by the structural change.
More recently, researchers have expressed increased interest in examining how certain family processes play out in the presence of siblings. For instance, one emerging line of research revolves around the degree to which adolescent brothers and sisters are treated similarly or dissimilarly by their parents. Other work focuses on the degree to
which certain processes that are contained within one part of the family system, such as the parent–adolescent relationship, are replicated in other dyads such as the sibling–sibling relationship. Many researchers are examining issues that concern adolescents in stepfamilies (including stepsibling influences), adolescents with same-sex parents, and the role of grandparents and other extended family networks in the lives of adolescents.

Studying Adolescence
There has been a considerable rise in the number and types of longitudinal studies that have included the use of large and nationally representative databases containing information about families with adolescents. The field has also witnessed the development of more sophisticated statistical methods for dealing with the complexities of these databases on dyads and larger systems. While the vast majority of these studies are quantitative in nature, the literature is witnessing an upsurge in the publication of studies that are qualitative in nature. Excellent examples of more qualitatively based work include studies on parent-adolescent communication about sex, adolescent-to-parent abuse, the fatherhood experiences of violent inner-city youth, father–daughter relationships in low-income minority families, and family dynamics in immigrant families.

Researchers have made sizable empirical gains in studying adolescents and their families, yet the literature suffers from an overreliance on the adolescent's perspective, and whenever a parent's perspective is utilized in a study, more often than not it is that of the mother. Although methodological limitations of this nature exist throughout the field, the continued reliance on adolescent and mother perspectives in adolescence research is particularly problematic because studies repeatedly have demonstrated the salience of fathers and siblings.
Hence, most researchers believe that future research efforts should incorporate multiple family-member perspectives wherever possible.

Stephen M. Gavazzi
Ohio State University at Mansfield

See Also: Adolescent and Teen Rebellion; Delinquency; Emerging Adulthood; Family Counseling: Parenting; Parenting Styles; Parents as Teachers.


Further Readings
Crosnoe, R. and S. E. Cavanagh. "Families With Children and Adolescents: A Review, Critique, and Future Agenda." Journal of Marriage and Family, v.72 (2010).
Gavazzi, S. M. Families With Adolescents: Bridging the Gaps Between Theory, Research and Practice. New York: Springer Press, 2011.
Peterson, G. W. "Family Influences on Adolescent Development." In Handbook of Adolescent Behavior Problems, T. P. Gullotta and G. Adams, eds. New York: Springer, 2005.
Steinberg, L. Adolescence. 9th ed. New York: McGraw-Hill, 2011.

Adolescent and Teen Rebellion

In the public eye, adolescence is typically thought of as a time of "storm and stress" that manifests itself in numerous rebellious activities. In contrast, the research literature indicates that a turbulent adolescence is usually preceded by an unsettled childhood. Therefore, rebellion is not caused by movement into adolescence. Rather, rebellion in adolescence is simply more conspicuous. Researchers have studied the focal points surrounding the rebellious adolescent—conflict and problem behavior—and have developed family strengthening initiatives to help family members reduce the likelihood and impact of these difficulties. Conflict involving adolescents can take many forms, and parent–adolescent disagreement and other family functioning variables have been directly associated with a number of adolescent problem behaviors, including delinquency, mental health issues, substance abuse, sexual activity, and poor academic performance.

Family Conflict
Many family researchers and therapists believe that the most powerful predictor of adolescent adjustment is the amount of disagreement that occurs between parents and their teenage sons and daughters. Other scholars point to the impact that marital conflict has on adolescent development and well-being. Sometimes, adolescents are pulled into the


conflicts that erupt between two other family members such as the parents, a process known as triangulation. Researchers have generated a significant body of evidence regarding the deleterious effects on adolescents who become caught in the middle of parental conflict, especially with regard to an increased susceptibility to depression, anxiety, and other internalizing problem behaviors. This spillover between marital conflict and parent-adolescent relationships has also been related to greater levels of adolescent risky behaviors.

There is a growing emphasis on the ways that conflict-oriented interaction patterns are replicated across other family subsystems, most prominently among relationships between siblings. For instance, greater amounts of negativity and relational aggression between siblings—or those actions taken to harm one another's social relationships—are significantly related to greater levels of parental hostility and intrusiveness. In turn, it is widely acknowledged that siblings play a significant role in the development of delinquent and antisocial behavior. In particular, greater amounts of sibling conflict and more sibling participation in deviant behaviors ("partners in crime") are strong predictors of delinquency, in addition to other family factors, such as lower amounts of parental monitoring and inconsistent and coercive discipline methods.

Mental Health and Rebellion
The literature outlines a variety of ways that family factors are associated with the mental health status of adolescents. Parents' mental health issues leave adolescents more vulnerable to suffering from psychological distress, although high-quality relationships between parents and adolescents function as a protective factor against the development of mental health issues. In turn, disrupted family processes such as incongruent parent–adolescent communication and poor family problem-solving skills have been associated with greater adolescent mental health concerns.
Overall levels of family conflict, and interparental conflict specifically, also are thought to play a critical role in the development of psychological distress in adolescents. Other researchers have taken a more global view of adolescent mental health by examining both internalizing and externalizing syndromes. Here, difficulties that are experienced "inside" of the adolescent (depression and anxiety are the two
most common examples) are compared with those problem behaviors that are experienced "outside" of the adolescent (aggression, conduct disorders, and other instances of acting out). In general, studies support the observation that females are more likely to display internalizing behaviors, whereas males are more likely to display externalizing behaviors. At the same time, however, disrupted family processes such as greater conflict levels, lack of monitoring/supervision, and inconsistent discipline strategies all have been found to mediate the impact of gender on both internalizing and externalizing behaviors.

The Family Environment
The family environment is also a known predictor of adolescent use of alcohol and other substances. Similar to the literature on mental health issues, there is an intergenerational nature to substance use such that parent substance abuse is highly related to adolescent use and abuse. Clearly, family factors also can serve as both risk factors and protective elements regarding adolescent substance abuse, especially when combined with assessment of peer influences. Although the amount of deviant behavior displayed by peers is a significant predictor of adolescent substance use, family factors such as parental monitoring and supervision have been shown to play an important role in buffering these peer influences. Here the significant impact of siblings in terms of a "contagion effect" is seen, whereby greater amounts of contact with substance-using siblings and their friends are strongly associated with increased adolescent substance use.

Adolescent sexual activity, especially the timing of first sexual intercourse, has also been related to many of the same parental and family factors already discussed. Key predictors include the amount of parental monitoring and supervision of the adolescent's behavior and whereabouts, as well as the sexual experiences of older siblings.
Another area of study concerns unsafe sexual practices, pregnancy, and teen parenthood. Here again, poor parent-adolescent relationships and siblings who model unhealthy sexual behaviors decrease the likelihood of adolescent contraception use and other safe-sex methods.

A growing number of studies have documented the impact of parental and family factors on a variety of adolescent educational issues. For instance, lower parental involvement and greater
hostility levels between parents and adolescents have been associated both with lower grade point averages and other negative changes in academic performance. Although adolescents residing with never-divorced parents typically fare best in terms of educational outcomes, studies that also include indicators of parent-adolescent disagreement report that variable as the strongest overall predictor of grade point average. This highlights the complex interplay of family structure and family processes. While interparental conflict is also routinely associated with poor grades, greater amounts of parental acceptance and monitoring behaviors have been shown to serve as buffers to this spillover effect. While many studies have documented the influence of peers on variables such as being held back, being suspended or expelled, skipping classes, and homework trouble, family factors related to parent-adolescent relationship quality are reported to be much stronger predictors of these academic difficulties.

Reducing Adolescent Rebellion
A variety of approaches and labels exist within the family-based prevention field that seek to strengthen families through enrichment, support, and skill-building activities, thus reducing the likelihood of rebellious adolescent behavior. Studies indicate that the most effective family strengthening programs share a number of characteristics, including targeted attention to skill development in critical areas associated with parental monitoring and supervision, parent–adolescent communication, and family cohesion. Programs also need to run for a sufficient length of time and at an intensity level that permits solidification of learned skills, concentrating on culturally specific issues in terms of content, recruitment, and retention efforts.
These programs should use explicit theoretical principles, offer activities for family members that are developmentally appropriate, use the latest methodological advances in research and evaluation efforts, and closely attend to fidelity issues through the use of well-trained professionals in a manual-driven format. Professionals helping families to prevent or ameliorate the impact of rebellious adolescent behavior must move beyond the idea that implementing activities is something that is done to families. Instead, professionals should provide services for
families and with families. The family empowerment movement focuses attention on how family members can experience a sense of control over their lives and the situations that they are facing, thereby giving family members "voice and choice" in the types and amounts of services that they receive.

Stephen M. Gavazzi
Ohio State University

See Also: Adolescence; Family Counseling: Family Therapy; Parenting Styles; Parents as Teachers.

Further Readings
Gavazzi, S. M. Families With Adolescents: Bridging the Gaps Between Theory, Research and Practice. New York: Springer Press, 2011.
Kumpfer, K. L. and R. Alvarado. Effective Family Strengthening Interventions. Washington, DC: U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention, 1998.
Small, S. A., S. Cooney, and C. O'Connor. "Evidence-Based Program Improvement: Using Principles of Effectiveness to Enhance the Quality and Impact of Youth and Family-Based Prevention Programs." Family Relations, v.58 (2009).
Steinberg, L. Adolescence. 9th ed. New York: McGraw-Hill, 2011.

Adolescent Pregnancy

Adolescent pregnancy is not a new phenomenon, but societal changes in recent decades have contributed to it being deemed a crisis. Adolescent birth rates were almost twice as high in the mid-20th century as they are today, but most births at that time were to married couples. Also, in the past, there were fewer negative consequences associated with becoming parents at such a young age. Today, adolescent parenting is linked to higher dropout rates, less likelihood of pursuing higher education, lower career aspirations, and a greater risk of poverty. Although adolescent birth rates have declined among most ethnic groups in recent years, the United States has the highest


rate among Western industrialized nations. Contributing to this high adolescent pregnancy rate is the number of adolescents engaging in risky sexual behaviors. By the time U.S. adolescents have graduated from high school, nearly two-thirds of them will have had sexual intercourse. Researchers have found that adolescents who engage in sexual behavior at earlier ages have more lifetime sexual partners and a greater likelihood of having an unintended pregnancy. Data on adolescent pregnancy indicate that one out of six adolescent females is expected to become a teen mother. According to data from the 2010s, 750,000 young women under the age of 20 in the United States become pregnant each year. The central aim of most of the extant literature on adolescent pregnancy addresses the consequences of experiencing an early pregnancy, while other research focuses on parental and peer influences on adolescent pregnancy.

Consequences of Teen Parenthood
Adolescent parents must simultaneously negotiate the tasks of multiple developmental stages, and often the tasks of parenthood conflict with the teen mother's identity as an adolescent. Research shows that unintended pregnancies can be very stressful for adolescents because of their lack of readiness for parenthood, disruption to their schooling and life plans, and the abrupt financial burden associated with the costs of providing for a child, which places them at greater risk of experiencing poverty. Many adolescent mothers remain unmarried, poor, and dependent on public assistance for an extended period of time. National figures indicate that 79 percent of teenage parents are not married. Moreover, adolescent mothers have increased psychological problems. Research suggests that adolescent mothers are more likely to feel less competent as parents, more likely to suffer from depression, and more likely to have low self-esteem. Research on adolescent fathers' experiences is comparatively scarce.
However, available research suggests that many young fathers had absent fathers themselves. A lack of available positive role models can lead to feelings of insecurity and to uncertainty about providing for their own children. Although many adolescent fathers want to have an active role in their children's lives, many need social support and direction to help them in their roles.

Consequences of Teen Parenthood on Offspring
In addition to the parenting consequences that arise from adolescent pregnancy, research has also focused on the consequences of teenage pregnancy for the children of teen parents. Decades of research have demonstrated that children of adolescent mothers do not fare as well as those of adult mothers. For example, the adolescent children of mothers who gave birth as teenagers have been found to be more likely to participate in risky behaviors, including drug use and gang involvement, and are more likely to become adolescent parents themselves. Children of a teenage parent also have been reported to have increased risks of developmental delay, academic difficulties, behavioral disorders, early sexual activity, and depression.

Adolescent Pregnancy Rates Among Ethnic Groups
Despite the decline in adolescent pregnancy rates, there are differences in adolescent sexual experiences as a function of race and ethnicity. Disadvantaged, African American, and Hispanic youth have the highest rates of adolescent pregnancy. Research indicates that African American and Hispanic adolescents are more likely to be sexually active, initiate sexual activity earlier, and have more sexual partners than white adolescents. Researchers have found that disadvantaged adolescents of minority groups were more likely than their white counterparts to hold positive attitudes toward early sexual behavior, especially those in urban, high-poverty neighborhoods in the United States. It is important to note that more than 85 percent of pregnancies of individuals under the age of 20 are unplanned.

Parental and Peer Influences on Adolescent Pregnancy
An important intervening factor in the lives of minority and economically disadvantaged youth is the quality of their primary caregivers' parenting. Parenting strategies experienced during early adolescence have long-term implications for adolescents' eventual sexual activity and early pregnancy. Behavioral control is a parenting behavior that is associated with less engagement in risky sexual behavior and adolescent pregnancy. Parental behavioral control involves managing adolescent
behavior and activities in an attempt to regulate adolescents' behaviors, which includes parental monitoring. Parental involvement in and knowledge of their children's day-to-day activities have been found to be associated with reduced rates of adolescent pregnancy, partially because high parental monitoring limits girls' deviant peer affiliations. Parental warmth and support have also been found to be important determinants of whether an adolescent engages in risky sexual behaviors that may lead to early pregnancy. Researchers have found that family warmth during early adolescence is correlated with fewer sexual partners across gender and racial/ethnic groups during late adolescence and with increased contraceptive use. Similarly, researchers have also suggested that parental support is related to a later onset of adolescent intercourse. Other researchers have examined levels of parent–child closeness and found that high levels of parent–child closeness were associated with decreases in the risk of adolescent pregnancy. In contrast, researchers have found that a lack of closeness in the parent–teen relationship increases the negative influence of peers on adolescent sexual activity.

Peer Influences on Adolescent Pregnancy
Although parents maintain a greater influence on their children than peers do, peer group influence becomes increasingly important during adolescence. The influence of peers on adolescent pregnancy is well documented in the literature. Peer norms have been found to shape adolescents' sexual attitudes and behaviors. Results of a national survey indicated that adolescent girls had an increased risk of pregnancy when they had friends who were sexually active or pregnant. Other research shows that adolescent parenthood occurs at higher rates among those who show problem behaviors, associate with antisocial peers, or use alcohol or drugs.
Overall, families and peers serve as significant socializing contexts for the emergence of risky behaviors and early pregnancy; however, exposure to quality parenting decreases the likelihood that adolescents will associate with a negative peer group and engage in risky sexual behaviors, thereby decreasing the risk of an early pregnancy.

Donna Hancock Hoskins
Bridgewater College


See Also: Adolescence; Adolescent and Teen Rebellion; Delinquency; Parenting Supervision; Parenting.

Further Readings
Centers for Disease Control and Prevention (CDC). YRBSS National Youth Risk Behavior Survey: 2005 Health Risk Behaviors by Race/Ethnicity. Atlanta: CDC, 2009.
Guttmacher Institute. U.S. Teenage Pregnancies, Births, and Abortions: National and State Trends and Trends by Race and Ethnicity. Washington, DC: Guttmacher Institute, 2010.
Kogan, Steve, Leslie Gordon Simons, Yen Chen, Stephanie Burwell, and Gene Brody. "Protective Parenting, Relationship Power Equity, and Condom Use Among Rural African American Emerging Adult Women." Family Relations, v.62 (2013).
Scaramella, Laura, Rand Conger, Ronald Simons, and L. Whitbeck. "Predicting Risk for Pregnancy by Late Adolescence: A Social Contextual Perspective." Developmental Psychology, v.34 (1998).
Simons, Leslie Gordon and Rand Conger. "Linking Gender Differences in Parenting to a Typology of Family Parenting Styles and Adolescent Developmental Outcomes." Journal of Family Issues, v.28 (2007).
Simons, Leslie Gordon, Callie Burt, and Rachel Tambling. "Identifying Mediators for the Influence of Family Factors on Risky Sexual Behavior." Journal of Child and Family Studies, v.22 (2012).
Wallace, Scyatta, Kim Miller, and Rex Forehand. "Perceived Peer Norms and Sexual Intentions Among African American Preadolescents." AIDS Education and Prevention, v.20 (2008).

Adoption, Closed

For adoptive parents, adoption is a socially accepted alternative to having biological children; for birth parents, it is an alternative to parenting. This acceptance is due partly to changing social norms and to recent U.S. legislation promoting adoption. However, adoption rates have declined since the 1970s. As of the early 21st century, around 127,000 children are adopted annually in the United States, for a total of approximately 5 million adoptees living in the country. Each adoption involves the same three individuals or groups—the adopted


child, the birth family, and the adoptive family—but the process varies in terms of contact and communication among the participants. For many decades, "closed" adoptions were mandated by law. In closed adoptions, the exchange of information between the birth and adoptive families either never transpired or stopped with the adoptive placement agency. Although adoption practices have recently shifted toward openness, there remain circumstances in which closed adoptions are appropriate. Many adults who were adopted as children experienced closed adoptions and grew up without contact with their birth families. Prior to the 20th century, adoptions tended to be open; beginning in 1917, legislation mandated sealed records, and adoptions thus became "closed." These closed adoptions involved adoption workers matching adoptees with adoptive parents, without any contact between birth and adoptive families. This was perceived to be in the best interests of adopted children, the reasoning being that they would experience less confusion than they would by interacting with two sets of parents. Less confusion, it was supposed, would facilitate healthier identity development. The closed process also protected adoptees from feeling different from their peers and from the social stigma associated with illegitimacy. Often, adoptions remained secretive and were not disclosed to adoptees, to prevent conflicts of loyalty that could hinder adoptees from forming a connection to their adoptive family. Furthermore, closed adoptions were perceived as beneficial for both birth and adoptive parents because ongoing contact would require navigating roles lacking defined societal norms and expectations. Closed adoptions also allowed birth parents to keep an unwanted pregnancy private and to avoid any stigma associated with the situation.
Telling Adoptees
In the era of closed adoption, authorities debated whether adoptees should be told that they were adopted and, if so, when telling them would be developmentally appropriate. Although adoption workers typically suggested that adoptive parents tell their child early in life, reports from adult adoptees suggest that they were often not told until adolescence or adulthood. Many discovered on their own that they were adopted. Parents reported anxiety and fear

around disclosing the adoption; however, telling adoptees early is suggested as a way to combat any negative stigma around adoption and to ensure the development of healthy self-esteem. Adoptive parents were instructed to emphasize that their children were "chosen," and that the birth parents' decision to relinquish them was selfless and loving. Although intended to convince adoptees that they were normal, the effort surrounding this revelation often paradoxically conveyed their difference.

Search for Birth Parents
In closed adoptions, 30 to 60 percent of adoptees express a desire to search for their birth parents as children, and approximately 55 percent of adult adoptees actually initiate a search. Advances in technology, namely the Internet, and changes in laws give many adoptees access to their sealed birth and adoption records, making the search for birth parents more feasible. Searching for birth parents was once considered a sign of adoptee maladjustment or mental illness and was blamed on poor parenting. However, it has become more common in the United States and is now perceived as normal and adaptive. In fact, the stigma has shifted to the other extreme, as many proponents of open adoption view adoptees' disinterest in such a search and reunion as maladaptive and unhealthy. Adoptees report searching for birth parents for many reasons, including curiosity, wanting a sense of belonging, seeking medical information, developing a personal identity, and exploring genealogy. Out of loyalty to their adoptive parents, many adoptees report that they do not want to search for their birth parents until "the time is right"; others delay because they fear rejection. Searching for birth parents is more common among women than men and is often triggered by life transitions, such as pregnancy, marriage, or the death of an adoptive parent.
Shift Toward Openness in Adoption
Although closed adoptions were once assumed to simplify the experience of adoption for all concerned, the challenges around secrecy and searching for birth families influenced a shift toward open adoption in the United States. Adoptions increasingly involve arrangements on the open end of the continuum. Fully open adoptions generally involve information sharing and ongoing contact and communication among members of both the adoptive




and birth families. This shift is also related to decreasing stigma around nonmarital pregnancy, which historically caused many single mothers to feel pressured into placing their child for adoption in secret and to relinquish all future involvement in the child's life. Adoption agencies report that the frequency of open adoptions dramatically increased in the late 1980s, primarily due to fewer birth parent placements in general and more birth parent requests for ongoing contact. Adoptive parents who sought closed adoptions therefore had more difficulty identifying birth parents who would agree to a closed arrangement. As of 2013, around 5 percent of domestic, voluntary adoptions involve fully closed arrangements. Research suggests that open adoptions may be beneficial to all those involved in the adoption because they provide more opportunity for contact. However, the nature of contact is important, and contact is most beneficial when it is combined with open communication about the adoption. Communication both within the adoptive family and between the birth and adoptive families is associated with healthier identity development among adoptees. The ideal degree of openness varies for each adoption. Especially when children are adopted from the care of their birth parents, generally due to involvement in the child welfare system, ongoing contact may not be healthy. When children are removed from parental care due to abuse or neglect, contact between adoptees and birth families can be distressing for children and may expose them to further abuse. In such cases, closed arrangements may be in the best interest of the adoptees.

Amy M. Claridge
Florida State University

See Also: Adoption, International; Adoption, Mixed-Race; Adoption, Open; Adoption, Single People and; Adoption Laws.

Further Readings
Brodzinsky, David and Jesús Palacios, eds. Psychological Issues in Adoption: Research and Practice. Westport, CT: Praeger, 2005.
Grotevant, Harold and Ruth McRoy. Openness in Adoption: Exploring Family Connection. Thousand Oaks, CA: Sage, 1998.


Pertman, Adam. Adoption Nation: How the Adoption Revolution Is Transforming Our Families and America. Boston: Harvard Common Press, 2011.

Adoption, Grandparents and

Becoming a grandparent is a typical life event for many Americans, and grandparenthood is generally regarded as enjoyable and satisfying. Historically, it has not been uncommon for grandparents, particularly grandmothers, to provide care for grandchildren. Multiple macrolevel influences, including advancements in medical technology and the economic recession of the late 2000s, have contributed to a growing number of intergenerational households, where grandparents are more likely to be providing regular and routine care for their grandchildren. During the 1990s, the number of children in grandparent-headed households without either parent present increased by more than 50 percent. As of the mid-2010s, more than 2.5 million grandparents live with and are responsible for the basic needs of one or more grandchildren under the age of 18. Grandparent care is most frequently experienced in African American households; however, rates in Caucasian households have risen faster than those of other racial and ethnic groups since 2000. There are many reasons why a grandparent could become the primary caregiver for his or her grandchild. Commonly, grandparents step in when parents are unwilling or unable to care for their children, for example, in cases of abandonment, death, chronic mental or physical illness, substance abuse, neglect, maltreatment, incarceration, or impoverishment. Grandparents report feeling called to parent their grandchildren out of love and commitment to maintaining family connections and in an effort to keep children out of the formal foster care system. Research highlights the importance of kinship relationships and kinship care for children who experience early adversity; grandparents become important protective forces for children who are at risk of neglect, maltreatment, and severed familial ties. Adoption is one of several options for grandparents who transition into the role of parent to


their grandchildren, but it is the least common. Some grandparents assume legal responsibility for their grandchildren with the initial hope that the arrangement will be temporary and that their adult children will be able to take over again at some point. Other more common options are transference of guardianship and acquisition of legal custody, both of which give grandparents legal parental rights. In order to formally adopt a grandchild, a grandparent is required to take the child's parents (one of whom is the grandparent's own child) to court. Family court proceedings can be lengthy and may add additional financial and emotional stress to the family, particularly when grandparents and the court systems are called on to make decisions about the fitness of the grandparents' adult children to parent. Some of the broader family and social issues associated with grandparent adoption relate to the general well-being, health, and social support of an aging caregiving population. When an aging grandparent, who may be on a fixed income, takes on the additional (and sometimes unexpected) burden of caring for children, financial planning and support become critical. Approximately 20 percent of grandparents raising grandchildren have an income that is below the designated federal poverty level; among these families, where a parent of the grandchild is not present in the home, the median income is $33,000. Many grandparents, especially those who do not formally file for adoption, find themselves unable to include their grandchildren on family health insurance plans or to obtain affordable housing with more space to accommodate the grandchildren. While grandparent adoption, guardianship, custody, and care may serve as an important protective factor for children who are at risk, it can actually become a health risk factor for the grandparents.
Research indicates that custodial grandparenting is associated with increased stress that negatively affects grandparents’ sense of well-being and is associated with increased rates of depression. Research further indicates that assuming custody of grandchildren can also be detrimental to an aging person’s physical health. Custodial grandparents report having worse health and less social functioning than noncustodial grandparents and are more likely to experience losses in physical functioning and more physical pain. Social support is particularly important for grandparents who care for children in later life. While this

social phenomenon is steadily increasing, it is still regarded as an out-of-sequence life event, an unexpected family experience. About half of grandparents in this situation also experience increased isolation from friends and people in their age group, relationships that would otherwise help reduce stress and support their well-being. Additionally, many custodial grandparents find the responsibilities and challenges of parenting in the 21st century very different from when they were parenting their own children. Their previous experiences may not be applicable to the myriad challenges facing parents with young children in the new millennium. While multiple mental, physical, and social consequences have been associated with grandparenthood, there are also many ways that custodial grandparenting can positively contribute to an aging individual's life. Relationships with grandchildren bring joy, happiness, an increased sense of life satisfaction, and companionship. Moreover, many grandparents report having made the transition to custodial grandparenthood successfully. Custodial grandparents who self-identify as successful are able to modify their expectations and goals to fit their new roles and responsibilities. Successful custodial grandparents are those who are well informed about the social services that could support them, and they take advantage of those resources. They also are more intentional when planning family time, setting aside opportunities for recreation and staying connected with friends. Arranging periodic respite from caregiving helps to alleviate stress and increases well-being. Additionally, successful custodial grandparents are those who spend quality time together as a family. Quality time is fundamental for successful grandfamilies because it influences all other characteristics of healthy family life, increasing the quality of communication, interaction, and emotional support across generations.
Bethany Willis Hepp
Towson University

See Also: Caregiver Burden; Grandparenting; Later-Life Families; Multigenerational Households.

Further Readings
Child Welfare Information Gateway. "Grandparents Raising Grandchildren." https://www.childwelfare.gov/preventing/supporting/resources/grandparents.cfm (Accessed November 2013).
De Toledo, S. and D. E. Brown. Grandparents as Parents: A Survival Guide for Raising a Second Family, 2nd ed. New York: Guilford, 2013.
Neely-Barnes, S. L., C. Graff, and G. Washington. "The Health-Related Quality of Life of Custodial Grandparents." Health and Social Work, v.35 (2010).
Strom, P. S. and R. D. Strom. "Grandparent Education: Raising Grandchildren." Educational Gerontology, v.37 (2011).

Adoption, International

Adoption is not always domestic; it can also be international, or what is sometimes called intercountry. International adoption occurs when a family from one country adopts a child from a different country through permanent legal means. Like domestic adoption, international adoption is a legal act insofar as the biological parent(s) transfer their custodial and parental rights to the adoptive parent(s). In 2004, U.S. families adopted 23,000 children from foreign nations, but since then, the number has sharply declined. In 2012, about 8,600 international adoptions were completed that placed children with U.S. parents. The Hague Convention on the Protection of Children and Co-operation in Respect of Intercountry Adoption (usually referred to simply as the Hague Adoption Convention) is an international agreement that safeguards intercountry adoptions. It outlines procedures and policies designed to protect not only birth families but also adoptive families.

Boys Versus Girls
Girls make up about 64 percent of all children internationally adopted by Americans. What explains this gender differential? Might it be that sending countries have a large number of female children ready to be adopted? For example, China has reported that about 95 percent of children available for adoption in its country are girls. It seems, however, that this is not a significant contributor to adoptive parents' gender preference. Despite the large supply of girls from countries such as China, research indicates that adoptive parents themselves favor female over male children. Indeed, this gender preference cuts across all races, socioeconomic statuses, and ages. Research has also documented that adoptive parents in heterosexual relationships are more likely to prefer girls than individuals in same-gender relationships.

Ethnic Identity Development Among Transnational Adoptees
International adoption frequently results in transcultural and transracial adoptive families, in which the adoptive parent or parents are from one culture or race and the adopted child is from another. This causes some counselors and postadoption service workers concern about the ethnic identity development of adoptees. Research has established the importance of adoptive parents being culturally sensitive and willing to address issues of identity with their adopted child. In the middle decades of the 20th century, adoption counselors and adoption experts suggested that adoptive parents avoid talking about issues of race, culture, and identity with their children. These adoption experts thought that being colorblind was the best approach to raising a child who was culturally or racially different from his or her adoptive parent(s). Adoption experts now believe that colorblind parenting is a highly problematic way of socializing adoptees, and they advocate for more culturally and racially sensitive parenting. Irrespective of adoptive parenting technique (colorblind or race conscious), research confirms that transnationally adopted individuals sometimes experience ethnic identity challenges, especially if they grow up in geographic areas in which they are members of a racial or ethnic minority group, which is often the case for international adoptees.

Adolescent Adoptees
A common source of conflict for adolescent adoptees is their sense of grief and abandonment by their birth parents. Sometimes, adolescent adoptees, hoping to become more assimilated and accepted by the dominant cultural group in which they now live, feel the need to reject their cultural heritage. Rejection of this heritage places such adoptees in danger of internalized racism. Social scientific and adoption research substantiates


that sometimes adolescent adoptees have identity issues that can lead to behaviors that place them physically and psychologically at risk. However, mitigating risk factors is possible. There is ample evidence in the literature to suggest that when adoptees preserve their birth culture and heritage language and are supported in searching for their biological parents, they are happier and less apt to develop depression than those who deny their cultural heritage. It is important to stress that while adopted adolescents have the same trouble finding a comfortable identity as other adolescents, international adoptees face unique challenges that follow them into their adult lives.

International Adoptees as Adults
Adult adoptees face challenges that can have long and lasting impacts on their lives and the lives of their adoptive families. This is especially true if certain issues were ignored during their adolescence. Possible life challenges may include unresolved grief, a sense of loss, low self-esteem, depression, substance abuse, and fear of abandonment, among others. Notably, international adult adoptees' status as adoptees affects the way that they parent their own children. For instance, adult adoptees may want to search for their biological parents so they can learn more about their medical history; this is especially true if an adoptee wants to have biological children of his or her own. Adult adoptees may also search for biological parents as a coping mechanism: instead of attending professional counseling sessions, they keep themselves busy to avoid confronting their feelings.

Nicholas D. Hartlep
Illinois State University

See Also: Adoption, Mixed-Race; Adoption Laws; Multi-Racial Families.

Further Readings
Bartholet, Elizabeth. "International Adoption: Current Status and Future Prospects." Adoption, v.3 (1993).
Bartholet, Elizabeth. "International Adoption." In Children and Youth in Adoption, Orphanages, and Foster Care, Lori Askeland, ed. Westport, CT: Greenwood Press, 2005.
Bledsoe, Julia and Brian Johnson. "Preparing Families for International Adoption." Pediatrics in Review, v.25 (2004).
Levy-Shiff, Rachel, Naomi Zoran, and Shmuel Shulman. "International and Domestic Adoption: Child, Parents, and Family Adjustment." International Journal of Behavioral Development, v.20 (1997).

Adoption, Lesbian, Gay, Bisexual, and Transgender People and

The American Psychological Association has supported adoption by same-sex couples, citing social prejudice as harming the psychological health of lesbians and gays, while noting that there is no evidence that their parenting causes harm.

Adoption by lesbian, gay, and bisexual (LGB) people (i.e., sexual minorities) has increased over the past several decades. However, no research has examined rates or experiences of adoption by transgender people. Sexual minorities who seek to become parents may consider reproductive technologies (artificial insemination or surrogacy) or adoption as a means of building their families. Sexual minorities who pursue adoption may choose among international adoption, public domestic adoption (through the child welfare system), and private domestic adoption (in which birth parents and adoptive parents are matched through an adoption agency). Sexual minorities may choose private domestic open adoption because they are attracted to the idea of maintaining contact with birth parents, or being able to provide their child with information about their birth parents, or because of the greater likelihood of adopting an infant compared to international or public adoption. Sexual minorities may select international adoption to avoid the long wait associated with the domestic private adoptions of infants, or because they suspect that birth mothers would be unlikely to choose gay adoptive parents. Same-sex couples who pursue international adoption must weigh such considerations against the reality that if they choose to adopt internationally, some countries will not allow them to adopt as a couple, and they might have to closet their relationship (at this time, no country allows same-sex couples to adopt; thus, some couples choose one partner to pose as a single parent), which can create intra- and interpersonal tension. Finally, sexual minorities who

seek to adopt through the child welfare system are typically motivated in part by financial or altruistic considerations. Sexual minorities may also believe that they have the best chance of adopting via the child welfare system, in that the number of children in foster care exceeds the number of heterosexual prospective adoptive parents. While it is true that LGB people may be welcomed by some child welfare workers and social service agencies, some sexual minorities report insensitive practices by child welfare workers.

The Transition to Adoptive Parenthood
Some research has examined the transition to adoptive parenthood for sexual minorities, and for same-sex couples specifically. This research suggests that, like heterosexual biological-parent couples, same-sex adoptive couples experience declines in their mental health and relationship quality across the transition. Factors that appear to buffer against poor mental health across the transition include support from family, friends, and coworkers, living


in a gay-friendly neighborhood, and living in a state with pro-gay laws pertaining to adoption. Research on the transition to adoptive parenthood by sexual minorities shows that certain subgroups of parents may experience particular challenges. Many LGB people adopt transracially (i.e., they adopt children of a different race). These families may face additional challenges related to their multiply stigmatized and visible family structure, in that these families are vulnerable to the stresses associated with both heterosexism and racism. Same-sex couples who adopt through the child welfare system also encounter unique challenges. They often adopt children who are older and/or who have behavioral or attachment difficulties, which may cause strain to their relationships. Furthermore, parents who seek to adopt through the welfare system usually foster their children before they can legally adopt them, and the legal insecurity of such placements has been found to impact sexual minority adoptive parents' well-being and attachment to their children.

Parent and Child Functioning
Some research has explored the well-being of sexual minority adoptive parents. These studies have found similar levels of parenting stress in LG and heterosexual adoptive parents. Aspects associated with less parenting stress for sexual minority parents include adopting babies or toddlers as opposed to older children; having children with few behavioral difficulties; having less depression before becoming a parent; and having a strong social support network. Likewise, research on children adopted via foster care by LG and heterosexual parents found no differences in family functioning as a function of parental sexual orientation. Parents who adopted younger children, and parents who adopted nondisabled children, report higher family functioning.
Regarding parent–child relationships, children adopted by LG parents show similar levels of attachment to their parents as children adopted by heterosexual parents. They also show similar emotional and behavioral functioning compared to children adopted by heterosexual parents. Furthermore, children adopted via the child welfare system by both LG and heterosexual parents have been found to show significant gains in cognitive development and exhibit similar levels of behavior problems over time, despite the fact that LG parents


tend to raise children with higher levels of biological and environmental risks prior to placement. Studies have shown that children adopted by LG adoptive parents demonstrate normal gender development. However, research also suggests that the adopted children of same-sex parents may be less stereotyped in their play behaviors than children of heterosexual parents. This may be regarded as a strength because different types of toys and play facilitate different types of skill building.

Unique Challenges and Strengths in LG Adoptive Parent Families
Despite their positive outcomes, sexual minority adoptive parents and their children may confront a variety of challenges, including legal ones. Same-sex couples, for example, may live in states that do not allow same-sex partners to co-adopt. These couples must select one partner to perform the official adoption as a single parent, resulting in a situation in which the child has only one legal parent. In about half of U.S. states as of 2013, the as-yet nonlegal partner may complete a second-parent adoption, thus enabling the child to have two legal parents. Furthermore, LGBT adoptive families may face discrimination within their communities, workplaces, and the school system. While social support may help ameliorate the negative effects of these challenges, sexual minority parents have been found to receive less support from their families of origin in general and with regard to parenting, often because of their sexual orientation. Although sexual minority adoptive parents encounter unique barriers, they also display distinct strengths. First, because they often must create families of choice (i.e., supportive communities that do not rely on biological ties), they have been found to possess more expansive notions of family, and thus may be open to adoption as a first choice.
Similarly, they may be less threatened by relationships between the child and the birth parents, and therefore more accepting of open adoption arrangements. Furthermore, because they must work through unique challenges, same-sex parents may develop more resilience, and therefore may be better equipped to handle challenging parenting situations, such as special needs placements or transracial adoptions. Finally, because LG adoptive parents must go through a lengthy

process to become parents, they may approach parenting more intentionally than their heterosexual counterparts.

Abbie E. Goldberg
Lori A. Kinkler
Clark University

See Also: Adoption, Mixed-Race; Adoption, Open; Adoption, Second Parents and; Adoption Laws; Gay and Lesbian Marriage Laws; Parenting.

Further Readings
Farr, R. H., S. L. Forssell, and C. J. Patterson. "Parenting and Child Development in Adoptive Families: Does Parental Sexual Orientation Matter?" Applied Developmental Science, v.10 (2010).
Gates, G., M. V. L. Badgett, J. E. Macomber, and K. Chambers. Adoption and Foster Care by Gay and Lesbian Parents in the United States. Washington, DC: The Urban Institute, 2007.
Goldberg, A. E. Lesbian and Gay Parents and Their Children: Research on the Family Life Cycle. Washington, DC: APA, 2010.

Adoption, Mixed-Race

Mixed-race, or transracial, adoption occurs when adults adopt a child from a different racial background than their own. In the 20th and 21st centuries in the United States, this has usually meant white adults adopting nonwhite children. While transracial adoptions have taken place in the United States since the 1940s, they became more prominent with the increase in international adoptions and the influence of civil rights reformers in the 1960s.

Origins
When formal permanent nonrelative adoption became more acceptable starting at the turn of the 20th century, most prospective adoptive parents wanted children who could "pass" as their biological kin. Because those most inclined to formally adopt a child were middle or upper class and white, this meant that the ideal adoptable child was a white infant or toddler. Social welfare practitioners reinforced these norms, especially with the rise of child



development theories in the 1920s that prioritized environment over heredity. Such theories provided social workers with a scientific basis to scrutinize adoptive placements. One way that they accomplished this was by matching a child to his or her new parents with regard to physical likeness, religious background, and social background, so that the adoptive family resembled the child's biological one. This was a specifically American construct, a type of social engineering similar to eugenics. The practice of matching ensured same-race adoptions and contributed to rising demand for white infants and toddlers. Social workers classified nonwhite children as "hard to place" and "unadoptable," so many grew up in institutions. The social norms in the first half of the 20th century that had made transracial adoptions virtually nonexistent gradually eroded in the post–World War II era. The baby boom made childless couples feel pressured to adopt. The supply of adoptable babies, which had always been limited, precipitously fell and drove some couples to consider adoption across racial lines. This was the case for Helen and Carl Doss, who starting in the mid-1940s adopted 12 children domestically, 10 of whom came from multiracial Asian, Native American, and Latino backgrounds. Helen Doss's 1952 bestselling memoir, The Family Nobody Wanted, popularized transracial adoption and made the extraordinary act of creating an interracial family seem more ordinary. Another early foray into domestic transracial adoption was the Indian Adoption Project, a U.S. government-funded program that ran from 1958 to 1967. This project placed 395 Indian children into white homes throughout the Midwest and the East Coast. Even more significant was the rise of international adoption. Indeed, "mass" transracial adoptions started during the 1940s, when several hundred military families began adopting mixed-race G.I. babies from Germany and Japan, whose fathers were members of the Allied military forces serving in World War II. Civilians also increasingly looked overseas to find adoptable children. The Nobel Prize–winning novelist Pearl Buck was one prominent early example, personally adopting mixed-race children from Asia and Europe, and becoming a spirited activist and organizer for transracial adoption. Opportunities to internationally adopt expanded in the aftermath of the Korean War, when South Korea struggled to care for thousands of mixed-race G.I.

children. After adopting six G.I. children, Harry and Bertha Holt, Oregon evangelicals and philanthropists, launched an adoption agency to bring mixed-race Korean children to U.S. families. From 1954 to 1963, the Holt Adoption Program placed thousands of mixed-race children in predominantly white families. As more couples internationally adopted, this challenged the long-held social welfare practice of racial and religious matching. Still, interracial families were outliers in the 1940s and 1950s, when most adoptive parents still wanted children who “looked like them” and would fit into the largely segregated neighborhoods and schools that existed nationwide. Even with the marginal acceptability of adopting an Asian, Latino, or Native American child, few white families adopted black children. The first documented formal adoption of a black child by a white couple took place in 1948, but such instances were rare, even for proponents of transracial adoption. For instance, the Holts almost exclusively placed black Korean children with African American families,

[Photo caption: Pearl Buck, the daughter of missionaries, spent most of her life in China. She was particularly well known for her efforts on behalf of Asian and mixed-race adoption.]

and the Dosses rejected the placement of a part-black child into their home. By the mid-1960s, reform efforts and civil rights laws contributed to the easing of social taboos against the adoption of black children by white families. Couples and single women adopted African American children for a variety of reasons, including their desire to take in a “hard-to-place” child, promote interracial unity, and improve race relations. At the same time, states such as Louisiana continued to ban interracial adoptions into the 1970s. In fact, the numbers reveal that such adoptions continued to be exceptional—prior to 1975, white couples adopted fewer than 12,000 black children nationwide.

Resistance to Mixed-Race Adoption

As early as 1969, African American professionals worried about the implications of interracial adoptions. These fears came to a head as transracial adoptions reached a peak in 1970; two years later, the National Association of Black Social Workers (NABSW) issued a provocative statement calling for an end to adoptions of black babies by white families. NABSW president Cenie J. Williams further shocked the social welfare establishment and adoptive parents by insisting that black children would be better off in institutions and foster placements than in the homes of white families. For some in the African American community, the rise in transracial adoptions symbolized not only the racism embedded in white, middle-class adoption culture, but also how the U.S. child welfare system failed to recruit and maintain black adoptive families. The NABSW statement temporarily slowed white couples’ adoption of black children. Most professional social welfare agencies, including the Child Welfare League of America, continued to balk at transracial placements in the 1970s, considering them risky for children in a nation still beholden to a black-white binary. Resistance to mixed-race adoptions also came from Native Americans in the late 1960s and early 1970s.
Indian tribes argued that the Indian Adoption Project was another example of U.S. cultural imperialism. In response to advocacy from tribal leaders, Congress passed the Indian Child Welfare Act in 1978. The act established that a child’s best interest could not be determined apart from his or her tribal heritage; this erected a significant legal barrier to future interracial adoptions of Indian children by non-Native families.

The 1990s and Beyond

Since the 1990s, debates over transracial adoptions have continued to shape the legislative and social landscape. In 1994, Congress passed the Multiethnic Placement Act (MEPA), which prohibited agencies that received federal funding from using children’s or parents’ racial backgrounds as criteria for denying adoptive placements. Two years later, the Interethnic Provisions of 1996 (MEPA-IEP) amended MEPA, making it easier for children from the foster care system to be placed in adoptive homes and offering a tax credit for adopting families. Some adoption scholars have applauded this legislation, contending that it offers children of color a greater chance of being adopted into permanent homes. Other scholars have argued that it undermines protections for minority birth parents and prioritizes the interests of privileged adoptive parents. Regardless, mixed-race adoption continues to inform the conversation over family formation, race, and identity in the United States.

Rachel Winslow
Westmont College

See Also: Adoption, International; Adoption, Single People and; Adoption Laws; African American Families; Asian American Families; Civil Rights Movement; Immigrant Children; Multiracial Families; Native American Families.

Further Readings
Briggs, Laura. Somebody’s Children: The Politics of Transracial and Transnational Adoption. Durham, NC: Duke University Press, 2012.
Herman, Ellen. Kinship by Design: A History of Adoption in the Modern United States. Chicago: University of Chicago Press, 2008.
McRoy, Ruth and Amy Griffin. “Transracial Adoption Policies and Practices: The U.S. Experience.” Adoption and Fostering, v.36 (2012).

Adoption, Open

Adoption is an accepted alternative for birth parents who choose not to raise a child and for parents who cannot or choose not to have biological



children. Legislation in the United States has made adoption more legally feasible than in previous generations. Despite this, adoption rates have gradually declined since the 1970s, even though there has been a shift in recent decades toward open adoptions, where communication and contact between the birth parents, adoptive parents, and the adoptee continues to varying degrees. Research supports the benefits of openness in adoption for all participants. Domestic adoptions either take place privately, with the assistance of adoption agencies or attorneys, or through the public child welfare system when authorities deem it unsafe or impossible for children to remain with their biological parents. Involuntary state-mandated adoptions tend to involve older children who know the identity of their birth parents because they have spent a significant amount of their lives with them. In situations that are safe for the child, such adoptions commonly include ongoing contact between birth and adoptive families. Therefore, openness of adoption arrangements has been predominantly studied among voluntary private adoptions. In the first half of the 20th century, private adoptions tended to be “closed,” meaning that adoptees and adoptive parents had no contact with the birth parents, and legal records concerning each party’s identity remained sealed. Gradually, practices shifted toward open and semi-open arrangements. As of 2013, the vast majority of domestic voluntary adoptions involve open arrangements. Around 55 percent of adoptions are fully open, and another 40 percent involve some level of ongoing contact among participants.

Structural and Communication Openness

Open adoptions are structured in one of two ways. The first concerns the arrangement between the adoptive and birth families.
This may include selection of the adoptive parents by the birth parents, meetings between the birth and adoptive parents prior to the birth, the adoptive family providing updates about the child, and ongoing contact between the birth and adoptive families. Fully open adoptions include most of these practices, particularly ongoing personal contact among members of the adoption triad. The second way that open adoption can be structured is in terms of a

communication continuum, which typically has three levels. The intrapersonal level is where each member of the adoption triad experiences a self-exploration of his or her thoughts and feelings about the adoption; the intrafamilial level is where adoption issues are explored among adoptive family members and among birth family members; and the interfamilial level is where adoptive family members and birth family members communicate with each other about the adoption. In the majority of adoptions, the adoptive and birth parents form an agreement to facilitate contact with the adopted child before he or she is born, and this prenatal agreement outlines the frequency of contact later. Structural openness in adoption and communication on the interfamilial level are also associated with more communication openness within these families. Therefore, ongoing contact between birth and adoptive families is linked with more communication about the adoption among both families.

Effect on Birth Parents

Research suggests that there are benefits to all family members in terms of postadoption contact and openness. During pregnancy, women who enter into open adoption agreements report more attachment to their unborn child and are more likely to seek prenatal care. They are also more likely to report feeling grief immediately following childbirth. Over time, birth mothers in open arrangements report better grief resolution and less depression, more confidence about their child’s well-being, and more satisfaction with their decision than do birth mothers without ongoing contact. Even among adoptions that are involuntary, postadoption contact with the adoptive family is associated with healthier adjustment for birth parents.

Effect on Adoptive Parents and Adoptees

Early research examining the outcomes for adoptive parents with open adoption suggested that there may be adverse consequences, such as distress and worry about how contact will affect their child.
More recent findings suggest that adoptive parents generally report satisfaction with open adoption arrangements, positive relationships with birth parents, and secure attachments to their adopted children. Adoptive parents in open arrangements tend to communicate more about adoption with

their children, and report that they worry less about birth parents reclaiming the children than parents with closed arrangements. For adopted children, research suggests that open communication about their adoption with their adoptive parents is associated with positive adjustment both in childhood and adulthood that translates into fewer identity problems, a closer relationship with their adoptive parents, and better overall adoptive family functioning. Furthermore, structural openness, including contact with birth parents, is associated with positive behavioral and mental health outcomes. Some research suggests that structural openness alone may not lead to positive outcomes for adoptive children without corresponding open communication both within the adoptive family and between adoptive and birth families. Thus, communication openness may be a required precursor to the benefits of structural open adoption.

Continuum of Adoption Openness

The ideal level of openness for each family varies because adoptions tend to fall on a continuum of openness, rather than being completely closed or fully open. The ideal level of openness may also change over time as children grow up and birth parents experience subsequent life events. For example, birth parents report grief around subsequent childbirths, marriage, and menopause. Furthermore, depending on the context surrounding the adoption, there is some evidence that structural and communication openness may be harmful to adoptive children, particularly if the reason for adoption was related to abuse.

Amy M. Claridge
Florida State University

See Also: Adoption, Closed; Adoption, International; Adoption, Mixed-Race; Adoption, Single People and; Adoption Laws.

Further Readings
Brodzinsky, David and Jesús Palacios, eds. Psychological Issues in Adoption: Research and Practice. Westport, CT: Praeger, 2005.
Grotevant, Harold and Ruth McRoy. Openness in Adoption: Exploring Family Connection. Thousand Oaks, CA: Sage, 1998.

Siegel, Deborah H. “Open Adoption: Adoptive Parents’ Reactions Two Decades Later.” Social Work, v.58 (2013).

Adoption, Second Parents and

Second parent adoption—commonly referred to as stepparent adoption—occurs when an adult accepts legal parental status of his or her spouse’s legal child. Stepparent adoption is the most common form of adoption in the United States and accounts for over 40 percent of all adoptions. State laws vary in the qualifications and procedure for stepparent adoptions, but they tend to be less stringent than nonrelative (i.e., agency or private) adoption. Typically, states only allow a child to have two legal parents. Because of this, the nonresidential birth parent no longer has parenting rights or obligations after the adoption. If the nonresidential parent is living, then he or she consents to stepparent adoption by relinquishing parental rights. In some states, the court may terminate parental rights due to a lack of contact or child support. When stepparent adoption is finalized, most children are issued a new birth certificate listing the stepparent’s name as a legal parent. The child is able to change his or her last name to the stepparent’s surname. Stepparent adoption offers a number of legal protections for stepparents and children, but it can also present some difficulties for the blended family.

Motivations for Stepparent Adoption

Stepparent adoption is motivated by several factors. Because adoptive stepparents hold the same rights as legal biological parents, one motivation may be for legal protections. Adoption provides the child access to the stepparent’s insurance and Social Security benefits. Adoption also grants the stepparent legal authority to access the child’s medical and school records and make decisions regarding the child’s medical treatment or school-related activities. Adoption secures the stepparent’s role as a legal guardian, should the remarriage end in divorce or death of the residential biological parent.



Some adoptions attempt to create a sense of belongingness. Legal name changes can foster a sense of family unity. Additionally, formalizing the stepparent’s role may encourage the child to refer to the stepparent as “mom” or “dad,” and help the child feel that the stepparent will be a stable, lasting presence in his or her life. Finally, stepparent adoptions may be motivated by the desire to clarify the stepparent’s role in the family. In some situations, the stepchild’s desire to be adopted by the stepparent may be the driving force. This likely occurs when the child feels a strong emotional connection to the stepparent, and desires a formalized commitment of that relationship.

Challenges Related to Stepparent Adoption

In light of the legal and relational benefits motivating stepparent adoption, there are also several possible challenges. Of primary concern for many stepfamilies is the involvement of the nonresidential biological parent. Nonresidential parents who are at least somewhat involved in the child’s life are not likely to easily surrender their parental rights. They may contest the adoption if their relationship with their former spouse or partner is marked by conflict. Thus, remarried parents pursuing adoption may face anger and resistance from the nonresidential parent, in some cases resulting in hostile legal proceedings. Furthermore, due to legal costs and the loss of child support from the nonresidential parent, the adoption process can be financially draining for stepfamilies. Once the adoption takes place, stepfamilies must negotiate other issues. For instance, many adopted stepchildren experience loyalty conflicts between their nonresidential biological parent and their adoptive stepparent. Children may feel as if the adoption is a betrayal of their nonresidential biological parent, and they may resent what they perceive as their stepparent’s attempt to replace the biological parent.
In addition, stepchildren who are adopted may not fully understand the meaning of the adoption, or how it will impact the stepfamily dynamics. For example, a stepmother may view the adoption as a way of exerting her authority as a mother figure, but the stepchild might not anticipate any changes in their relationship. If remarried parents do not clearly communicate their expectations, children will likely have difficulty adjusting to the adoption.


Stepchild adoption also requires reorganization and adjustment within the entire stepfamily system. Family members must redefine the boundaries of the stepfamily, which may include the difficult process of cutting off all ties with the nonresidential biological parent. Stepparents and adopted stepchildren must also manage identity changes as their relationships and roles in the family take on new meanings. For example, while name changes can promote a sense of family unity, some stepchildren may have trouble adjusting to losing their previous name or coping with their new identity. Some adoptive stepparents, on the other hand, may struggle with the idea that they will be legally tied to the stepchild if the marriage ends in divorce or death. Thus, stepchild adoption can complicate the divorce process and lead to coparenting challenges if the marriage ends. In sum, there are benefits motivating families to pursue stepparent adoption, as well as potential challenges. Biological parents, stepparents, and children must coordinate with one another as they negotiate how they want to function as a blended family. Successful stepparent adoption requires careful communication with and about the nonresidential biological parent. Gauging the child’s interest in, and understanding of, the adoption allows parents to adjust their communication to their child’s needs, thus maximizing the benefits and minimizing the potential challenges of this family type.

Colleen Warner Colaner
Danielle Poynter
Leslie Nelson
University of Missouri

See Also: Adoption, Open; Adoption Laws; Divorce and Separation; Other Mothers; Social Fatherhood; Stepchildren; Stepfamilies; Stepparenting.

Further Readings
Farr, R. H., S. L. Forssell, and C. J. Patterson. “Parenting and Child Development in Adoptive Families: Does Parental Sexual Orientation Matter?” Applied Developmental Science, v.10 (2010).
Ganong, L., M. Coleman, M. Fine, and A. K. McDaniel.
“Issues Considered in Contemplating Stepchild Adoption.” Family Relations, v.47 (1998).
Lamb, K. A. “‘I Want to Be Just Like Their Real Dad’: Factors Associated With Stepfather Adoption.” Journal of Family Issues, v.28 (2007).


Adoption, Single-Parents and

Single-parent adoption typically refers to an unmarried individual who voluntarily takes legal and physical custody of a minor child for whom he or she did not previously have rights or responsibilities. The single-parent adoption rate in the United States has increased since 1990, whereas the overall adoption rate has remained steady. The increased rate of single-parent adoption can be attributed to a few factors, including the increasing prevalence and social acceptability of single-parent households, which has also prompted some states to provide single individuals easier access to adoption. Moreover, although many state statutes limit adoption to singles or married couples, individuals in unmarried heterosexual or homosexual relationships may pursue single-parent adoption. Trends toward delayed marriage and childbirth may also contribute to the increase in single-parent adoptions by offering family options to those who did not form families during the traditional reproductive window. Single-parent adoptions often occur informally. For example, relational adoptions in which an unmarried close family member or friend takes guardianship of a child without going through legal adoption proceedings are increasingly recognized as a common form of single-parent adoption. The nuclear family, composed of a father, a mother, and their shared biological or adopted children, is no longer the norm among American families; despite this, nontraditional family structures are sometimes negatively stereotyped. For example, adoptive single-parent households are often stigmatized for having fewer financial and emotional resources available to children, and for lacking either a male or female role model. Although these stereotypes are often without merit or are not detrimental to children’s well-being and development, they nonetheless negatively affect access to adoption for single parents.
Statistics

No standardized or systematic data is collected on adoptions; estimates of the prevalence and characteristics of adoption are calculated by aggregating data available from child welfare services, private and public adoption agencies, and immigration services.

Thus, the likelihood of incomplete or duplicate data makes it difficult to pinpoint overall rates and characteristics associated with adoption, including the overall number of single-parent adoptions. However, the United Nations estimates that approximately 260,000 children are adopted worldwide each year, and that nearly half of adoptive families reside in the United States. Adoption by single parents as of 2013 accounts for approximately 20 percent of all adoptions in the United States; this is a dramatic increase from 1990, when only 2 percent of all adoptions were to single parents.

Demographics

Despite the sometimes negative stereotypes of single-parent families, adoptive single-parent households have many positive characteristics and similarities to two-parent adoptive families. For example, like other adoptive parents, the majority of single adoptive parents are older (with a mean age of 42.8 years) compared to first-time biological parents (whose mean age is 25 years). They are primarily either white (49 percent) or African American (33 percent). In contrast to other adoptive parents, single adoptive parents are more likely to be female and have at least some college education, but they tend to have a lower than average socioeconomic status. The discrepancy between education and household income is because single-parent households have only one income, compared to the dual income of the two-parent households of most other adoptive families. The demographics of children adopted by single parents also vary when compared to the general adoption population. Children adopted by single parents are more likely to be born in the United States, are usually older (with a mean age of 9.9 years), and are more often considered “hard to place” due to mental, physical, emotional, and behavioral issues.
The increased likelihood of single parents adopting hard-to-place children is most likely a result of the negative stereotypes against single-parent households that limit their adoption options.

Laws Regarding Adoption by Single Parents

Within the United States, adoption is regulated by state laws, and although the language and interpretation within each state varies a great deal, all states allow single individuals to adopt children. In addition, if enacted into law, the Every Child

Deserves a Family Act of 2013 would prohibit discrimination against prospective adoptive parents based on sexual orientation, gender identification, or marital status; any agency not in compliance would face withholding of federal funding. As of the end of 2013, the act had been introduced in both the House of Representatives and the Senate, but had not been passed by either branch of Congress. If passed, the legislation would increase the number of unmarried individuals eligible to adopt, and may therefore further increase the prevalence of single-parent adoption. Many Americans choose to adopt children from abroad. As in the United States, regulations involving international single-parent adoptions greatly vary, depending on the country of origin of the child adopted. For example, the People’s Republic of China and the Dominican Republic prohibit singles from adopting; Ethiopia, Kenya, Nepal, and Nigeria only allow singles to adopt under specific conditions; and many other countries are unclear or do not have established regulations specific to single-parent adoption.

Brigitte Dooley
Jason D. Hans
University of Kentucky

See Also: Adoption, Lesbian, Gay, Bisexual, and Transgender People and; Adoption, International; Adoption Laws; Foster Families; Single-Parent Families.

Further Readings
Davis, Mary Ann. Children for Families or Families for Children: The Demography of Adoption Behavior in the U.S. New York: Springer, 2011.
Farr, R. H., S. L. Forssell, and C. J. Patterson. “Parenting and Child Development in Adoptive Families: Does Parental Sexual Orientation Matter?” Applied Developmental Science, v.10 (2010).
McRoy, Ruth and Amy Griffin. “Transracial Adoption Policies and Practices: The U.S. Experience.” Adoption and Fostering, v.36 (2012).
Nickman, Steven, et al. “Children in Adoptive Families: Overview and Update.” Journal of the American Academy of Child & Adolescent Psychiatry, v.44 (2005).
U.S. Department of State, Bureau of Consular Affairs.
“Intercountry Adoption.” http://adoption.state.gov/ country_information.php (Accessed June 2013).


Adoption Laws

Adoption has long been a part of Western society. In ancient Rome, it was well accepted that the ruling emperor could adopt a worthy male adult to become his heir. While there are stories about adoption throughout the Middle Ages and up to the modern era, it was not until the 19th century that the United States became the first modern country to formally legalize the practice. The history of the laws governing adoption can be read, in part, as a reflection of the growing awareness of the needs of families and children and how joining them together can be beneficial to the family unit as well as society as a whole. The laws concerning domestic adoption—that is, adoptions completed by residents of the United States who adopt children who are U.S. citizens—are also often applicable to internationally adopted children. The protections for a child’s welfare extend to all children adopted within the United States, whether the child is a native citizen or a citizen by virtue of the adoption process. In 1851, the United States became the first modern country with a local law permitting adoptions when Massachusetts approved the Adoption of Children Act. (As a point of reference, Great Britain did not legally permit adoptions until 1926.) In 1917, the Children’s Code of Minnesota made that state the first to seal off adoption records from the general public and require an investigation into the suitability of the adoptive parents prior to placing the child in a home. This law served as a precursor to the subsequent laws requiring a thorough home study prior to the approval of an adoption. Fifty years later in 1968, the growing awareness of the needs of children in the foster system led the state of New York to be the first to approve subsidies for children adopted through the foster care system. Ten years later, the federal government took action to try to right the wrongs that it had perpetrated for over 100 years against Native American children.
Such communities were often torn apart when the federal government removed children from their Native American homes and placed them in families with little or no concern for their ethnic heritage. The Indian Child Welfare Act was passed in 1978 and ultimately provided the various Indian tribes, as opposed to the states, final oversight for the placement of tribal children.


With the 1980 passage of the Adoption Assistance and Child Welfare Act, the U.S. government provided funding to states for adoption subsidy programs for children with special needs. The law also privileged the biological family in cases where children were removed from the home and placed within the foster system. This legislative act built upon the Social Security Act of 1935. The Multi-Ethnic Placement Act was passed by Congress in 1994. This law prohibits agencies from refusing or delaying foster or adoptive placements because of a child’s or foster/adoptive parent’s race, color, or national origin. It also prohibits agencies from considering race, color, or national origin as the basis for denying approval or otherwise discriminating against a potential foster or adoptive parent, and requires agencies to develop plans for the recruitment of foster and adoptive families that reflect the ethnic and racial backgrounds of the children in their care. After several years of practice, the law was amended in 1996 through the Interethnic Placement Provisions. The Adoption and Safe Families Act of 1997 was in many ways the first substantial update in thinking about adoption since the 1980 Adoption Assistance and Child Welfare Act. This law attempted to place the welfare and safety of the child at the center of any decision about terminating parental rights or approving a placement. It also represented a policy shift away from family reunification and toward adoption. Two laws in 2000 helped bring adoption laws to where they stand in the present day. The Intercountry Adoption Act provided for the implementation of the Hague Convention on Protection of Children and Cooperation in Respect of Intercountry Adoption. The Hague Treaty, as it is commonly called, established the requirements for all adoptions that occur between families of different countries.
Although this legislative act approved the implementation of the international agreement, the United States did not fully complete the process until 2008. The Child Citizenship Act of 2000 allowed certain foreign-born biological and adopted children of American citizens to automatically acquire American citizenship if entering the country with the appropriate visa. These children did not acquire American citizenship at birth, but they are granted citizenship when they enter the United States as

lawful permanent residents. Each of these milestones represents a step in the direction of better protecting the welfare of children and attempting to make adoption as ethical as possible.

Thematic Overview of Adoption Laws

Since the first adoption law was passed in 1851, one of the major shifts that has occurred is that decisions are no longer made solely in the interest of the parents; greater consideration is given to the welfare of the child involved. This theme is evident in the requirements for researching the family planning to adopt (1917), the growing concern about allowing children to maintain a connection with their biological heritage (1978), and the increasing concern given to the child’s safety and welfare as a key part of any final decision about adoption (1997). In many ways, this shift mirrors many of the larger societal concerns about child safety and welfare and the recognition that children are unique human beings and should not simply be viewed as smaller versions of adults. A second societal shift that is mirrored in the adoption laws is the move from individual states determining the laws for adoption to a greater level of supervision at the national level. The laws of Massachusetts (1851), Minnesota (1917), and New York (1968) created a precedent that was followed by many other states. Many of the federal laws reflected a growing consensus within individual states, whereas others were requirements of a growing country whose legal system was becoming increasingly complex. The third theme of adoption laws reflects the increased role of the United States on the international political stage. With the two legal actions of 2000, there has been a shift from domestic concerns to international concerns related to adoption. The realities of globalization in the late 20th century required that the United States internationalize many adoption laws and processes.
Concerns that were not even considered in 1851 became significant enough to warrant federal legislative action. The fourth theme concerns open and closed adoptions. Access to the birth records of adopted children has been a legislative issue since the Minnesota law of 1917. The contemporary conversation about this issue is framed in terms of having an “open” or “closed” adoption, terms that




represent two extreme sides of a continuum. At one extreme, closed adoptions include cases where adoption records are entirely sealed; at the other, open adoptions include cases where the birth family participates fully in the life of the adopted child. Practically speaking, most adoptions fall somewhere in the middle, based on the interests of the birth and adoptive parents, as well as the guidance they receive from the agencies involved in the process. The final theme is the implicit and explicit discrimination in the various laws at different points in time. Many of the adoption laws discussed above were enacted to correct previously biased laws, rulings, or judgments. For example, the 1978 act was a response to the fact that as many as 35 percent of children born to Native American families were removed and placed with families with little or no cultural connection to the children. The 1994 act was a response to the much higher rate of African American children in the foster care system compared to Caucasian children. The 1996 provisions that amended this act were an additional effort to increase the number of minority children adopted by nonminority families in the United States. Despite these advances, as of 2013 the United States still lacked a federal law providing equal opportunities for adoption for LGBT individuals, who still face restrictions that other individuals do not. This thematic analysis is by no means exhaustive. For example, there are nuances for those who choose to engage in a private adoption as opposed to an adoption coordinated by an agency. For those interested in an international adoption, an agency with federal approval to coordinate the international components must be contacted. Agencies that no longer hold an approved license cannot provide the legal documents (e.g., home studies) needed to facilitate the adoption.
The history of adoption within the United States follows a slightly different path from the history of the legislation governing adoption proceedings, but understanding the various trajectories of adoption laws clarifies the multifaceted nature of adoption. Considering that the 1851 law predates the Civil War, clearly much has changed since that time. Although many changes have taken place in the last century and a half, one of


the commonalities among all of these laws and acts is their primary concern that a family unit be created that can contribute to society at large in a positive way. The relatively recent emergence of international adoption further highlights the interdependence of countries around the world.

Brent C. Sleasman
Gannon University

See Also: Adoption, Closed; Adoption, International; Adoption, Lesbian, Gay, Bisexual, and Transgender People and; Adoption, Mixed-Race; Adoption, Open; Orphan Trains; Primary Documents 1994.

Further Readings

Adoption History Project. “Timeline of Adoption History.” http://pages.uoregon.edu/adoption/index.html (Accessed August 2013).

Conn, Peter. Adoption: A Brief Social and Cultural History. New York: Palgrave Pivot, 2013.

U.S. Department of Health and Human Services. “Intercountry Adoption Act of 2000.” https://www.childwelfare.gov/systemwide/laws_policies/federal/index.cfm?event=federalLegislation.viewLegis&id=51 (Accessed August 2013).

U.S. Department of State, Bureau of Consular Affairs. “Intercountry Adoption.” http://adoption.state.gov/hague_convention.php (Accessed August 2013).

Advertising and Commercials, Families in

Advertising is a form of expression with limited First Amendment protections. Bigelow v. Virginia (1975) established advertising’s protected status. In it, the Supreme Court held that advertising has value in U.S. society but also that, to a greater extent than other forms of protected speech, it is subject to “reasonable regulation.” False ads, misleading ads, and ads for unlawful goods and services are not protected under the First Amendment; all other commercial speech may be regulated if the government shows a substantial state interest that is narrowly advanced by such regulation.


Laws and Regulations Regarding Children’s Advertising

In many countries, children are one of the most protected groups when it comes to advertising. In the United States, standards for advertising to children typically come from a balance of state or federal legislation and industry self-regulation. According to the Federal Trade Commission (FTC), advertising must (1) be truthful and nondeceptive, (2) have evidence to back up all of its claims, and (3) not be unfair. The FTC, which regulates advertising in print, broadcast, and online media, as well as through direct mail, recognizes that children (especially those under the age of 13) have an increased susceptibility to unfair and deceptive advertising. Its Division of Advertising Practices “monitors advertising and marketing of alcohol, tobacco, violent entertainment media, and food to children” and maintains enforcement priorities specific to that task. It issues and enforces rules to this end, with fines the most common recourse for violations. Other enforcement measures range from setting standards in industry guides to court orders terminating an ad or campaign and requiring substantiation or corrective advertising for claims made in an ad. Because alcohol and tobacco cannot be purchased by children under the age of 18, these products cannot legally be advertised to that audience, but products such as unhealthy food and violent or sexual materials are more problematic. Federal agencies such as the Small Business Administration hold that such questions of content are best left to parents’ discretion. For issues of practice, however, the FTC assesses unfair and deceptive advertising to children by whether the message is likely to alter a child’s (rather than an adult’s) judgment. For example, the 900-number rule prohibits sellers of most pay-per-call services from targeting these services to children.
In one case, the FTC brought legal action against television advertisers who encouraged children to call figures such as Santa Claus using a 900 number charged to their parents’ phone bill, because parents could not decline or regulate the charges.

COPPA and Collecting Children’s Data

Although these data-gathering practices are not strictly limited to advertising, the line between online commercial and other online media content

is not always clear. The Children’s Online Privacy Protection Act of 1998 (COPPA) limits the information that children’s Web sites can gather and what the sites can do with this information. Any Web site targeted to children under age 13, or designed for a general audience but collecting information from someone under age 13, must comply with COPPA requirements. Among these requirements, such sites must make clear and comprehensive privacy policies available, and parents must be provided with a variety of notifications and controls regarding access to and use of their children’s personal information. “Personal information” as defined in COPPA originally included information such as names, home addresses, contact information, and social security numbers. The law was amended in 2013 to include information common to social media Web sites, such as geolocation information; screen or user names; photo, video, and audio files with a child’s image or voice; and “persistent identifiers” that can be used to recognize a user “over time and across different websites or online services.”

Broadcasting: Special Situations

Both television and radio broadcast communications are special cases under the First Amendment. Because they use publicly owned airwaves, broadcasters must be licensed by the Federal Communications Commission, or FCC (by comparison, advertisers are overseen by the FTC yet require no license, and if they follow appropriate practices, they may never come into contact with the agency). With regard to children, the Supreme Court in FCC v. Pacifica (1978) determined broadcasts to be “uniquely pervasive [and] accessible.” Cable television and satellite radio, which do not use the public airwaves, are exempt from most FCC regulations. The FCC has several special rules regulating broadcast advertising to children.
The total length of commercials during children’s programming is more limited than in other media, with 10.5 minutes per hour on weekends and 12 minutes on weekdays. This limitation also holds for cable and satellite providers, and all digital video programming. Further restrictions are based on a separation policy, “to protect young children who have difficulty distinguishing between commercial and program material and are therefore more vulnerable to commercial messages.” “Host-selling” commercials,




which are advertisements featuring characters from the show in progress, are also banned, as are program-length commercials targeting children.

Self-Regulation of Children’s Advertising

Advertisers typically prefer to regulate themselves, and organizations such as the Better Business Bureau (BBB) provide standards and resources for doing so. The BBB’s Children’s Advertising Review Unit publishes self-regulatory guidelines and monitors child-targeted (under age 12) advertising content in a variety of media. When noncompliant ads are identified, the unit seeks voluntary cooperation with its standards.

Children’s Advertising in Other Nations

Restrictions on advertising during children’s programming take significantly different forms depending on the country, and several Western nations are stricter than the United States. In the Canadian Code of Advertising Standards, for example, the commercial message time allotment is only four minutes per half hour of children’s programming, and host-selling segments are not permitted in any context. The European Union has introduced framework legislation that restricts children’s advertising in member nations, banning product placement and not allowing commercial interruptions in programs shorter than 30 minutes. Advertising to children is completely banned in Sweden, Norway, and the Canadian province of Quebec.

Examples of the Content of Advertisements

One of the earliest studies of gender stereotypes in media found gender-based discrepancies in print advertisements. Women were shown as shy, passive, and gentle, whereas men were dominant and powerful. Since this early study, a number of other researchers have focused on this topic and have found similar results regardless of the medium chosen (e.g., print or television).
Findings that women and men are shown differently have also emerged in other studies; however, two somewhat unusual studies—one on postage stamps and another on clip art—found that women were more likely than men to be portrayed as nurturing. Further, the men who were portrayed were shown in nonnurturing roles, and more women than men were shown in “cross-over” activities—traditionally feminine (or masculine) activities that are performed by men (or


women). That women are portrayed as nurturing, whereas men are not, is a fairly consistent finding across numerous studies, and it is particularly evident in advertisements. Other researchers, focusing on media portrayals of fathers in contrast to men without children, have found that such portrayals underwent a historical change. Cartoons and popular magazines of the mid-1900s largely portrayed fathers as incompetent, but by the 1970s, the same media sources showed fathers as nurturing and invested in their families. Nurturing fathers were also more prevalent in comic strips during this time, as cartoonists similarly drew mothers and fathers as nurturing and supportive parents. Additionally, the average number of fathers portrayed in television commercials between 1950 and 1980 more than doubled, which may have helped the general public connect fatherhood with active family participation. One of the most influential forms of media may be television; over 98 percent of households have at least one television set, which is on for an average of over four hours each day. Television viewers also watch numerous commercials during their weekly 28 hours of television. About 25 percent of each hour of network television consists of commercials, which adds up to over 500 commercials each week. The number of commercials viewers see, as well as the amount of television time devoted to commercials, has only increased over time. For example, there are currently about 8.5 minutes of commercials per 30-minute show (10 minutes for a cable TV show), compared with 5.5 minutes in the 1960s; in general, television viewers now watch twice as many minutes of commercials as they did in the 1960s. Viewers are also watching shorter commercials: during the 1950s and 1960s, commercials lasted about one minute, whereas now most are 30 seconds, meaning that viewers see roughly twice as many individual ads as before.
Commercials are repeatedly shown throughout the day, so that viewers will be more willing to buy the products that they see advertised, but there are subtle, and in some cases not so subtle, messages about gender, family, and ways of behaving in these commercials. Advertisers seek to create an idealized picture or story about the characters in the commercials in order to sell products. Commercials can invoke


a sense of the good old days, the happy family, and fun-loving youth, so that viewers will associate the product with the warm and fuzzy feelings elicited by the commercial and be more likely to buy it. Commercials also present models of traditional, popular, and desired ways of acting, which is especially true of the portrayals of men and women. Images of stay-at-home moms and working dads are among the common, yet traditionally gendered, images that people view many times each day. These images portray families in idealistic or even unrealistic, but predictable and well-established, ways; because these images are relatively consistent, viewers can focus on the products advertised instead of being challenged with different or radical ways of thinking about gender and family roles.

Advertising’s Impact on Children

The impact of commercials on children is particularly worrisome to researchers and others. The average child sees over 40,000 commercials each year, which means that children receive a variety of messages over and above what they view on their favorite television shows. Of particular concern is that young children are not able to tell the difference between commercials and TV shows, so advertisers may show certain behaviors that are not appropriate for children (e.g., violence or sexual content). Young children also have difficulty distinguishing between reality and fantasy. Because of this, and because advertisers may seek to create fanciful worlds possible only with the toys they are selling, children are especially susceptible to advertisers’ messages. Mental health professionals, scholars, and nonacademics worry about the impact media have on children. Advertising may be less of a concern than violent video games and sexualized television shows, but researchers have found that commercials can negatively affect children and adults.
Recent research has found that commercials airing on children’s channels did not differ from those airing on other stations in the amount of negative content or disturbing behaviors (e.g., violence, destruction of property, natural disasters, or body trauma). The researchers also found that commercials airing on children’s channels were more likely than those on other stations to show negative modeling, such as smoking, minors drinking alcohol, or

swearing. MTV was found to show the most types of negative content. Other researchers are concerned with the way food is advertised in commercials that air on children’s and other channels, especially in light of increasing rates of childhood obesity. For example, a popular-press article reported that over 40 percent of food commercials advertised snacks and fast food. The report also found that there were no commercials showing fresh fruit, vegetables, poultry, or seafood. By comparison, children see almost no commercials about exercise and fitness (one for every 26 food ads for young children; one for every 130 food ads for teens). The age group that sees the greatest number of food commercials is 8 to 12 years old, a group moving into adolescence and making independent choices; this seems like a vulnerable and lucrative market. In 2005, the Institute of Medicine found that the marketing of unhealthy food choices was a contributor to unhealthy environments for children. Within the last year, McDonald’s, Coca-Cola, and Pepsi agreed to change the way they advertise their food products to children, pledging that at least half of their advertisements would show healthier foods and lifestyles.

Jessica Troilo
Bob Britten
West Virginia University

See Also: 24-hour News Reporting and Effect on Families/Children; Children’s Online Privacy Protection Act; Children’s Television Act; Primary Documents 1990.

Further Readings

Beales, J. Howard. “Advertising to Kids and the FTC: A Regulatory Retrospective That Advises the Present” (2004). http://www.ftc.gov/public-statements/2004/03/advertising-kids-and-ftc-regulatory-retrospective-advises-present (Accessed February 2014).

Federal Communications Commission Guide: Children’s Educational Television. https://www.fcc.gov/guides/childrens-educational-television (Accessed February 2014).

Federal Trade Commission.
“Complying with COPPA: Frequently Asked Questions.” http://www.business.ftc.gov/documents/0493-Complying-with-COPPA-Frequently-Asked-Questions (Accessed February 2014).

Hoy, M. G., C. E. Young, and J. C. Mowen. “Animated Host-Selling Advertisements: Their Impact on Young Children’s Recognition, Attitudes, and Behavior.” Journal of Public Policy & Marketing, v.5 (1986).

Advice Columnists

Newspaper advice columns such as “Ann Landers” or “Dear Abby” have served as one of the few consistent, mainstream, widely available public forums for the discussion of etiquette, politics, and social issues, especially since their modern incarnation in the 1950s. Advice columns were staples of the lifestyle sections of newspapers and were meant both to entertain and to inform. Advice columnists were usually female and acted in the role of trusted confidant. Popular advice columnist Ann Landers (the pen name of Eppie Lederer) was known as “America’s Mom.” Letters to advice columnists ranged in topic and were at times funny, tragic, or strange. Letter writers often claimed to be writing on behalf of a friend or family member. Questions were often asked anonymously, with the signature identifying the problem. For example, a person writing about the confusing behavior of a husband might sign her letter “Confused in New York.” Research has shown that these letters came from actual people, and the advice columnists received a large volume of them. For example, in an average week in 1978 at the Minneapolis Tribune, Ann Landers received 763 letters, while local advice columnist Mary Hart racked up 1,056. Many of the letters were about parenting or marriage, and a few were from teenagers asking for help with their parents or their dating experiences. While advice columns were popular throughout the second half of the 20th century, little scholarship has been devoted to them. Communication researchers have typically focused on front-page news; the content of the lifestyle or women’s sections of newspapers, or of women’s magazines, where advice columns usually ran, has rarely been studied.

The Women’s Pages

The topics of women’s magazines and newspaper sections were defined as the four Fs: family, fashion, food, and furnishings. Content ranged from


traditional to progressive, from recipes and fashion photographs to articles about women’s club activities. They contained the occasional article about progressive topics such as equal pay for women and violence against children. A common practice of women’s page writers was to use pen names to preserve a column’s continuity. After all, it was expected that the female columnist would leave the position once she married, and the next woman on board could then write under the same pen name and the readers would be none the wiser. Even beyond the women’s pages, female news reporters began using pen names in the late 1800s because it was considered unsavory and disreputable for a woman to work as a newspaper reporter. Some of the most famous female journalists of the late 19th century included “Dorothy Dix,” who was really Elizabeth Meriwether Gilmer, and “Nellie Bly,” the writer Elizabeth Cochrane, who was hired by Joseph Pulitzer and famously traveled around the world in 72 days, beating the fictional 80-day benchmark. Newspaper advice columns in the United States started during the days of yellow journalism, which peaked about 1898, as a way of showing that newspapers had a heart. Dorothy Dix, whose work ran in newspapers across the country beginning in 1894, was the first renowned advice columnist. A scholar of Dix noted that her advice reflected the shifts in morality and values occurring in society at the time. Popular advice columnists understood the pulse of their readership and guided readers through social change.

Column With a Hart

During the 1950s and 1960s, the Miami Herald’s women’s section became influential in the journalism field. Eleanor Hazlett Ratelle, who wrote as “Eleanor Hart,” was the author of “Column With a Hart.” Ratelle often mixed advice with stories of her children, a common advice column tactic at the time. Before responding to readers’ letters, she would often consult with experts in law, medicine, and family relations. “I’m a reporter and a writer,” she told readers.
“I’m not an expert or an authority and I don’t pretend to be.” She also included her opinions, at times boldfacing her answers to emphasize a point. Common debates in the 1950s and 1960s were whether married women should work and whether racially segregated neighborhoods should integrate.


Esther Lederer won a contest to take over the “Ask Ann Landers” feature after the original writer, Ruth Crowley, died in 1955. Lederer wrote the column until her death in 2002.

In the 1950s, questions about acceptable behavior for married women were common themes. For example, Hart published several letters under the headline “Do You Need Two Jobs to Live in Miami?” in reaction to a letter from “Sun Kissed,” who wrote that she and her husband both had to work to afford to live in Miami. This was the response from “Thankful Miamian”: “Married women should stay home, especially those with children. This woman should stop working outside the home. A home is work, but that’s where folks are content.” “Quite Optimistic” responded in the same column: “I feel sorry for Sun Kissed. But it’s people like her and her husband who make it hard for men to find jobs.” Hart did not offer her opinion, although she was a working mother.

Role of Advice Columns

Advice columns can be traced back to 1660s England, when newspapers first appeared. Advice columnists in England were called “agony aunts” because they dispensed comforting advice, much

as a loving aunt would. From the beginning, advice columns were based on two beliefs: that nearly everyone occasionally seeks advice from others, and that almost everyone is curious about other people’s problems. Journalists justified the inclusion of advice columns as a form of gossip that nonetheless has some journalistic value. Late-20th-century advice columnists raised the standards of earlier columnists by combining information they received from medical, mental health, and other professionals with their own common-sense wisdom. For instance, advice columns by Abigail Van Buren (“Dear Abby”), her twin sister Ann Landers, and Dr. Joyce Brothers (in Good Housekeeping magazine) gave advice similar in quality to that of mental health professionals. Much of the advice given in these columns pertained to family life, particularly romance and marriage. Advice to the lovelorn placed columnists in the role of relationship referees. Often, advice columnists took sides in a conflict between husbands and wives or between other intimate partners. In addition, the columnists were seen as a safe but anonymous audience for letter writers. Advice columnists often educated readers about sexual topics ranging from health matters to sexual techniques, subjects rarely covered in other parts of the newspaper. Advice columnists tended to explain the changing role of families in society, especially in the era that witnessed the normalization of divorce and the rise of stepfamilies. Some of this was likely due to the lives of the columnists themselves. Ann Landers, for example, divorced her husband after learning that he was having an affair. Later, the columnists normalized adoption, interracial dating, and eventually homosexuality. Carolyn Hax’s Washington Post column, originally called “Tell Me About It,” remains popular in the 21st century. “Dear Maggie” offers sex advice to a predominantly Christian readership in Christianity Magazine.
Male advice columnists are also becoming more common. One example is Dan Savage, whose advice column “Savage Love” presents an intimate look at sexual behavior in all its variations. Online publications also have advice columns; two popular examples are “Dear Prudence” at Slate and “Since You Asked” at Salon. Anyone can dispense advice on the Internet, so the number of “agony




aunts” (and uncles, if the advice-givers are male) is likely to proliferate.

Kimberly Wilmot Voss
University of Central Florida

See Also: Child-Rearing Experts; Child-Rearing Practices; Mothers in the Workforce.

Further Readings

Gudelunas, David. Confidential to America. New Brunswick, NJ: Transaction, 2008.

Hendley, W. Clark. “Dear Abby, Miss Lonelyheart and the Eighteenth Century: The Origins of the Newspaper Advice Columns.” Journal of Popular Culture, v.11 (1977).

Kogan, Rick. America’s Mom: The Life, Lessons, and Legacy of Ann Landers. New York: HarperCollins, 2003.

Vella, Christina. “Dorothy Dix: The World Brought Her Its Secret.” In Louisiana Women: Their Lives and Times, Janet Allured and Judith F. Gentry, eds. Athens: University of Georgia Press, 2009.

African American Families

Identifying a definition of family that everyone can agree upon can be challenging—so challenging that many family studies textbooks point out that the term is defined in as many ways as there are cultural contexts. Ask people to list whom they consider members of their family, and the answers will range from mother, father, sibling, aunt, uncle, and cousin to family friend, stepparent, cohabiting partner, nonbiological parental figure, and father’s partner. Deciding who should be categorized as African American or black can be just as challenging. The terms black and African American are often used to lump together an entire group that is actually quite heterogeneous. A great deal of heterogeneity exists in the African American population, yet distinctions within this population are typically overlooked. Most people mistakenly think that all African Americans are descendants of slaves who were brought to North America from Africa; however, their origins are


broader than that, rooted in multiple cultures. For example, African Americans include the descendants of Africans who were victims of the African slave trade, as well as immigrants who voluntarily emigrated from South America, the Caribbean islands, or the numerous nations of Africa.

Estimates of African American Families in the United States

In 2011, approximately 13.6 percent of the U.S. population consisted of African Americans or blacks, including individuals self-identifying as more than one race. About 12.8 percent of the population (over 39 million people) identified as only African American. According to the U.S. Census Bureau’s projections, African Americans will constitute about 18.4 percent of the U.S. population (77.4 million people) by 2060. In 2012, according to the Current Population Survey, approximately 62 percent of black households were family households; in other words, there were 9.7 million black family households. About 45.2 percent of those family households consisted of married couples. This is in stark contrast to earlier decades. Between 1940 and 1960, most African American families were marriage-based; during that period, the number of married African Americans peaked. In fact, according to the Bureau of the Census Statistical Brief issued in March 1993, about 78 percent of all black families in 1950 were married-couple families. Marriage advocates allude to the benefits of the nuclear family ideal (i.e., mother, father, and children). Consequently, most of the debates about African American families have focused on the roles of African American men as partners and fathers. Regardless of racial or ethnic background, ideas regarding what constitutes a family—or conversely, what does not constitute a family—can be quite divergent. Attempts to understand African American family functions and structure in general, as well as African American males’ roles specifically, have been largely motivated by political interests.
Politics

What became known as the Moynihan Report, which was leaked to the press in 1965, generated more public debate about African American families than any other document in modern history.


Written by Daniel Patrick Moynihan, then an assistant secretary of labor and later a U.S. senator from New York, the report created a wedge between conservatives and liberals, as well as between blacks and whites. Moynihan argued that the black family, unlike the white family, was unstable; that black families were weak and facing complete breakdown; and that in black families, the roles of husband and wife were often reversed. He also asserted that this black matriarchy was responsible for social disorganization and that diminished black manhood was associated with both the absence of strong male role models and the failure of black men to serve as breadwinners. While Moynihan posited that cultural problems in black society led to a crisis among black families, others argued that the era of slavery contributed to the crisis. Still others argued that although African American family structure varied from what was described as the ideal, that variation did not mean that family was devalued or unimportant to African Americans. Even the ideal family structure—two biological parents and children—is not the reality for many non–African American families. Sociohistorical forces, many of which were rooted in the institution of slavery, have shaped the development of African American family life.

Historical Accounts

In some of his early work, Andrew Billingsley, a prominent scholar of African American family life, described slavery in the United States as particularly distinct from slavery in other countries and other historical epochs. For instance, elsewhere there were safeguards for a slave’s personhood, family, and worth as a human being. In some empires in other parts of the world, special magistrates were responsible for reprimanding individuals who abused their slaves. This was typically not the case in the United States, where slaves were beaten, killed, and intentionally separated from their families at the discretion of their owners.
The full extent to which family members were separated from one another may never be completely understood because of the scarcity of slave records. Some accounts suggest that slave owners may have encouraged marriages between slaves on a plantation because it led to the birth of children, thereby increasing the number of slaves while eliminating the cost of buying additional slaves.

Some accounts suggest that for the purpose of maintaining order and obedience (and thus maintaining productivity), some owners kept slave families together. Rebellion was less likely to be precipitated by a married slave because of possible consequences to the spouse and children; an unmarried slave, conversely, had no family ties on a plantation. This does not mean that slave families should be characterized as stable. Even in instances where slave families were kept together, economic events or circumstances such as bankruptcy, crop failure, or the slave owner's death often led to the sale of spouses and children. Nevertheless, the black family was a functioning institution despite the confines of slavery because it played a critical role as a survival and coping mechanism. Through the family, slaves found some refuge, which came in the form of companionship, empathy, and care.

According to Billingsley, the emancipation of slaves led to the emergence of three typical patterns of family life. In the first pattern, most blacks continued to live on the plantations where they had toiled as slaves; they became tenant farmers of their former owners and earned little to nothing for their work. In the second pattern, some family members who had been permitted to reside together on the same plantation and farm common pieces of land were able to sustain their families as both social and economic units. In the third pattern, men became homeless and traveled the countryside seeking employment opportunities. Some brought their families with them; some did not. Many moved from rural to urban centers, but their economic situation was tenuous. While women may have been able to secure domestic positions, men were often unable to find work, which meant that they had to depend on the economic support of working women. Such circumstances fostered feelings of discouragement.
Urban life offered stability for some families, especially if the men in those families were able to secure steady industrial employment. This facilitated the gradual launch of a black middle class in the mid-20th century, when black family stability increased.

History: A Revisionist Approach

A revisionist approach to history indicates that although the slave trade fragmented families, the institution of the family was not destroyed. This was evident among free blacks during the slave era. In 1798, about 45 percent of free blacks resided in family-centered homes. Although complete empirical data are lacking, J. D. B. DeBow's Statistical View of the United States indicates that in eight of the 12 states for which data are available, most free blacks lived in families: Connecticut, 74 percent; Maryland, 67 percent; Massachusetts, 60 percent; New Hampshire, 62 percent; New York, 69 percent; North Carolina, 73 percent; Rhode Island, 63 percent; and South Carolina, 79 percent. States in which less than half of free blacks lived in families included Maine (31 percent), Vermont (47 percent), Pennsylvania (32 percent), and Virginia (22 percent).

For these early Africans in America, two factors greatly contributed to stable family life: one was Africans' strong commitment to family; the other comprised their political, economic, and social environmental concerns. Several patterns of marriage characterized the lives of free blacks, who married other free blacks, slaves, whites, or Native Americans.

From 1936 to 1938, about 2,000 former slaves across 17 states were interviewed as part of the Federal Writers' Project. Over 60 percent had been 15 years old or younger at emancipation; most of the others were in their late teens or 20s in 1865. About 85 percent of former slaves interviewed in the South Carolina area around 1937 had clear memories of their mothers, and 76 percent had memories of their fathers. Former slaves in other regions demonstrated similar patterns of memories. Billingsley posited that this illustrated that slavery did not eliminate the family.

Trends Over Time: 1990, 2000, and 2010

According to the U.S. Bureau of the Census, the percentage of married African Americans has decreased over the decades: in 1990, 45.8 percent of African Americans were married; in 2000, 42.1 percent; and in 2010, 38.8 percent.
Conversely, the percentage of African Americans who have never married has increased over the decades. In 1990, 35.1 percent of African Americans had never married; in 2000, this rose to 39.4 percent; and in 2010, it rose again to 42.8 percent. Not only are fewer African Americans married compared to 1950, but fewer are married compared to other ethnic or racial groups. Lower marriage rates among African Americans may, in part, reflect the inability to find potential
partners because of the limited pool of acceptable or marriageable choices. African Americans who do marry are generally marrying later. In 2010, the median age at first marriage for African American men was 30.8 years, and for African American women, 30.3 years. This is higher than for whites (men, 28.3; women, 26.4), Asians (men, 30; women, 26.8), and Hispanics (men, 28.3; women, 25.9).

Regarding marital dissolution, in 1990, 10.6 percent of African Americans were divorced; in 2000, 11.5 percent; and in 2010, 11.7 percent. The research of Joseph Veroff and colleagues revealed that for African American women married to African American men (but not vice versa), the risk of divorce decreased as the level of their education increased, perhaps because among African American men, higher levels of education provide opportunities for more viable alternatives to their current relationships.

Family Stressors

Just as the historical legacy of slavery has shaped the lives of African American families, so have contemporary contextual factors—most notably financial strain and racial discrimination. Even in the 21st century, African American families tend to have fewer socioeconomic opportunities, which in turn results in lower levels of socioeconomic status. A disproportionate number of African Americans live in poverty, and African American women have a long history of contributing financially to their households out of necessity.

Economic factors directly contribute to family outcomes. Increased unemployment among African American men has been linked to decreased marriage rates among African Americans, and among those who are already married, unemployment of African American men has been linked to marital dissolution. This is not meant to suggest that all African Americans are impoverished.
African American families are well represented in the ranks of the middle and upper classes; however, discrimination has contributed to racial disparities in the distribution of wealth. Interestingly, in the 1930s and 1940s, sociologist W. E. B. Du Bois argued that economic hardship and discrimination led to African American female-headed families, rather than African American female-headed families leading to economic hardship and discrimination.

Racial disadvantages are not totally explained by socioeconomic factors. Experiencing racial
discrimination on a regular basis negatively affects both physical and mental health. In addition, chronic racial discrimination intensifies the impact of other stressors. Some researchers have argued that experiencing continuing racial discrimination generates feelings of frustration, which are then associated with greater reactivity to adverse life circumstances. This explanation is consistent with findings reported by Velma McBride Murry, P. Adama Brown, Gene H. Brody, Carolyn E. Cutrona, and Ronald L. Simons, whose studies found that when African American mothers experienced high levels of racial discrimination, their relationships with significant others and with their children were of poorer quality than those of African American mothers who did not.

Childbearing and Parenting

Childbearing and marriage are becoming increasingly separated. This trend is evident in American society as a whole, but it has been observed among African Americans longer. Despite the general trend, there has been a significant decline in the birth rate of married black women. In 1980, the birth rate for married African American females aged 15 to 44 years (number of births per 1,000 women) was 89.2; in 1990, it dropped to 79.7; and in 1999, it dropped again to 67.3. More recent census data indicate that the birth rate per 1,000 unmarried black women was 90.5 in 1990; 70.5 in 2000; 67.8 in 2005; and 72.5 in 2008. Thus, although the birth rate for unmarried black women was higher in 2008 than in 2005, it was still significantly lower than the 1990 rate.

Researchers have found that the consequences of parenting styles vary across race, ethnicity, and household structure. For example, an authoritative parenting style is positively associated with higher school grades for Latino and white adolescents, but not for Asian or African American adolescents.
Some researchers posit that children living in disadvantaged, high-crime areas benefit more from a tough, restrictive type of parenting (authoritarian rather than authoritative) because dangerous neighborhoods require parents to exercise higher levels of control and very firm parenting practices as a means of survival and protection. The authoritarian parenting approach, coupled with nurturing, warm parent-child relationships (sometimes referred to as no-nonsense parenting), has been associated with self-reliant or self-regulated adolescents. This helps explain why corporal punishment, when it is part of supportive, involved parenting, is not always related to negative child outcomes, particularly among African American families.

Enduring Strengths

Religiosity has been noted as a strength of the African American family. It often serves as a coping resource for handling stressful life events, perhaps because it underscores the role that faith plays in making sense of the world. For African Americans, the church has also served as a means of organizing the community and as a conduit of social expression. African Americans are affiliated with a number of different religious groups, including Baptists, Seventh-Day Adventists, Roman Catholics, and Pentecostals; their varied religious affiliations thus also contribute to their heterogeneity.

In noting the strengths of African American families, it is important to mention the extended family, which has been described as an interdependent system of kinship bonded together by feelings of responsibility and obligation. This interdependence is perhaps reflected in the Pew Research Center's report indicating that in 2008, about 23 percent of African Americans (compared to only 13 percent of whites) lived in multigenerational households.

Chalandra M. Bryant
University of Georgia

See Also: Equal Rights Amendment; Extended Families; Fertility; Gender Roles; Moynihan Report; Parenting; Parenting Styles; Slave Families; Working Mothers.

Further Readings
Billingsley, Andrew. Climbing Jacob's Ladder: The Enduring Legacies of African American Families. New York: Simon & Schuster, 1992.
Billingsley, Andrew, and Amy T. Billingsley. "Negro Family Life in America." Social Service Review, v.39 (1965).
Bryant, Chalandra M., et al. "Marital Satisfaction Among African Americans and Black Caribbeans: Findings From the National Survey of American Life." Family Relations, v.57 (2008).
Bryant, Chalandra M., et al.
"Race Matters, Even in Marriage: Identifying Factors Linked to Marital Outcomes for African Americans." Journal of Family Theory and Review, v.2 (2010).
DeBow, J. D. B. Statistical View of the United States. Washington, DC: A. O. P. Nicholson, 1854.
Hardaway, Cecily R., and Vonnie C. McLoyd. "Escaping Poverty and Securing Middle Class Status: How Race and Socioeconomic Status Shape Mobility Prospects for African Americans During the Transition to Adulthood." Journal of Youth and Adolescence, v.38 (2009).
Johnson, Leanor Boulin, and Robert Staples. Black Families at the Crossroads: Challenges and Prospects, rev. ed. San Francisco: John Wiley & Sons, 2005.
McAdoo, Harriette P., ed. Family Ethnicity: Strength in Diversity. Thousand Oaks, CA: Sage, 1999.
McLoyd, Vonnie C., Nancy E. Hill, and Kenneth A. Dodge, eds. African American Family Life: Ecological and Cultural Diversity. New York: Guilford Press, 2005.
Moynihan, Daniel Patrick. The Negro Family: The Case for National Action (The Moynihan Report). Washington, DC: Office of Policy Planning and Research, U.S. Department of Labor, 1965.
Murry, Velma M., P. Adama Brown, Gene H. Brody, Carolyn E. Cutrona, and Ronald L. Simons. "Racial Discrimination as a Moderator of the Links Among Stress, Maternal Psychological Functioning, and Family Relationships." Journal of Marriage and Family, v.63 (2001).
Wilson, William J. The Truly Disadvantaged: The Inner City, the Underclass, and Public Policy, 2nd ed. Chicago: University of Chicago Press, 2012.

Agnostics

Agnostics believe that the existence of a god is unknowable. The term agnostic originates from the Greek word agnostos, meaning "unknown" or "unknowable." Much of the research on agnostics is conflated with research on atheists and secular Americans. Most past research on secularism has concentrated on atheism; thus, much less is known about agnostics. Even less is known about the effects of secularism on family life because most literature on the topic has focused on the benefits of religion.


Agnostics and secular Americans are part of every demographic category, but certain patterns stand out. Estimates suggest that 12 percent of Americans identify as either atheist or agnostic, and 75 percent of those are men. Most are young, and 42 percent of agnostics have graduated from college, well above the national average of 27 percent. Agnosticism is more widespread among liberal Protestants, Jews, and those with no religious affiliation. Under 5 percent of liberal Protestants and Episcopalians report being agnostic, but the figure is much higher among the nonaffiliated and Jews: nearly 19 percent of religiously nonaffiliated people and 23 percent of Jewish Americans report being agnostic.

The Pew Survey on Religion and Society finds that agnostics and atheists are two of the least popular groups in American society. Because most Americans continue to profess religious belief, atheists and agnostics have been labeled as outsiders. This general dislike of secular Americans by those who profess religious beliefs is highest in regions where conservative and fundamentalist religious groups predominate. The stigma attached to being agnostic can lead to a sense of isolation, rejection, or alienation from family, colleagues, and peers.

Generally, rates of religious participation drop when young adults leave the parental home and increase again when they start families. This pattern holds true for both agnostics and other secular groups. Agnostics have been found to join religious groups throughout their lives, especially while raising children. Some parents cite the need for an organization that provides moral teachings, whereas others say that they want to keep an open dialogue by exposing their children to various religious teachings and having them form personal opinions about the existence of God.
Precise statistics on the number of agnostic parents who raise their children in religious organizations are not available, but agnostic, atheist, and secular parents usually raise agnostic, atheist, or secular children, whether or not the children experience organized religion in childhood.

Findings on religion and divorce have been mixed over time, with some literature showing higher rates of divorce in nonreligious marriages, and other studies showing no correlation or higher rates of divorce among religious couples. One study on religion and divorce shows that divorce affects religious
and nonreligious Americans in roughly equal measure.

Many studies of the effects of religion on family life suggest that religion contributes to the overall health and happiness of the family. Much of this research stood uncontested for years, until scholars focused on the secular and nonreligious population to understand the effects of secularism on family life. The results have been mixed: some studies show that being part of any civic or social group increases health and happiness, whether the group is religious or nonreligious, while other research has found that religious organizations provide a protective canopy for families that extends beyond what other civic or social organizations can provide.

When one partner in a marriage or relationship is religious and the other is agnostic or atheist, friction can result. A secular spouse or partner can also have strained relationships with the parents or other family members of the religious spouse or partner. Studies have shown that agnostic or atheist spouses will sometimes adopt the religious preference of their spouse, or at least attend religious services, as a way of easing the tensions that their secular status causes. When it comes to children, the higher a parent's level of religiosity, the less likely he or she is to approve of a child's romantic relationship with or marriage to an atheist or agnostic. Members of conservative religions that exhibit higher levels of religiosity, such as Christian Evangelicals and Roman Catholics, are less likely to approve of these mixed relationships; consequently, strains in family dynamics and relationships may be more pronounced.

Much research remains to be conducted to distinguish the life experiences of atheists and agnostics, to clarify findings on divorce and agnosticism, and to identify the parenting practices of agnostics.
With the increase in Americans identifying as nonreligious or secular, there has been a corresponding increase in research on the secular population. Sociologists who study religion have also begun to focus on the exclusionary practices of religion to better understand what the nonreligious and secular population experiences. As this body of research grows, knowledge of agnostics and the family should also increase.

Monika Myers
Michael Wilson
Arkansas State University

See Also: Atheists; Catholicism; Divorce and Religion; Education, College/University; Protestants.

Further Readings
Ecklund, Elaine, and Kristen Schultz Lee. "Atheists and Agnostics Negotiate Religion and Family." Journal for the Scientific Study of Religion, v.50 (2011).
Gutting, Gary. "Religious Agnosticism." Midwest Studies in Philosophy, v.37 (2013).
Wills, David W. Christianity in the United States: A Historical Survey and Interpretation. Notre Dame, IN: University of Notre Dame Press, 2005.
Zuckerman, Phil. "Atheism, Secularity, and Well-Being: How the Findings of Social Science Counter Negative Stereotypes and Assumptions." Sociology Compass, v.3 (2009).

Alan Guttmacher Institute

The Alan Guttmacher Institute's primary focus is sexual and reproductive health and rights. The overriding goal of the institute is achieving high standards of sexual and reproductive health for people throughout the world. Thus, this nonprofit organization works to facilitate sexual health, improve reproductive health, and protect abortion rights, both in the United States and across the globe. The institute employs a multipronged approach in pursuit of this goal, engaging in research, policy analysis, and education efforts. Through this interrelated approach, the institute hopes to foster strong programs and sound policies.

Key Principles

Four key principles guide work at the Guttmacher Institute. The first is envisioning a future in which all individuals feel free to and are comfortable exercising their rights regarding sex, reproduction, and family planning. This vision entails both social and policy support for individual decisions on pregnancy and birth. Also central are support for positive and stable parenting and the elimination of gender inequities.

The second principle centers on an encompassing view of sexual and reproductive health for women
and men. This perspective begins by considering the needs of adolescents and continues throughout the lifespan, covering several areas: forming responsible and satisfying sexual relationships, avoiding unwanted pregnancies, having and understanding the right to safe and legal abortion, preventing and treating sexually transmitted infections, and attaining healthy pregnancies and subsequent childbirth.

The third guiding principle is priority based on need. Although the institute is interested in everyone's health and rights, special attention is devoted to persons and groups whose access to or use of information, materials, benefits, and/or services may be hindered by factors such as discrimination, geographical boundaries, age, or socioeconomic status. The institute is devoted to serving those individuals and groups with the greatest need.

The fourth principle centers on a commitment to both the United States and the world. One chief initiative is working domestically to develop new, and improve existing, sexual and reproductive health programs and policies. The institute also assesses the effects of such policies internationally and engages in international policy advocacy. Toward that end, the institute chronicles inequities in sexual and reproductive health and rights by country and assists efforts to promote progressive policies in specific countries.

History of the Institute

The Alan Guttmacher Institute was created in 1968 as the Center for Family Planning Program Development. In its early days, the center was part of the organizational structure of the Planned Parenthood Federation of America, although it was externally directed by a national council of advisors. Alan F. Guttmacher, a renowned obstetrician, gynecologist, political activist, and one-time president of Planned Parenthood, played a key role in the center's development.
After Guttmacher's death in 1974, the center was renamed the Alan Guttmacher Institute in his honor and was subsequently independently incorporated.

At the time of the center's inception, public awareness of unplanned pregnancies and the personal and societal consequences of such childbearing was increasing, and the U.S. government was developing both domestic and international programming in these areas. For example, Congress
was working to ensure domestic access to birth control and to enact international programs for population assistance, such as family planning and reproductive health services. The center began conducting nonpartisan research and policy analysis that helped guide public education efforts and government policymaking. This has remained consistent across the organization's 45-year history and continues to guide its actions.

The institute's work has had far-reaching effects on the lives of U.S. families. Information and support have increased for helping pregnant adolescents determine whether to give birth or choose an abortion and, in cases where giving birth is selected, to decide between adoption and raising the child. Further, access to birth control has reduced unplanned pregnancies and contributed to a decrease in abortion rates. Access to information and to medical and support services has contributed to the sexual and reproductive health of U.S. families, as well as families in several other nations.

In 2010, Philanthropedia ranked the Guttmacher Institute the top nonprofit for reproductive health, rights, and justice. The organization is also rated a Top-Rated Charity by CharityWatch and has received the Seal of Excellence from the Independent Charities of America.

Structure of the Institute

The institute maintains offices in Washington, D.C., and New York, and has a staff of approximately 80, ranging from social scientists and policy analysts to communication specialists and financial personnel. A 39-member board, composed of well-known experts and community leaders from the United States and abroad, oversees the institute's efforts and its annual budget. The budget derives mainly from private U.S. foundations, global organizations, and governments, and to a lesser extent from U.S. government contracts and individual giving. The institute's Web site provides numerous resources, including information on institute analyses and publications.

Joy L. Hart
University of Louisville

See Also: Abortion; Adolescent Pregnancy; HIV/AIDS; Planned Parenthood; Sex Information and Education Council of the United States.


Further Readings
Bass, Hannah. "Guttmacher Institute." British Medical Journal, v.344 (2012).
Boonstra, Heather D., et al. Abortion in Women's Lives. New York: Guttmacher Institute, 2009.
Singh, Susheela, et al. Adding It Up: The Benefits of Investing in Family Planning and Newborn and Maternal Health. New York: Guttmacher Institute and United Nations Population Fund, 2009.

Alcoholism and Addiction

A 2001 study by the National Institute on Drug Abuse found that less than 30 percent of the 21.6 million Americans addicted to alcohol, marijuana, prescription painkillers, and other substances actually get treatment. This prompted President Obama to address the problem by creating a National Drug Strategy to enable more Americans to receive treatment. The annual cost to the U.S. economy from alcoholism alone, from lost work, medical care, and related social problems, has been estimated in the hundreds of billions of dollars. Though rates of substance dependence or abuse have declined since 2000, nearly 43 percent of American adults have grown up with or experienced alcoholism in their immediate family or within a romantic relationship.

When a family member struggles with addiction to alcohol, illicit drugs, or gambling, it has profound effects on the individual, parents, partners, siblings, and children. The prevalence of addictive disorders, and dire consequences that include homelessness, bankruptcy, disrupted family lives, and even death, points to the importance of better understanding how to treat individuals with addiction and how to prevent addiction from taking hold and destroying the lives of individuals and their families.

Diagnosis of Addiction

The 2013 fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) characterizes addictive disorders by a loss of control over use of the substance, the inability to reduce substance use or behaviors, social impairment (e.g.,

New York State Inebriate Asylum, later known as Binghamton State Hospital, was the first institution designed and constructed to treat alcoholism as a mental disorder.

work or legal problems), continued use in spite of negative physical or psychological consequences, reduction in important activities, engagement in hazardous situations, cravings, and an increased need for the substance in order to experience similar levels of stimulation. Finally, the DSM-5 identifies addictions as having continuous cognitive, behavioral, and physiological symptoms that influence brain functioning.

Abuse can involve many diverse substances and behaviors, including legal substances (e.g., alcohol, prescription medications, and caffeine), illegal or regulated substances (e.g., opiates and marijuana), and potentially problematic behaviors (e.g., gambling). Research shows that males are more likely than females to abuse substances, though women are more likely than men to experience negative effects, such as interpersonal violence or depression, from substance abuse. Addictions and substance abuse affect individuals
of all genders, races, ethnicities, classes, levels of education, and religions. Differences by ethnicity suggest that white individuals are more likely to abuse a wider range of substances, though minority women are more likely to experience substance-related problems than white women. It is a myth that all addicted individuals are unstable; many people with addictions have stable jobs and stable family lives and can appear high functioning. The onset and progression of addiction are also highly variable, with no absolute pathway for developing an addiction. Strong evidence refutes the idea that addiction is willful behavior; no one would choose the negative outcomes that accompany addiction.

Typically, addiction starts with casual patterns of use and then degrades into more frequent use with negative social and occupational consequences, such as losing a driver's license or falling into financial trouble. When addictive behavior continues in spite of negative consequences, the disease meets the criterion for abuse. Ongoing use or addictive behaviors can lead to conditions of psychological or physiological dependence that can be very difficult to break. Under conditions of dependence, individuals are likely to experience cravings and withdrawal effects; at this point, acquiring the addictive substance can become a primary behavior. Over time, the individual's physical health begins to deteriorate. Physical changes occur in the brain that make it increasingly difficult to make decisions, exercise good judgment, learn or retain new information, or exercise behavioral control. In extreme cases, abuse and dependence can lead to dangerous outcomes, including death from accidents, overdose, and suicide.

Theories of Addiction

Since the 1960s, addiction theories have shifted from blaming the individual's weak moral character to understanding the many factors that can contribute to addiction.
Early family theorists focusing on alcoholism blamed family members and negative interaction patterns for causing the disease. For example, in describing the "alcoholic marriage," theorists believed that "flawed" women who were needy or insecure paired with substance-abusing partners as self-punishment or to establish superiority. Subsequent studies of genetics and the brain have led to more widely accepted theories holding that addiction is a disease that results from complex
interactions of relational, genetic, and environmental conditions. The medical model sees addiction as a chronic illness, similar to diabetes or asthma, that is not curable but can be managed over a lifetime. As with other illnesses, the way an individual responds to treatment or manages his or her disease benefits from a thorough evaluation of the person's family health history, environmental stressors, and available resources.

Influences on Addiction

Addiction results from complicated interactions of internal individual factors (e.g., cognitive deficits and impulsivity) and contextual variables (e.g., lack of economic opportunity, social pressures, and family environments). Individual risk factors for addiction include age, psychiatric disorders, and neurological functioning. Contextual factors include the immediate environment (e.g., peers who use substances) and societal factors (e.g., the social acceptability of drinking). A substantial amount of research has examined both the family environments in which abuse or addiction develops and the family problems (e.g., legal, financial, and occupational consequences) that emerge as a consequence of addiction.

Research has identified family context variables that precipitate the development of addictions. The National Institutes of Health's (NIH) genetic studies reveal that, while no single gene or pattern of genes causes or prevents addiction, genetic factors account for between 40 and 60 percent of a person's vulnerability to addiction. Whether the mechanism is genetic or environmental, when a parent is addicted to a substance, there is a higher likelihood that his or her child will also become addicted. Alcoholic individuals are six times more likely than nonalcoholic individuals, and twice as likely as individuals with other psychological disorders, to have a parent with substance dependence. Even when raised apart from addicted parents, biological children of addicted parents remain at increased risk for addictions.
Longitudinal evidence indicates, however, that parental use or addiction alone does not determine whether a child will become addicted. It is impossible to predict with accuracy a child's future behavior from parental use or addiction habits. Other family contextual variables have also been identified. With regard to alcohol, birth order has been implicated in the development of alcoholism.


Alcoholism and Addiction

Findings support that last-born children have a higher likelihood (a ratio as high as 1.26 to 1) of developing a substance use disorder than firstborn children. That finding, however, is tempered by the fact that many last-born children do not develop substance disorders and many firstborn children do. Family problems, including parent conflict, early childhood trauma, and parents who model unhealthy behaviors, have also been implicated as contributing to the development of addiction. Early family systems theorists believed that family members operated as dependent parts of an integrated system, with various family members taking on heavier emotional loads or absorbing more family anxiety, so that one family member may become addicted to take attention away from other family problems. Thus, families develop emotional overdependence that leads to unhealthy emotional interactions. Other family theorists hypothesized that family members take on archetypal roles (e.g., peacemaker or defiant child) and that those roles serve as context for the development of addictive behaviors. Other theories relate to aspects of codependent behavior, whereby, in an effort to help a struggling or addicted family member, other family members care for their loved one in ways that enable, rather than interrupt, addictive behaviors. In spite of research evidence correlating family difficulty with substance abuse, it remains clear that many individuals emerge from problematic family contexts without addiction, and other individuals emerge from less problematic family environments with addiction.

Influence of Addictive Behaviors on the Family
Addictive behaviors take a toll on families. Family members of addicted individuals are more likely than the general population to experience emotional or behavioral problems.
Within family and romantic relationships, alcohol abuse is related to less effective communication, less cooperation, more stress, and greater levels of interpersonal conflict and violence. Partners of those who abuse alcohol report elevated levels of anxiety and depression and more physical health problems than partners of nonaddicted individuals. The National Association for Children of Alcoholics (NACoA) estimates that more than 28 million Americans are children of an addicted individual, and the U.S. Department of Health and Human Services estimated that more than 6 million children resided in a household with an addicted parent in 2011. Children with an addicted parent are more likely than children of parents without an addiction to experience mental health disorders (including depression and anxiety), academic problems, behavioral disorders such as ADD, medical problems, and child abuse. Many reports of child abuse, neglect, and foster care placements directly relate to substance abuse. Problems related to gambling include increased risks of children developing gambling problems, increased risk of child abuse and neglect, and partners reporting more domestic abuse.

Family Therapies
It can be extremely difficult to recognize or accept addiction in a close family member; families may be reluctant to confront someone they love for fear of dealing with this difficult disease. They may fear that confronting the addictive behavior could result in betrayal, shame, or isolation. When symptoms start to become problematic, a brief private conversation, acknowledgment of distressing behaviors, or an offer to help find treatment may be most effective and a powerful way of expressing compassion and commitment to the individual. Family treatments are numerous, and research supports that involving the family can be critical to treatment, regardless of the age of the individual experiencing the addiction. Though physical recovery can be relatively fast and psychological recovery can take many years, research suggests that recovery is an attainable goal for most people. The most effective form of treatment is total abstinence within a supportive treatment program. Response to treatment is highly variable, with most people benefiting when they have both the internal motivation to stop addictive behaviors and a family and community that engage in the treatment and support their recovery.
Several family treatment models are supported by research. These include individual approaches that periodically integrate family members; couples therapy; family group sessions; and peer-led self-help family groups. Most family therapies are aimed at helping family members understand the perspective of the addicted individual, improving communication, identifying negative interaction patterns, and increasing helpful forms of support. Both informal interventions and formal interventions conducted by a professional can be influential in encouraging an individual to seek help for addictive behaviors. Shifting family dynamics can also help individuals stop addictive behaviors and avoid relapse. Couples therapy has been found effective for preventing relapse, particularly for women. Family systems therapy helps families evaluate the systems within which alcohol and substance use is entrenched. That approach assesses family goals, examines the relationship between drinking behaviors and family interactions, and looks for mechanisms to create change. While family systems treatment on its own may not be as effective as individual treatment, there is evidence to suggest it can be more efficacious when integrated with other behavioral treatments. There is no one-size-fits-all treatment approach, though treatments that educate and involve individuals and supportive family members are likely to yield the best outcomes. Individual treatments that are effective for alcohol dependence include motivational enhancement therapy, cognitive behavioral therapy, and 12-step programs such as Alcoholics Anonymous (AA). NIH studies also support brief interventions, which can be effective for reducing motivation to drink and preventing high-risk drinking. In addition to individual treatments, the following family treatments, with providers recommended by the National Association of Addiction Treatment Providers, are available:

• Behavioral couples therapy
• Unilateral family therapy
• Structural-strategic family therapy
• Multifamily therapy
• Peer-led family support groups such as Al-Anon and Nar-Anon
• Community approaches such as the Counseling for Alcoholic Marriages Project and the Community Reinforcement and Family Training Program

As researchers better understand addiction, in particular the brain chemistry associated with it, they will be better able to specify addiction treatments. Developing additional pharmacologic approaches is a primary goal, along with identifying how best to combine several types of therapy and to better match treatment with biology.


Family dynamics do not cause addictions; complex interactions between individual, interpersonal, and contextual factors determine addictive outcomes. While only an individual can stop himself or herself from engaging in addictive behavior, families can be helpful when they share their concerns for the addicted individual, express empathy, help the individual find treatment, and evaluate their patterns of interaction to assist the individual in attaining sobriety and avoiding relapse.

Shannon Casey
Alliant International University

See Also: Family Counseling; Parenting; Social History of American Families: 1981 to 2000; Social History of American Families: 2001 to the Present.

Further Readings
Alcoholics Anonymous. Alcoholics Anonymous: The Story of How Thousands of Men and Women Have Recovered From Alcoholism, 4th ed. New York: Author, 2001.
Conyers, B. Addict in the Family: Stories of Loss, Hope, and Recovery. Minneapolis, MN: Hazelden Foundation, 2003.
Kuhar, Michael. The Addicted Brain: Why We Abuse Drugs, Alcohol, and Nicotine. Upper Saddle River, NJ: FT Press, 2011.
Margolis, R. D. and J. E. Zweben. Treating Patients With Alcohol and Other Drug Problems: An Integrated Approach, 2nd ed. Washington, DC: American Psychological Association, 2011.
McCrady, B. S., B. O. Ladd, and K. A. Hallgren. "Theoretical Bases of Family Approaches to Substance Abuse Treatment." In Treating Substance Abuse: Theory and Technique, 3rd ed., S. T. Walters and F. Rotgers, eds. New York: Guilford Press, 2012.

Alimony and Child Support

The term alimony is based on a combination of the Latin noun alimonia (sustenance) and the verb alere (to nourish). Alimony is the payment of monies by one spouse to another when a couple separates or divorces. This generic term refers to a number of different types of payments (both temporary and permanent) and is used interchangeably with maintenance and spousal support. Historically, alimony evolved from a husband's common law duty to support his wife during marriage. In an attempt to continue the standard of living achieved during marriage after the dissolution of the relationship, the amount was calculated on a case-by-case basis, based upon the wife's need and the husband's ability to pay. In the 21st century, alimony has become gender-neutral, and states are moving toward limits on awards.

In an intact family, there is no particular duty of child support. As long as there is no abuse or neglect, a parent can provide (or not provide) for his or her children as he or she sees fit, without state interference. However, if a family is not intact, the issue of child support arises. Child support has been interpreted as a child's right to receive support from his or her parent(s) beginning at birth. However, a support order cannot be established for a child born to unmarried parents until paternity has been established. Once paternity is legally established, a child has legal rights and privileges, such as inheritance, medical and life insurance benefits, and Social Security and/or veterans benefits. Through statutorily adopted formulae, the amount of child support owed is determined in conjunction with whether a parent has child custody (or parental time-sharing), the amount of time spent in each parent's custody and care, and the number of children.

English Rules
The English ecclesiastical courts' practice of awarding alimony was applicable only to divorce a mensa et thoro (from bed and board), a precursor to modern-day legal separation. Under canon law, there was no absolute divorce; instead, a marriage found to be void ab initio (from the beginning) could be annulled.
Divorce a mensa et thoro required proof of adultery or cruelty. With broad discretion, the ecclesiastical judge often granted the wife one-third of either the husband's income (her dower amount) or the combined income of the spouses; both awards were based upon the wife's need and the husband's ability to pay. Because wives had no common law duty to support their husbands, alimony for a husband was not provided. Prior to the Divorce Act of 1857, absolute divorces were available solely through private acts of Parliament and were seldom granted. In 1857, ecclesiastical courts were usurped, and absolute divorce via judicial decree was introduced in England.

Early America
Alimony upon absolute divorce was accepted wholesale and without question in the early American commonwealths and territories. As in England, alimony was intended to provide continued maintenance for the wife, who had unequal legal status; it remained based upon the wife's need and the spouse's ability to pay. In addition, alimony was meant to serve as supplemental income for the former wife (as long as she was faultless in the divorce), due in part to the common law title system, under which the former husband received the majority of the marital property upon divorce.

Colonial America adopted the English paternal preference rule for custody determinations. As a result, from the colonial period through the early 19th century, mothers seldom won custody of their children in divorce cases. But by the 1850s, the presumption in custody cases had changed, and maternal preference became the rule. However, in the 19th century, newly divorced mothers nearly always fell into poverty, despite their predivorce social standing. By the 1920s, the Tender Years Doctrine, which stipulated that upon divorce any children from birth through age 7 were to be placed in their mother's sole custody, was the rule for child custody in the United States. Because the duty of child support arose from the English law dictating the father's right to the custody and services of the child, it ended if the father lost custody upon divorce. Despite this, early in the 19th century, American courts began to hold that fathers had a legal support duty to their children.
In addition, as an early precursor to child support, American states adopted portions of the Elizabethan Poor Law of 1601, which created a duty for parents to provide for their minor and adult children if those children would otherwise become "paupers." By the late 1800s, at least 11 states had criminalized paternal abandonment or a father's nonsupport of his minor children.

Modern Law
In the 1970s, the general perception of the bench, bar, and citizenry was that alimony was frequently (if not always) awarded when requested. Despite this widespread supposition, in 1972, a significant study of California court orders found that only 15 percent of divorcing wives were awarded alimony. In 1979, the U.S. Supreme Court struck down Alabama's gender-based alimony statute on equal protection grounds. Nationwide, gender-neutral alimony statutes were the result. Nevertheless, the appropriateness of alimony has been questioned since the 1960s. Upon the implementation of no-fault divorce in the 1970s, which negated the punitive rationale for alimony, questions regarding the appropriateness of alimony sharply rose. Reflecting a so-called clean-break approach to divorce, the Uniform Marriage and Divorce Act (UMDA; 1970) favored property distribution over alimony to provide for the former family's needs after the divorce. The goal of the UMDA was self-support for the dependent spouse.

Through the 1900s, there were three broad categories of alimony: (1) temporary alimony (alimony pendente lite), typically awarded from the time of initial separation until the issuance of a final award; (2) lump-sum alimony, a one-time payment of an alimony award (criticized by many for being too similar to a property distribution); and (3) final alimony, which may include any of the following:

• Rehabilitative: Provides the recipient spouse the ability to achieve the education, training, or experience necessary to develop the skills or credentials to gain self-sufficiency. Generally limited to three to five years.
• Bridge-the-gap: Covers identifiable, short-term expenses needed to transition from being married to being single. Generally limited to one to two years, and often a one-time lump sum.
• Reimbursement: Not tied to need, this type of support is meant to repay (some of) the monies the recipient spent during the marriage to assist the other spouse in achieving a degree or further professional success. Indeterminate in duration.
• Permanent/periodic/indefinite: May be granted after a long marriage (generally more than 10 years) if the judge concludes that the dependent spouse will need support indefinitely. Some states do not allow permanent support. Unlimited duration.

Although there is no absolute rule guaranteeing the modification of an alimony award, the court generally considers whether there has been a material, unanticipated/unforeseeable change in circumstances for either party. By the 1990s and into the 2000s, state legislatures and judges had begun to introduce limits on alimony, particularly in regard to short-term marriages. In addition, the statutory factors for determining alimony expanded beyond need or ability to pay and focused on the facts of the individual case.

After a decade of review, in 2002, the American Law Institute (ALI) introduced its "Principles of the Law of Family Dissolution: Analysis and Recommendations" (Principles). The alimony provisions interpreted payments as compensation for intramarriage economic losses by one of the spouses, such as lost employment opportunities, the lost ability to acquire education or job training, or a loss on the return of an investment in human capital (degrees or professional licenses earned during the marriage). The Principles contain a two-step calculation process: (1) calculate the disparity in the spouses' incomes at divorce; and (2) multiply that disparity by a percentage based on either the length of the marriage or the length of the child-care period.

In 2007, the American Academy of Matrimonial Lawyers (AAML) recommended a formula for most alimony payments: 30 percent of the payor's gross income, minus 20 percent of the payee's gross income (not to exceed 40 percent of the parties' combined gross income). The duration of the award is determined by multiplying the length of the marriage by a factor tied to its length: 0 to 3 years, 0.3; 3 to 10 years, 0.5; 10 to 20 years, 0.75. Marriages over 20 years result in permanent alimony.
Deviation factors that might affect this formula include age or health, unusual needs, resultant inequity, missed job opportunities, tax consequences, and agreement of the parties.

In its 2012 review of family law in the United States, the Family Law Quarterly reported that 41 states and Washington, D.C., use this statutory-factors analysis in determining whether to grant alimony; 21 states and Washington, D.C., do not consider marital fault in alimony decisions; 47 states and Washington, D.C., consider the standard of living; and 41 states and Washington, D.C., consider the status of the custodial parent. Payments classified as permanent, temporary, and lump-sum alimony are deducted from the income of the payor and added to the income of the payee for tax purposes, although arguments have been made that rehabilitative and reimbursement alimony should not be considered taxable income. Under a 2005 amendment to federal law, alimony payments are no longer dischargeable in bankruptcy.

Under modern law, the duty of child support is independent of custody rights. Child support is owed until a child reaches 18 years of age or graduates from high school. There is an exception to this general rule for mentally or physically disabled children who are incapable of self-support; in those cases, child support is indefinite. Although states are divided on the issue, some require parents to pay postmajority support for education-related tuition, which may include college, graduate school, professional school, or college preparatory school.

Under common law, a stepparent had no legally enforceable support obligation for a stepchild during the marriage to the child's parent. If a relationship or obligation arose during the marriage, it was terminable at will by the stepparent by divorcing or separating from the child's natural parent. Historically, upon divorce from a child's parent, the stepparent had no duty of continued child support. In the modern era, with the increase in the number of stepfamilies, some states have statutorily imposed limited support obligations. Unlike alimony, child support is neither deductible from the payor parent's income nor reported as taxable income by the payee parent; like alimony, it is not dischargeable in bankruptcy.
The Child Support Guidelines
In the United States, family law rules and regulations have generally been state driven, rather than federally mandated. The calculation and collection of child support is no exception; as a result, by the 1980s, there were approximately 54 different plans implemented within the 50 states, Washington, D.C., and the U.S. territories. Despite federal legislation dating back to the 1950s mandating interstate enforcement of child support orders, it was nearly impossible for the majority of parents to collect payments. Nationwide, there are three model child support guidelines utilized by the states:

• Income shares model: This is the most common model and is based on the concept that the child should receive the same proportion of income that she or he would have received in an intact family. Both parents' incomes are added together with the actual expenses for child care and extraordinary medical care. The total amount is prorated between the parents based upon their proportionate shares of income.
• Percentage of obligor's income model: This model is followed in 10 states and Washington, D.C., and assumes that the custodial parent will provide for the child without being ordered to do so. Its two variations are the flat percentage (based upon the number of children) and the varying percentage (based on the payor's income).
• Delaware Melson formula model: Followed in only three states, this model is based on the theory that children's needs are primary. Therefore, parents are entitled to keep only sufficient funds to meet their basic needs and retain employment. The Melson formula is a hybrid of the income shares and percentage of income models.

Either parent can petition to have a child support order reviewed at least every three years, or whenever there is a substantial change in circumstances.

Ending Support Payments
Circumstances in which child support payments end before the child reaches the age of majority include the death of the payor parent; the emancipation of the minor child; the child's leaving the custodial parent's home and refusing to follow the parents' wishes; the child's earning enough money to be self-supporting; adoption of the child by a parent who replaces the payor; termination of parental rights; and the death of the child. Most states provide that alimony ends when either the recipient or the payor dies, when the recipient remarries, or, often, when the recipient cohabits with someone in a marriage-like relationship.

Cynthia G. Hawkins
Stetson University College of Law

See Also: Child Custody; Child Support Enforcement; Cohabitation; Demographic Changes: Cohabitation Rates; Demographic Changes: Divorce Rates; Divorce and Separation; Shared Custody.

Further Readings
Abramowicz, S. "English Child Custody Law, 1660–1839: The Origins of Judicial Intervention in Paternal Custody." Columbia Law Review, v.99 (1999).
Kisthardt, Mary Kay. "Re-Thinking Alimony: The AAML's Considerations for Calculating Alimony, Spousal Support or Maintenance." Journal of the American Academy of Matrimonial Lawyers, v.21 (2008).
Orr v. Orr, 440 U.S. 268 (1979).
Vernier, Chester and John Hurlbut. "The Historical Background of Alimony Law and Its Present Statutory Structure." Law and Contemporary Problems, v.6 (1939).
Weitzman, Lenore and Ruth Dixon. "The Alimony Myth: Does No-Fault Divorce Make a Difference?" Family Law Quarterly, v.14 (1980).

Almshouses

Immigrants from England to the American colonies brought with them notions of common law, including the Elizabethan Poor Law of 1601, which sought to differentiate the poor who deserved aid from those who did not. The colonists in Massachusetts reinterpreted this law to fit their new circumstances, which in turn heavily influenced treatment of the poor in other colonies. Almshouse is generally synonymous with poorhouse in the United States, but it should not be confused with workhouse, a very different institution. Almshouses were established to assist a wide range of individuals unable to support themselves, including the homeless, sick, elderly, unemployed, mentally ill, orphans, alcoholics, out-of-wedlock mothers, and victims of domestic abuse. Children of such "paupers" were also housed and schooled there. In the mid-19th century, many almshouses in rural areas changed their names to poor farms. Still later in that century, with increased professionalization and attempts to reduce the stigma of the poorhouse (a pejorative term reflected even today in the game Monopoly), other names were adopted, such as city home or county home. Though most of these institutions have shut down or been transformed into something different, testimony to their past existence lives on in the names of roads, brooks, bridges, and parcels of land (e.g., Town Farm Road or County Farm Road).

English Poor Laws and Their Adaptation on American Soil
English society was hierarchical in nature, and assumed that an extended family structure of mutually dependent members was the first line of defense against the need for public aid. Failing that, the parish (in England) or the town (in Massachusetts) assumed support of its legal inhabitants. Emulating the Elizabethan Poor Law of 1601, overseers were appointed to dispense money to the poor through outdoor relief programs, which enabled recipients to remain in their homes and subsist on a dole of money, or through indoor relief programs, wherein the poor were taken to and cared for at the local almshouse. The unworthy or idle poor were sometimes put to work in almshouses or sent to a workhouse. These laws served the purposes of caring for the indigent, maintaining the social order, and punishing those deemed unworthy of support. Until the practice was abolished by law in 1794, paupers who were not established residents of a given town were "warned out of town" so that they would not become a tax burden. Failing this, compensation was sought from the pauper's town of origin or, in the case of newly arrived immigrants, from ship captains, until a federal immigration act was passed in 1882.
Determining the geographical area of a pauper's established settlement was a key consideration in determining a town's responsibility for care; thus, the concept was a matter of controversy and subject to change over the years. The inability to care for oneself was an affront both to survival on the American frontier and to later notions of American individualism. Almshouses

were therefore meant to provide only the bare necessities, so as not to encourage or condone laziness. Whenever possible, able-bodied people were put to work maintaining the house or farm in order to cultivate good habits. Breaking moral rules regarding matters such as drunkenness and sex was taboo, and violators were punished in various ways. Often missing from such analyses of the poor was acknowledgment of larger forces at work that mitigated individual responsibility, such as food shortages that forced large-scale immigration, the Industrial Revolution, slavery, sexism, and forms of capitalism that treated people as commodities without acknowledging their humanity.

Organization and Structure of the Almshouse
The overseers of almshouses, called superintendents (almost always male), and their matrons mediated between the town and the day-to-day administration. While considered upstanding citizens, these administrators rarely had experience in social work, and sometimes none in administration or management. The superintendent, matron, and their families often lived on the premises and assumed patriarchal and maternal roles in the facility. Residents of almshouses were called inmates, and most (even the deserving poor) were generally restricted to the premises, though it is unclear whether this was due to social control designed to maintain order or simply the most economically efficient way of doing things. Rules regarding when to get up in the morning, work, and go to bed, along with codes of conduct (both written and unwritten), were strictly enforced. Where possible, children were separately housed, and schools were established for them. Child welfare reformers were of two minds. Some advocated removal from the negative influences of the almshouse, including the parent(s), and placement with stable families. Others called for separate facilities within the almshouse compound, or new institutions altogether. Both mindsets agreed upon limiting or severing the parental role, and little thought was given to aid that would keep families intact.

Conditions and Closure
Conditions within almshouses varied widely. Building maintenance was often neglected. The inmates' diet was designed to mirror that of the working poor who were not confined. Overseers attempted to strike a balance between providing a healthy diet and not one so appealing that it attracted unworthy vagrants and tramps. Room shortages sometimes necessitated crowding residents in the same room: healthy with the sick, men with women, and the mentally ill with those of sound mind. Children who were able learned practical trades. Some were paid low wages to produce goods and services to sustain the almshouse. Others were apprenticed to farmers and craftsmen, a practice that too often amounted to servitude. Almshouses were accused both of perpetuating the institution through low-wage work for economic gain when they retained children, and of shirking their responsibility to the children and society when they did not. It was this same commingling of a diverse population with very different needs, combined with increasing professionalization and specialization, that led to the demise of almshouses. Overseers, superintendents, and matrons were given an impossible task, unimaginable in today's society. While some poorhouses and poor farms held out until the 1960s, most closed in the 1930s and 1940s. Greater empowerment for the working classes, New Deal reforms following the Great Depression, food stamps, and especially the Social Security Act of 1935 displaced local relief programs and provided the "outdoor" aid necessary to close down deteriorating almshouses. Greater outside aid in the form of welfare and Social Security did not, however, address the specialized needs of many former residents of almshouses. While almshouses were shuttered, the populations of jails, public retirement homes, asylums, homeless shelters, children's homes, women's shelters, public hospitals, and other more specialized institutions grew. Public housing aid also skyrocketed.
Addressing the needs of poor families through affordable housing, health care, nutrition, vocational training, rehabilitation for substance abusers, and mental health treatment in today's diverse population is no less a problem now than it was in Elizabethan England.

Douglas Milford
University of Illinois at Chicago

See Also: Domestic Violence; Food Shortages and Hunger; Food Stamps; Homelessness; Immigrant Families; Individualism; Industrial Revolution Families; Nursing Homes; Welfare.





Further Readings
Meltsner, Heli. The Poorhouses of Massachusetts: A Cultural and Architectural History. Jefferson, NC: McFarland, 2012.
Wagner, David. The Poorhouse: America's Forgotten Institution. Lanham, MD: Rowman & Littlefield, 2005.
Wagner, David. Ordinary People: In and Out of Poverty in the Gilded Age. Boulder, CO: Paradigm, 2008.

AMBER Alert

The AMBER Alert Program is a means through which notice of and details about the abduction of a child can be quickly conveyed to the community in order to facilitate a search. More than a decade before this program was initiated, the Missing Children's Assistance Act of 1984 required federal agencies to collect data about the number and characteristics of children who were abducted, missing, or sexually exploited and about the nature and outcome of such cases. The results were shared with law enforcement agencies, practitioners, and policymakers. Two key findings from these studies laid the groundwork for the AMBER Alert Program. The first was the scope of the problem (estimates of the number of children reported missing each year range from 750,000 to 1.3 million), and the second was that studies have consistently found that the speed of response to a child abduction is critical to the safe return of that child. The U.S. Justice Department reports that of those children abducted and murdered by a stranger, 75 percent are killed within the first three hours of their abduction. Thus, in 1996, when a 9-year-old girl was abducted and murdered in Arlington, Texas, the community created a coalition between local law enforcement agencies and the local media to implement a system that would allow for the efficient notification of the entire community if a child was abducted. Since then, the program has spread across the United States, Canada, and other countries, and it has also grown in complexity, with increased coordination across communities. By the first quarter of 2012, the AMBER Alert Program had been credited with making a direct contribution to the safe return of more than 570 children.

An AMBER Alert highway sign alerts motorists to a suspected child abduction in northern California. Highway signs are one of the many distribution methods of the emergency alert system.

The program’s title has two meanings. First, Amber was the name of the girl abducted and murdered in Arlington, Texas; and second, AMBER is an acronym for America’s Missing: Broadcast Emergency Response. Just a few days after Amber’s death, her mother called for and organized efforts to create a national sex-offender registry, and Amber’s parents were present when President Clinton signed the bill requiring such a registry. In the course of their work, it also became apparent that there was more that local police could do in the case of an abducted child. This prompted a news reporter to approach the Dallas police chief about creating a more efficient and quick response system that would solicit the entire community’s help in child abduction cases. This was the first AMBER Alert program.

In 2002, 26 states had AMBER Alert systems in place, and that year President Bush announced improvements in the program and the development of national standards for issuing AMBER Alerts. By 2005, all 50 states had AMBER Alert Programs, and in 2013, AMBER Alerts began to be delivered automatically through the Wireless Emergency Alerts program to millions of cell phone users. Participation in the AMBER Alert Program is voluntary, and generally involves considerable
coordination between law enforcement, media, wireless companies, transportation organizations, and nearby communities. The protocol when a child abduction occurs is to communicate any useful details about the abduction to the public as soon as possible. Useful details include physical descriptions of and clothing worn by the child and the abductor, and the make, model, and license plate number of any vehicle thought to be involved. This notification occurs through commercial, Internet, and satellite radio, broadcast and cable television, the National Weather Service, e-mails, text messages, Facebook and Google notifications, electronic traffic-condition signs, and LED billboards owned by private companies.

To further enhance the program, key figures in its creation and development, along with representatives from various parts of the networks involved in communicating AMBER Alerts, convened at a federally funded conference to compare notes about procedures that are most effective and to create procedures designed to ensure that all parties in the system remain aware of their responsibilities and are able to perform them. With so many different kinds of agencies with distinct roles involved, the complexity of the program is substantial. The conference resulted in the 2012 publication by the Department of Justice of a best practices handbook for AMBER Alert Program participants, with future revisions and network development and coordination coming from national AMBER Alert coordinators, in collaboration with a national advisory panel.

AMBER Alert Protocol

AMBER Alert protocol begins with the issuing of an alert by a local police organization. The criteria for an AMBER Alert have been created to prevent overuse of the system that might limit its effectiveness. The public is likely to pay less attention to the alerts if they become everyday occurrences.
Thus, the recommended criteria for posting an AMBER Alert are that law enforcement confirms an abduction has occurred, the child is at risk of serious bodily harm or death, sufficient information is available to offer to the public to make an alert useful, and the person abducted is 17 years old or younger. Upon meeting these criteria, AMBER Alert data is then entered into the National Crime Information Center (NCIC) system. The most controversial criterion for an alert is
that the child be at risk of serious injury or death. This usually rules out parental abduction, yet because these criteria are only recommendations, many police agencies also choose to issue AMBER Alerts in those cases. Some police agencies also ignore the first criterion, and contend that by the time police are able to verify that an abduction has occurred, much precious time has been lost. A study of 233 AMBER Alerts issued in 2004 revealed that most did not meet the federal criteria for an alert. Half were family abductions, and just 70 of the 233 alerts involved children abducted or unlawfully traveling with adults who were not their legal guardians. However, given the need to respond with urgency to all cases of missing children, the federal government has begun to train Child Abduction Response Teams (CART). These teams are designed to assist local authorities in missing children cases of all kinds and are considered an essential next step, after the successful implementation of the AMBER Alert Program, in increasing the rate of safe recovery of missing children.

Mel Moore
University of Northern Colorado

See Also: Child Abuse; Child Safety; Childhood in America; Center for Missing and Exploited Children; National Center on Child Abuse and Neglect.

Further Readings
Griffin, Timothy. “Empirical Examination of AMBER Alert Successes.” Journal of Criminal Justice, v.38/5 (2010).
Griffin, Timothy and Monica K. Miller. “Child Abduction, AMBER Alert, and Crime Control Theater.” Criminal Justice Review, v.33 (2008).
Miller, Monica K., et al. “The Psychology of AMBER Alert: Unresolved Issues and Implications.” Social Science Journal, v.46/1 (2009).
National Center for Missing and Exploited Children. http://www.missingkids.com/NCMEC (Accessed July 2013).
U.S. Department of Justice. “AMBER Alert Best Practices.” http://www.ojjdp.gov/pubs/232271.pdf (Accessed July 2013).
U.S. Department of Justice.
“AMBER Alert—America’s Missing: Broadcast Emergency Response.” http://www.amberalert.gov (Accessed July 2013).




American Association for Marriage and Family Therapy

The American Association for Marriage and Family Therapy (AAMFT) is the national association representing marriage and family therapists worldwide, though primarily in the United States and Canada. Originally referred to as the American Association of Marriage Counselors when it was founded in 1942, the AAMFT has maintained its primary focus to address the needs and challenges of couple and family relationships. Several goals guide the association’s work: (1) to increase the understanding of changing patterns in couples and families; (2) to facilitate research on couple and family relationships and best clinical practices; (3) to provide education to couples, families, and educators regarding family patterns and theory development; and (4) to develop standards for clinical training in order to ensure that professionals and clinicians are prepared to meet the public’s needs.

Annual Conference and Clinical Institutes

The AAMFT hosts an annual week-long national conference that unites the leading professionals within the field. This conference provides continuing education for working clinicians, and presents new ideas in research regarding clinical work with couples and families. Each conference highlights a specific theme in marriage and family therapy, with past years focusing on the science of relationships (2011), the evolving roles of women (2012), and raising vibrant children (2013). The AAMFT also provides additional opportunities for continuing education in the form of clinical institutes in the summer and winter each year. These workshops cover a variety of topics, such as a refresher course for clinical supervisors, treating sexual concerns in couples, substance abuse by adolescents, and family play therapy. The association also provides online training opportunities that allow clinicians to expand their knowledge of family therapy via the Internet.

Marriage and Family Therapists

Distinct from clinicians in similar fields such as social work and psychology, marriage and family therapists are professionals trained to address and diagnose mental health issues and disorders in the context of couples and families. These therapists are often guided by family systems theory, which posits that all members of a family system are interconnected and that the behavior or needs of one member impact the entire family system. According to the AAMFT’s Web site, marriage and family therapy is brief and solution-focused and guided by achievable therapeutic goals developed with the end of therapy in mind. Although licensure requirements vary by state, clinicians licensed in marriage and family therapy are required to have a master’s or doctoral degree, must train for at least two years in a post-degree supervised clinical setting, and must pass a national licensure examination managed by the Association of Marital and Family Therapy Regulatory Boards. Once licensed, marriage and family therapists must complete some form of continuing education (e.g., workshops, trainings, or readings) each year in order to remain up to date with current empirical findings and ethical practices in the field. Marriage and family therapy is recognized—along with psychiatry, psychology, and social work—as one of the core mental health professions.

Journal of Marital and Family Therapy and Family Therapy Magazines

In addition to providing continuing education opportunities, the AAMFT publishes the quarterly Journal of Marital and Family Therapy (JMFT) and provides members with free online access to it. JMFT is a peer-reviewed journal that focuses on research, theory, and practice. It describes new perspectives in marriage and family therapy while also presenting empirical support for various psychotherapeutic treatments. The AAMFT also publishes the Family Therapy magazine, which is distributed bimonthly to all AAMFT members. This publication presents articles on developments and news in marriage and family therapy as well as the legislative and economic issues affecting not only families but also the clinicians and therapists working with them.

Laura M. Frey
Jason D. Hans
University of Kentucky


See Also: American Family Association; American Family Therapy Academy; Family Counseling; Family Therapy.

Further Readings
American Association for Marriage and Family Therapy. http://www.aamft.org (Accessed May 2013).
Gladding, S. Family Therapy: History, Theory, and Practice, 5th ed. New York: Pearson, 2010.
Lebow, J. L. “Listening to Many Voices.” Family Process, v.21 (2012).
Professional Examination Service. Association of Marital and Family Therapy Regulatory Boards (AMFTRB). http://www.amftrb.org (Accessed May 2013).

American Family Association

The American Family Association (AFA) is a Christian nonprofit organization advocating socially conservative positions in American culture and government. It employs multiple platforms, including print and Web publications, a radio network, and a news division, to oppose obscenity and indecency, gay rights, and religious pluralism.

History and Platforms of the AFA

Founded in 1977 by Donald E. Wildmon, a Methodist pastor from Mississippi, the AFA’s original name was the National Federation for Decency, reflecting its early focus on pornography and obscenity. In 1988, the name change to the American Family Association reflected the organization’s broadened scope in taking on other issues in the so-called culture wars. Under Wildmon, the AFA became one of the most active organizations of the religious right. In 2010, Wildmon left his position, and his son Tim Wildmon became president of the organization. In 2013, the AFA claimed a circulation of more than 180,000 subscribers to, and more than two million online viewers of, its flagship publication, the AFA Journal, which reviews books and films,
summarizing plots and indicating areas of potential concern for Christian viewers, such as alcohol use, violence, or immodest dress.

The AFA is affiliated with American Family Radio, one of the nation’s largest Christian radio networks, with approximately 200 radio stations. Programming includes sermons and Bible commentary, as well as sports commentary and financial advising, but the largest portion of programming focuses on politics. Similarly, the AFA news source, OneNewsNow, serves as a political voice for the organization and produces audio newscasts and a daily digest of news stories, editorials, and opinion columns. Such stories are often cited by other religious right outlets. By educating its audience about current events that it views as challenging Christianity’s primary place in American culture, the AFA inspires citizens to contact their elected officials, vote, boycott objectionable companies, and buy products from companies that share AFA values. For example, the Web sites OneMillionMoms and OneMillionDads encourage conservative Christians to pressure companies to align their products and advertising with AFA positions.

AFA Positions

The AFA was founded to oppose what the group sees as indecency, obscenity, and pornography. Campaigns include efforts to defund the National Endowment for the Arts because of the explicit content of some works. The AFA also pressures booksellers and convenience stores to stop selling pornography. Through its Web sites and the AFA Journal, the organization encourages viewers to boycott television shows (and their advertisers) that have an “anti-Christian” message. These shows have included Saturday Night Live, NYPD Blue, Ellen, Desperate Housewives, Preachers’ Daughters, and The New Normal for reasons ranging from obscenity to adultery and the normalization of same-sex relationships. Alternatively, the AFA encourages members to purchase entertainment produced by conservative Christians.
AFA members boycott films and television shows with gay characters and oppose “politically correct” efforts to show respect for gay people in education and the media. For example, the AFA encourages parents to keep children home from school on Harvey Milk Day, a California state
holiday commemorating the assassinated politician who was gay, and Mix It Up Day, when students are encouraged to eat lunch with students outside their cliques. Most notably, Bryan Fischer, the AFA’s director of issue analysis, has voiced some of the most hostile anti-gay language espoused among religious right leaders, arguing that gay rights activists are akin to Nazis, suggesting that children of gay parents be removed to homes with straight parents, and supporting AIDS denialists. Consequently, the Southern Poverty Law Center (SPLC) labeled the AFA a hate group in 2010. AFA officials denied that charge, stressing that they are a beneficial resource for gay people because they encourage homosexuals to stop engaging in same-sex contact.

As part of its anti-gay activism, the AFA boycotts companies that advertise in gay magazines or that otherwise market to gay clients or customers, use gay spokespeople, participate in the National Gay and Lesbian Chamber of Commerce, and provide employment benefits for same-sex partners. Targets have included Disney, McDonald’s, the Ford Motor Company, Home Depot, and JC Penney.

The AFA opposes what it sees as increasing secularization in American public life, including the removal of crèches and Ten Commandments displays from public settings, the end of mandatory school prayer, and efforts in business and government to minimize the use of the word Christmas during the holiday season. For example, the AFA ranks companies according to how frequently and specifically they reference Christmas in their print and broadcast advertising and store displays. Companies labeled as anti-Christmas and targeted for boycott include Office Depot, Old Navy, Radio Shack, and Victoria’s Secret. The AFA also opposes the increasing visibility of those outside of Christianity in American life.
The AFA’s anti-Semitic claims, including propagation of the stereotype that Jewish people control American media, have prompted consternation from the Anti-Defamation League and the American Jewish Congress. Frequently, the AFA has voiced opposition to non-Christians’ participation in civic life. For example, in 2007, the AFA urged Congress to require all elected officials to be sworn in using a Bible upon the election of Keith Ellison, the first Muslim elected to Congress. That year, the U.S. Senate session was opened in prayer by a Hindu
priest for the first time, which the AFA warned was in defiance of the national motto “In God We Trust” because Hinduism is polytheistic, and thus would necessitate trust in “gods.” Fischer has gone so far as to argue that Islam is not protected by the First Amendment, and so American Muslims have a privilege, extended by American Christians, to practice their religion, but no actual right to do so.

Rebecca Barrett-Fox
Arkansas State University

See Also: Christianity; Evangelicals; Family Research Council; Family Values; Focus on the Family; Protestants.

Further Readings
Tepper, Steven J. Not Here, Not Now, Not That!: Protest Over Art and Culture in America. Chicago: University of Chicago Press, 2011.
Wilcox, Clyde and Carin Robinson. Onward Christian Soldiers? The Religious Right in American Politics, 4th ed. Boulder, CO: Westview Press, 2010.
Williams, Daniel. God’s Own Party: The Making of the Christian Right. Oxford: Oxford University Press, 2009.
Winbush, Don and Donald E. Wildmon. “Interview With Rev. Donald E. Wildmon: Bringing Satan to Heel.” Time (June 19, 1989).

American Family Therapy Academy

The American Family Therapy Academy (AFTA) is a nonprofit organization of leading family therapy teachers, clinicians, policymakers, and social scientists dedicated to advancing systemic thinking and practices for families. Founded in 1977 as the American Family Therapy Association, in 2014 the organization had over 800 members. The diverse membership agrees to uphold AFTA’s goal of clinical and academic excellence within a social justice framework. The AFTA Web site states that the organization “holds a core commitment to equality, social responsibility and justice with attention
to marginalized and underserved groups, and fosters policies that support the welfare of families and children and serves as a context for all concerned with the health of the family.”

Membership

Founding AFTA members, including Murray Bowen, Carolyn Attneave, and James Framo, created AFTA as an avenue to explore the biological, psychological, relational, and sociocultural dimensions of family therapy. In 1979, the charter meeting recognized that relationships are at the center of the family therapy field, and AFTA was committed to systemic approaches to family therapy. Current members invited to join AFTA must have at least five years of post-degree experience in family research, teaching, or clinical work, and must have made recognized contributions to the field. AFTA’s early career membership category allows young professionals to join after two years of post-degree work, and a student membership category allows those pursuing a terminal degree in family therapy to join. Members convene at an annual meeting in June for workshops, presentations, and collegial discussion. A biannual clinical research meeting held in locations across the United States allows members to present and discuss the most current, innovative work in family therapy. In addition, AFTA members contribute cutting-edge research and articles to the AFTA Monograph, published once a year, and members keep in touch with each other and with the organization through the AFTA Update, which is published quarterly.

Committees

AFTA’s family policy committee addresses the impact of local, state, and national policy on families. This committee has addressed how the U.S. federal budget, the marriage resolution, and reproductive rights legislation affect families and the working poor. AFTA defines family broadly, including all family structures with regard to race, gender, and marital status.
The human rights committee addresses concerns of abuse, manipulation, and exploitation of at-risk or marginalized people. AFTA believes that a human rights framework is essential to promoting mental health and psychosocial well-being. This committee has crafted and presented
statements on immigration, torture, prison abuses, working rights, and other pertinent issues. AFTA’s committee on cultural and economic diversity further focuses attention on immigration issues, working rights, and the status of marginalized workers in the United States. Particular attention has been given to infringements upon the Fourth Amendment right protecting individuals from illegal search and seizure.

In 2013, AFTA released an immigration position statement decrying current U.S. immigration policies and their impact on families. Calling the policy “restrictive, harmful, and detrimental to families,” AFTA identifies how procedures such as separating families, forbidding medical care and education, and garnishing earned wages are dehumanizing and a violation of basic human rights. The position statement further calls for policymakers and community members to advocate for fair and humane treatment of immigrants and to monitor current policies concerning immigration. AFTA recommends policies that protect immigrant families and their children, expand opportunities for legalized citizenship, and work to decriminalize the status of those currently residing in the United States.

Commitment to Research and Clinical Excellence

AFTA maintains a commitment to research and clinical improvement in family therapy by promoting research and discussion of next-generation thinking. In 2012, AFTA released a statement clarifying their concerns with the revision process and the content of the Diagnostic and Statistical Manual of Mental Disorders-V (DSM-V), the reference work used to determine and support mental health diagnoses and treatment options.
In this document, they indicate that “the current revision of the DSM continues a long history of ignoring research and excluding vital contributions of non-psychiatric mental health disciplines resulting in invalid diagnostic categories and treatment protocols.” The statement expresses concerns about the use of a medical model in the DSM-V and the lack of consideration of family and sociocultural contexts. It also asserts the belief that the increased focus on organic causes of mental illness leads to an increased use of medication as the treatment of choice over other psychological
or systemic approaches. AFTA maintains that the diagnostic criteria in the DSM-V demonstrate a gross neglect of years of family and systemic research.

Annual awards recognize the clinical and research excellence of AFTA members. These awards include the Distinguished Contribution to Family Systems Research Award, the Distinguished Contribution to Family Therapy Theory and Practice Award, the Innovative Contribution to Family Therapy Award, the Lifetime Achievement Award, and the Distinguished Contribution to Family Justice Award. These awards recognize those members who demonstrate through their professional and personal qualifications a commitment to AFTA’s position on systemic, socially constructed family therapy within a social justice framework.

Marcie M. Lechtenberg
Sandra M. Stith
Kansas State University

See Also: Bowen, Murray; Family Therapy; Parent Education; Systems Theory.

Further Readings
American Family Therapy Academy. http://www.afta.org (Accessed December 2013).
Gladding, S. Family Therapy: History, Theory, and Practice, 5th ed. Upper Saddle River, NJ: Pearson, 2010.
Lebow, J. L. “Listening to Many Voices.” Family Process, v.21 (2012).

American Home Economics Association

Home economics as a profession developed from conferences at Lake Placid, New York, beginning in 1899. The American Home Economics Association was founded in 1909 by Ellen Swallow Richards, a pioneering chemist and professor who became the organization’s first president. Richards was the foremost female industrial and environmental chemist in the United States in the 19th century, and pioneered the field of home economics. She was the first female graduate and female professor
at the Massachusetts Institute of Technology. She was the first woman accepted to any school of science and technology and the first American woman to earn a degree in chemistry. Richards was also an activist for consumer education; nutrition; child protection; industrial safety; public health; career education; women’s rights; purity of air, food, and water; and the application of scientific principles to family life. Richards was a pragmatic feminist and founding ecofeminist, who believed that women’s work within the home was a vital aspect of the economy. Richards combined the idea that science is capable of making human existence better with her desire to improve women’s education and her opinion that the home was the most important place for that reform. This resulted in a rapid development of home economics courses in public schools and colleges. The purpose of such courses was to teach women how to prepare food and assume the responsibility for the care of the house and family.

In 1994, the American Home Economics Association changed its name to the American Association of Family and Consumer Sciences (AAFCS). This retooled organization is a professional association devoted to advancing the family and consumer sciences. Its purpose is to provide for and promote the professional development of family and consumer sciences students, whose work will improve the quality of life for all families. In addition to addressing the professional needs of home economics professors and instructors, the AAFCS also includes government, business, and nonprofit organizations. Members include educators, administrators, and managers, human service and business professionals, researchers, community volunteers, and consultants. These individuals provide research-based knowledge about the topics of everyday life, including human development, personal and family finance, housing and interior design, food science, nutrition and wellness, textiles and apparel, and consumer issues.
The AAFCS is part of the Consortium of Family Organizations and makes recommendations to the Vocational Political Action Committee. In 1985, the AAFCS (then the American Home Economics Association) joined the Home Economics Public Policy Council (HEPPC), which assists in the development of legislation that impacts issues concerning home economics.


Brief History

The Morrill Act of 1862 initiated domestic science as an area of study at the nation’s land grant colleges because it facilitated the act’s mandate to create educational institutions that furthered agricultural research. Domestic science sought to educate women who were instrumental in running their family’s farming households. The term home economics was coined during the first Lake Placid Conference in 1899. Richards and her contemporaries met over the next 10 years at subsequent conferences to explore the latest advances in the profession. Their goal was to form an educational and scientific association that would formalize the profession. Thus, in January 1909, they founded the American Home Economics Association. These women felt that it was important for students in primary and high schools to be offered courses that would open up professional opportunities for women.

Throughout the 20th century, many federal laws were passed that contributed to establishing the discipline of home economics. The Bureau of Home Economics Act of 1927, the George–Deen Act of 1936, and a number of vocational acts throughout the 1960s and 1970s all issued funding for research in this field that promoted increased opportunities and rights for women’s education.

In the Twenty-First Century

Decades after the founding of this professional organization, family and consumer science professionals continue to practice in many venues, including colleges, universities, and outreach and cooperative extension programs. Nutritionists, consumer specialists, and housing and textile specialists continue to provide for a better quality of life for families. The organization recognizes the need to increase the understanding and appreciation of the field among media, legislators, and the general public.
Joanne Ardovini
Metropolitan College of New York

See Also: Breadwinner-Homemaker Families; Child Care; Domestic Ideology; Education, College/University; Family Life Education; Feminism; Feminist Theory; Food Shortages and Hunger; Frozen Food; Gender Roles; Home Economics; Homemaker; Obesity; Parenting Education; Pure Food and Drug Act of 1906; Technology.

Further Readings
Richards, Ellen. The Efficient Worker. Boston: Health-Education League, 1908.
Richards, Ellen. First Lessons in Food and Diet. Boston: Whitcomb & Barrows, 1904.
Richardson, Barbara. “Ellen Swallow Richards: Humanistic Oekologist, Applied Sociologist, and the Founding of Sociology.” American Sociologist, v.33 (2002).
Slavin, Sarah. U.S. Women’s Interest Groups: Institutional Profiles. Westport, CT: Greenwood, 1995.

“Anchor Babies”

The first known use of a term related to anchor babies was in a 1987 Los Angeles Times Magazine article by Mark Arax, who wrote about a study involving “troubled southeast Asian teens . . . either ethnic Chinese, Vietnamese, or a blend” called “anchor children,” who were “saddled with the extra burden of having to attain a financial foothold in America to sponsor family members who remain[ed] in Vietnam.” By 2005, the term anchor babies was used to describe another type of American immigration—children born to illegal immigrants (especially those coming over the border from Mexico) in the United States or to those who legally visit the United States for the purpose of giving birth during that visit. The second group may not want to stay and live in the United States, but they see an advantage in having their child gain automatic U.S. citizenship.

Anchor babies is considered a derogatory term, used most often by political conservatives concerned about illegal immigration in the United States. When the American Heritage Dictionary published this term in its 2011 edition, it was criticized by some groups for not explicitly stating that it is a derogatory expression. The editors reconsidered, and the online version of the dictionary notes that the term is considered offensive.

Citizenship Requirements

The controversy stems from the Fourteenth Amendment of the U.S. Constitution, which reads: “All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the state wherein they reside.

No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.” The Fourteenth Amendment was passed after the American Civil War, primarily to establish the principle that former slaves were citizens and entitled to every right of such citizenship, and it is codified in 8 USC Sec. 1401(a). Some interpret this amendment to state that anyone born in the United States receives automatic citizenship; others do not interpret it this way and want the policy of automatic citizenship revised. Still others agree that the Fourteenth Amendment offers citizenship but believe that in the 21st century the United States is becoming too lenient with this policy, and advocate instead for ending what is known as birthright citizenship. Enacting this would entail overturning the Fourteenth Amendment by adding another amendment. Finally, others argue that illegal immigrants are not even “subject to the jurisdiction” of the United States and thus the Fourteenth Amendment does not apply to them.

The U.S. Supreme Court defended the Fourteenth Amendment in the late 19th century, tracing the history of the statutory and common law regarding jus soli (Latin meaning “right of the soil”) in England and the United States. Jus soli means that citizenship is determined by the place of birth. The case of United States v. Wong Kim Ark, 169 U.S. 649 (1898), involved a 21-year-old man of Chinese descent born in San Francisco. He was denied entry back into the United States after a year-long visit to China, even though he had legally been born to Chinese parents working in the United States and had lived in the United States for 20 years.
The Chinese Exclusion Act of 1882, extended by the Geary Act of 1892, denied entry to Chinese immigrants of the labor class at this time, and though Wong Kim Ark had worked as a laborer, the Supreme Court held that because of the Fourteenth Amendment his citizenship could not be denied. He was not an immigrant; he was a citizen who had been born in California.

Useful Term or Slur?

As a term, anchor baby is used by some in the media and by some policymakers when describing a child born in the United States to non-U.S. citizens, whether those parents are legally in the United States or not. It is a term sometimes used during congressional discussions of bills or proposals to remove birthright citizenship. Statistics on how many illegal immigrants have babies in the United States are unreliable because there are no dependable statistics on how many illegal immigrants reside in the country. Therefore, even though some policymakers and newscasters state that thousands of parents have illegally entered the country just to have a baby, there is great skepticism about this number. Parents of such children do not automatically earn citizenship, but when a child reaches the age of 21, he or she can sponsor the parents to live and work legally in the United States. Controversy over the term anchor babies remains heated; some find it pertinent to the situation, and others find it an offensive slur. Several bills have been introduced in Congress, such as H.R. 140 and S. 723, proposed in 2011, and at least 13 states have had similar laws proposed.

[Photo caption: Many Vietnamese were rescued while fleeing their home country and immigrated to the United States. Their babies were referred to by the derogatory term anchor babies.]


These bills address concerns that children born in the United States to noncitizen parents can, at age 21, become the portal to legal status for those parents. As of late 2013, however, none of these measures had become law, and the birthright clause concerning the children some call anchor babies remains a divisive issue in the debate over illegal immigration in the United States.

Antoinette W. Satterfield
U.S. Naval Academy

See Also: Central and South American Immigrant Families; Immigrant Children; Immigration Policy; Mexican Immigrant Families; Migrant Families.

Further Readings
Arax, Mark. “A Profile of a Lost Generation.” Los Angeles Times Magazine (December 13, 1987).
Ignatow, Gabe. “New Media and the ‘Anchor Baby’ Boom.” Journal of Computer-Mediated Communication, v.17 (2011).
Lacey, Marc. “Birthright Citizenship Looms as Next Immigration Battle.” New York Times (January 4, 2011). http://www.nytimes.com/2011/01/05/us/politics/05babies.html (Accessed November 2013).
O’Neal, Nathan. “‘Anchor Baby’ Phrase Has Controversial History.” ABC News (July 2010). http://abcnews.go.com/Politics/anchor-baby-phrase-controversial-history/story?id=11066543 (Accessed August 2013).

Annie E. Casey Foundation

The Annie E. Casey Foundation is a private organization devoted to assisting disadvantaged children. This nonprofit is one of the leading foundations in the United States, with assets exceeding $2 billion. Toward its goal of forging more desirable futures for poor children, the foundation works on public policy, social services, and building supportive communities to promote the health and well-being of at-risk children and their families. One facet of its work is grant-giving, totaling nearly $200 million annually, aimed at organizations that offer creative methods of addressing inequities and better meeting children’s needs. Its focus is on children living in the United States. The foundation was begun by James E. (Jim) Casey, a founder of the United Parcel Service (UPS), and his siblings. It was named in memory of their mother, Annie E. Casey, who struggled to raise them after she was widowed when they were young.

History of the Foundation

In 1948, Jim Casey, along with his brothers George and Harry and their sister Marguerite, founded the charity in Seattle, Washington, to honor the memory of their mother and her child-rearing efforts as a single widowed mother. Through witnessing their mother’s struggles, the siblings came to understand and appreciate the need for support and resources for disadvantaged children and families. To help with the family income, Casey began making deliveries at age 11. In 1907, while he was still a teenager, he started a message delivery service named the American Messenger Company. Because household telephones and automobiles were uncommon at the time, people often relied on messenger services to relay information for them. Over time, American Messenger grew into today’s UPS. Casey became considerably wealthy, and the foundation he launched was undergirded by his belief that children’s futures are shaped by support from both the family and the community. Toward that end, his goal was to ensure that disadvantaged children had access to support, guidance, and role models that could help them become successful. During its first two decades, the foundation’s primary project was providing funding for a camp for at-risk youth. After Casey stepped down as the CEO of UPS, he turned his attention to the foundation, striving to increase its reach and impact.
Part of his expansion of the organization included Casey Family Services, a direct services program that operated for many years. Casey died in 1983 and left considerable funds to the organization, which allowed its grant-giving program to expand. By the mid-1990s, the Annie E. Casey Foundation had moved its headquarters to Baltimore, Maryland.




Example Projects

The central features of many of the Casey Foundation’s projects include ensuring the well-being of children, enhancing education, facilitating financial security, fostering community change, and advancing rehabilitation for juvenile offenders. Through its grants initiatives, it supports longitudinal projects with multipronged outcomes. Several of these projects have shown considerable promise. Two cities, Baltimore and Atlanta, have received considerable focus and support. In each of these cities, grant funding combined with local initiatives has fueled community-oriented solutions for underprivileged children and families. Employing the Evidence2Success model beginning in 2000, the foundation has been assessing need and determining methods to help urban communities succeed. A key facet of this program is building collaborations. The Family Economic Success programs are devoted to building strong families that can remain resilient in the face of difficulties. These programs help clients secure employment, learn financial management, and plan for the future. The Kids Count project is concerned with the welfare of children nationwide and involves data collection and analysis, in addition to advocating for policies that enhance children’s lives. This project also ranks the states on issues of child well-being. Beyond producing an updated annual volume, Kids Count also maintains a data center. The Casey Foundation’s Child Welfare Strategy Group, formed in 2000, has been working to transform the nation’s child welfare programs. Also, for more than two decades, the foundation has helped rehabilitate juvenile offenders through the Juvenile Detention Alternatives Initiative, in which incarcerated youth participate in programs to promote adult success, reduce recidivism, and develop positive life skills. Since its formation in 1948, the Annie E.
Casey Foundation has expanded its mission and programs, reaching needy children and their families and enhancing their lives.

Joy L. Hart
University of Louisville

See Also: Childhood in America; Family Life Education; Foster Care; Poverty and Poor Families; Working-Class Families/Working Poor.


Further Readings
Annie E. Casey Foundation. http://www.aecf.org (Accessed August 2013).
Family to Family California. “Who Was Jim Casey?” http://www.f2f.ca.gov/res/pdf/WhoWasJimCasey.pdf (Accessed August 2013).
Lukas, Paul and Maggie Overfelt. “UPS—United Parcel Service: James Casey Transformed a Tiny Messenger Service Into the World’s Largest Shipper by Getting All Wrapped Up in the Details of Package Delivery.” CNN Money (April 1, 2003). http://money.cnn.com/magazines/fsb/fsb_archive/2003/04/01/341024 (Accessed August 2013).

Anorexia

Since the early 20th century, family dynamics have been implicated in both the development and maintenance of eating illnesses such as anorexia nervosa, a serious disorder with both psychological and physical consequences that affects males and females of all ages. Research has also discovered that numerous other factors influence eating disorders, including genetics, birth complications, personality characteristics like perfectionism or negative self-evaluation, neurocognitive functioning, and comorbid disorders such as obsessive-compulsive disorder. Because of the complex interactions of multiple factors and the difficulty of determining temporal precedence, researchers no longer believe that family characteristics cause eating disorders, but rather that they are related to their maintenance. Families, however, especially in the case of adolescents, are a critical part of anorexia treatment, and they can play a meaningful role in providing support for those who struggle with the disorder. Diagnostic features of anorexia include (1) restriction of food intake relative to requirements, leading to less-than-normal body weight; (2) intense fear of weight gain, or behavior interfering with weight gain even when underweight; and (3) disturbed views of weight and body shape. Two types of anorexia have been identified: the restricting type, characterized by intense dieting, and the binge-eating/purging type, in which binge eating or purging behaviors (such as abusing diuretics) occur. The one-year prevalence of anorexia is 0.4 percent for young women,

with a 10-to-1 female-to-male ratio. As a result of medical complications and suicide, anorexia has the highest mortality rate of any psychiatric disorder.

Family Risk Factors and Anorexia

Early theories proposed that anorexia arose in families that were achievement-oriented, concerned with appearance, repressed their anger, or had high parental conflict. It was believed that children longed for parental approval and developed a poor self-image to divert attention from their parents’ problems. To compensate, the child would strive for physical perfection or become overly controlling. These characterizations of families evolved from case reports with little quantitative research support. Subsequent research revealed other factors related to the development of anorexia that are also related to the development of other psychiatric disorders. For example, correlational and retrospective studies have shown that youth with anorexia report families with a high degree of emotional involvement, a low degree of expressed emotion, and mothers who are overly protective or concerned. Researchers now believe that these factors interact with biologically based variables (e.g., genes and cognitive function) that then determine specific outcomes, such as anorexia rather than a different psychiatric illness like depression. Family protective factors have also been identified, including highly structured family environments and frequent family meals. However, most studies on anorexia and the family suffer from small sample sizes, retrospective reporting, diverse assessment methods, and a lack of longitudinal designs. Genetics, cognitive factors, and cultural contexts also contribute to eating disorders. As with other psychiatric conditions, genetic vulnerabilities appear to combine with early environmental factors to predispose individuals to elevated risk of anorexia.
Studies of identical twins have found that genetic factors account for between 50 and 83 percent of the variability in the presentation of anorexia. Chromosomal regions and genes are also involved in neurological systems (e.g., neurotransmitter function) that interact with other factors and emerge behaviorally as problematic eating. Other cognitive deficits (e.g., mental rigidity and an overly detailed focus) and emotional functioning deficits are also implicated. In addition, sociocultural factors, such as the idealization of thinness in society and the media, serve as a context for body image

disturbance, which is a key symptom of eating disorders.

Family Involvement in Treatment

Research suggests that families of both adolescent and adult patients should be involved in the intervention and treatment of eating disorders. Strong research evidence supports family-based treatments, particularly for adolescents with anorexia. A meta-analysis of research studies indicates that family-based treatments for adolescents using the Maudsley method, outlined in J. Lock and D. Le Grange’s family-based treatment manual, show similar short-term results as individual therapies but are likely superior over the long term. The Academy for Eating Disorders holds that families are critical to adolescent treatment, unless there is a clinical reason that family should not be involved. Controversy remains over the specific mechanisms by which families assist children recovering from anorexia, so instead of a “one-size-fits-all” approach to treatment, it is helpful for families to create a specialized treatment plan with physicians, psychologists, nutritionists, and other care providers. When children struggle with a life-threatening illness, family relationships are strained, and parents and siblings may become distressed, leading to the development of new and unhealthy patterns of interaction. Treatment can be disruptive to families and may involve sending the ill family member away from home, or families may need to patch together diverse supportive groups to find care. In addition, engaging multiple care providers or residential programs can be expensive and difficult to coordinate. Community support is not always available to families, and insurance may not cover costs. In some states, legal intervention may be required for adult patients to receive care, which could further disrupt family relationships. Families are encouraged to pursue multiple avenues for treatment, including looking at insurance programs and seeking out research studies.
It makes sense for families to weigh their resources and demands as they consider involvement in a family member’s recovery from anorexia. Families need to educate themselves on risk factors, encourage resilience and positive approaches to eating and exercise, and help children develop strong coping skills. Perhaps most importantly, parents must recognize the seriousness of




this illness and have faith in the efficacy of treatments. They can consult the Academy for Eating Disorders for additional information on treatment options and to find specialist providers across the country.

Shannon Casey
California School of Professional Psychology, Alliant International University

Danielle Colborn
Stanford University

See Also: Bulimia; Family Counseling; Parenting.

Further Readings
Academy for Eating Disorders. http://www.aedweb.org (Accessed November 2013).
Konstantellou, A., M. Campbell, and I. Eisler. “The Family Context: Cause, Effect or Resource.” In A Collaborative Approach to Eating Disorders, J. Alexander and J. Treasure, eds. New York: Routledge/Taylor & Francis, 2012.
Le Grange, D., J. Lock, K. Loeb, and D. Nicholls. “Academy for Eating Disorders Position Paper: The Role of the Family in Eating Disorders.” International Journal of Eating Disorders, v.43 (2010).
Lock, J. and D. Le Grange. Treatment Manual for Anorexia Nervosa: A Family-Based Approach, 2nd ed. New York: Guilford Press, 2012.
Maine, M. D. “Eating Disorders and the Family: A Biopsychosocial Perspective.” In Handbook of Families and Health: Interdisciplinary Perspectives, D. R. Crane and E. S. Marshall, eds. Thousand Oaks, CA: Sage, 2006.
Treasure, J., G. Smith, and A. Crane. Skills-Based Learning for Caring for a Loved One With an Eating Disorder: The New Maudsley Method. New York: Routledge/Taylor & Francis, 2007.

Arranged Marriage

Marriage, a pair-bond relationship reinforced by culturally specific ritual, is an essential institution in every society. Marriages across the world share a number of essential elements, including reciprocal economic and sexual rights and obligations. However, one way that marriages differ is in how they


are formed. In the United States and most of the Western world, physical attraction and love usually form the basis for an individual’s decision about whom to marry. But elsewhere in the world, marriages are typically arranged. Arranged marriage is a process of mate selection in which parents choose a partner for their child to marry. Though other family elders or religious leaders may also play a role in finding and approving a potential partner, most commonly it is the parents who have the most influence in the process. Parents may begin looking for appropriate mates and negotiating the arrangements when their child reaches marriageable age, during childhood, or sometimes even before their child is born. Depending on the cultural tradition, the child may have a substantial voice in the selection, or none at all. In some societies, spouses may not even meet until the day of the wedding ceremony. However, most cultures with a tradition of arranged marriage allow their children to reject perceived mismatches, and the young adults are encouraged to meet several times before the ceremony to determine their compatibility. This process results in what is sometimes called a “vetted” marriage.

Motivations Underlying Arranged Marriages

Instead of individuals choosing a partner for themselves based on love or attraction, the central goal of arranged marriage is for parents to match a son or daughter to a spouse who possesses traits and family characteristics that are culturally, financially, and religiously compatible and desirable. This is not to say that the emotions of love, passion, romance, and longing are unknown in these cultures. Indeed, the idea of falling in love is quite common. Rather, many parents consider romantic love or physical attraction to be a poor basis for the marriage of their children. Instead, they consider a broader range of sociocultural traits to be a better indicator of a successful match.
The cultural tradition of arranged marriage exists because the parents see the needs of the family as a whole as superseding those of individual children. Toward that end, the parents believe that they can make better choices based on their greater experience and foresight. From the parents’ perspective, marrying one’s children into good families establishes and maintains important social and economic alliances with families that possess wealth,

72

Arranged Marriage

power, and status. Children are expected to defer to their parents’ greater wisdom. Proponents believed arranged marriages have several advantages over “love marriages.” First, because the families have been instrumental in orchestrating the marriage, they are supportive of the future partner and the union in general. Second, the compatibility of life goals are considered when determining suitable partners, leading to marriages that are grounded in similar outlook and background, instead of emotional or physiological attraction. Finally, arranged marriages focus on family connections, the commonalities of the prospective partners, and their potential success in raising a family and building a life together. Though the emotional connection to one another is an important quality to develop during the marriage, it is not the primary focus of the marriage; love is neither an important motivation for getting together, nor a necessity for starting a family. This is not to say, however, that arranged marriages are loveless; many partners in arranged marriages develop strong bonds of love, affection, loyalty, and mutual respect over time. But these are regarded as happy outcomes of a successful marriage, not the reasons for their inception. Arranged Marriage in Historic and Global Contexts In the United States, love is the primary criterion that individuals use for choosing spouses, a practice known as marriage by free choice. However, there are groups within the United States for whom arranged marriage is a viable option. For example, immigrant families may continue to arrange marriages for their children after settling in the United States. In addition, some religious denominations encourage family input in mate selection. The ideal of marriage based on bonds of love that is shared by most Americans is a distinctly minority view in the modern world and across history. Arranged marriage has been and remains the most common form of mate selection globally. 
Historically, nobility, royalty, and political leaders arranged marriages for their families, regularly using their children and grandchildren to make strategic alliances with families or countries. These alliances had nothing to do with romance or the potential for compatibility or companionship; instead, they were politically important ties binding countries, kingdoms, and families together.

Unequal Marriage, a 19th-century painting by Russian artist Pukirev, depicts an arranged marriage where a young girl is forced to marry a much older man against her will.

In the 21st century, arranged marriages remain common in Asia, Africa, and the Arab world, and though they have declined in frequency because of increased Western influence, they remain an important part of those cultures. Many of these arranged marriages occur within the extended family and often between cousins, a form known as consanguineous marriage. Given the extent of global communication and its influence, it is unclear whether arranged marriage will remain the most common form of mate selection in the world in the future.

Mari Plikuhn
James J. Berry
University of Evansville

See Also: Courtship; Dowries; Extended Families; Immigrant Families; Islam.

Further Readings
Ghimire, Dirgha J., William G. Axinn, Scott T. Yabiku, and Arland Thornton. “Social Change, Premarital Nonfamily Experience, and Spouse Choice in an Arranged Marriage Society.” American Journal of Sociology, v.111/4 (2006).

Hirsch, Jennifer S. and Holly Wardlow, eds. Modern Loves: The Anthropology of Romantic Courtship and Companionate Marriage. New York: Macmillan, 2006.
Myers, Jane E., Jayamala Madathil, and Lynn R. Tingle. “Marriage Satisfaction and Wellness in India and the United States: A Preliminary Comparison of Arranged Marriages and Marriages of Choice.” Journal of Counseling and Development, v.83/2 (2005).

Artificial Insemination

Artificial insemination (AI) is a procedure used to address the challenges that women sometimes face in becoming pregnant. Urban myths suggest that women in the United States are facing growing fertility challenges, but the truth is that social, educational, academic, and professional factors have changed pregnancy timelines for many women. Some women have fertility issues, and others who are older than the average age of conception may need medical intervention to facilitate pregnancy. AI is a minimally invasive method that has helped many who might otherwise not be able to conceive naturally to become parents. Men can also have fertility issues, and AI can help them become biological fathers as well. Ideally, AI addresses a number of fertility issues, often resulting in conception and childbirth. The social construct of the American family has long been perceived as a married couple with children. Some religions accept infertility as grounds for divorce; in some cases, a marriage was not considered sealed until an heir was born. As an alternative for couples who were not able to conceive, options such as adoption and fostering were considered. Yet some couples still sought to have biological children. The first published account of AI in the United States dates to 1909, and by the mid-20th century it was regularly used in the United States and Europe. By 1949, freezing and thawing methods had improved, and AI successes increased. In 1954, an Illinois court ruled that babies conceived with donor sperm during artificial insemination were illegitimate. This position was rejected by most states, and by 1960, approximately 50,000 babies had been born using the AI method.


Artificial Insemination Defined

Artificial insemination is the common name for a process that directly inserts sperm into a woman’s cervix, uterus, or fallopian tubes. The result is a shorter, more direct trip for the sperm to reach the egg, one that avoids possible obstructions. A variety of fertility issues can be addressed through this method. Men with low sperm counts or mild male factor infertility can become biological fathers because of AI’s ability to give sperm a “shorter trip” that is more likely to lead to fertilization than natural insemination. To do this, a sperm sample is “washed” by removing dead cells and cellular debris, leaving the best and fastest sperm, which will not induce uterine cramping while also improving fertility potential. Women with reproductive organ abnormalities or endometriosis can also benefit from AI. Some women have “unreceptive cervical mucus,” which prevents the sperm from successfully traveling to the egg; AI bypasses this cervical mucus. Sometimes doctors who cannot identify specific fertility problems will suggest this method. There are several different AI techniques. The most common and most effective is intrauterine insemination (IUI), in which concentrated samples of motile sperm are placed directly into the uterus. While IUI itself does not create a multiple pregnancy, the use of medications to stimulate follicles and release multiple eggs can increase the chance that more than one egg will become fertilized. AI also allows individuals who are HIV positive to have children without the risk of passing the virus on to them. Sperm washing for an HIV-positive donor, as well as in vitro medications to prevent a gestating fetus from contracting HIV, can result in an HIV-negative infant.
However, in 1990, the Centers for Disease Control and Prevention issued a recommendation against sperm washing after an HIV-negative woman became HIV positive when she was artificially inseminated with washed sperm from her husband. This recommendation has not been changed. All sperm bank donor sperm is tested for HIV by law to avoid infecting the woman and the future fetus.

Results

Babies conceived using AI can be carried to term with minimal medical intervention. For some families, the technique is preferable to adoption, which can be more expensive. Physiologically, families


can bear biological children, or they can use donor sperm for inseminating the female partner, or eggs from a surrogate female who is inseminated with the male’s sperm. Both methods are fairly common and generally accepted methods of conception. While AI itself does not cause multiple pregnancies, they are possible. Twins, triplets, and higher-order multiples may gestate together, resulting in reduced birth weights and other issues. Doctors may recommend reducing the number of fetuses to increase the viability of the greatest number. Families eager to have a child may find several embryos growing to term, creating a bigger family than anticipated. This may necessitate arrangements regarding financial assistance, medical care, child supervision, and possibly special needs.

Relevance to the American Family

American families increasingly include both heterosexual and homosexual couples who wish to become parents. In the case of gay couples, using a surrogate requires participation in an artificial insemination process. Lesbian couples may undergo AI with donor sperm. Heterosexual couples with fertility issues may find themselves undergoing turmoil in their efforts to conceive. For them, artificial insemination is the least invasive intervention, and often the first attempted before moving on to more costly or medically invasive options. Few insurance plans cover AI, which may leave couples or individuals paying for a procedure whose outcome is not guaranteed. In addition, stress and hormonal side effects from follicle-stimulating medications can introduce tension within the family, especially if pregnancy is not achieved.

Legal Considerations

Laws governing paternity vary by state. However, sperm bank donors are universally protected when their sperm is used for AI. Such donors have no responsibilities or liabilities regarding resulting offspring. Similarly, these donors have no rights or access to information about who has received their sperm.
Sometimes, an individual or couple will choose to use a known donor. In such cases, it is strongly recommended to work with an attorney to draft relevant papers to terminate the donor’s parental rights and give full custody to the prospective parents. Children conceived via donor sperm have the right in some states to access identifying

information about the donor upon reaching adulthood.

Kim Lorber
Ramapo College of New Jersey

See Also: Assisted Reproduction Technology; Fertility; Infertility; Multiple Partner Fertility; Natural Families; Parenting Plans; Prenatal Care and Pregnancy.

Further Readings
De Brucker, Michael, et al. “Cumulative Delivery Rates in Different Age Groups After Artificial Insemination With Donor Sperm.” Human Reproduction, v.24 (2009).
Ganguly, G., et al. IUI: Intrauterine Insemination. London: J. P. Medical, 2012.
Vercollone, C. F., H. Moss, and R. Moss. Helping the Stork: The Choices and Challenges of Donor Insemination. New York: Hungry Minds, 1997.

Asian American Families

In 2010, there were over 17 million Asian Americans living in the United States. However, this population is not monolithic, and the term Asian American comprises many subgroups. Asian American people include those who identify themselves as Chinese, Filipino, Indian, Vietnamese, Korean, Japanese, Pakistani, Cambodian, Hmong, Thai, Laotian, Bangladeshi, Burmese, Indonesian, Nepalese, Sri Lankan, Malaysian, Bhutanese, Mongolian, and/or Okinawan, among other nationalities. Asian Americans are among the fastest-growing ethnic groups in the United States. Literature reinforces the notion that Asian Americans value family, sometimes by perpetuating stereotypes that belie how greatly these families can vary. Public figures such as author Amy Chua, who wrote the bestselling book Battle Hymn of the Tiger Mother (2011), can misinform the public about the culture of Asian American families. Chua’s portrayal homogenizes Asian American culture in overly simplistic ways, leading to stereotypes that do a disservice to the vibrancy of the many Asian American subcultures. The reality is that Asian American families vary not only in their household and family size, but


Table 1  Number and percentage of families below poverty level in 2000, by ethnic group

Group          Number of Families   Total Population   Percentage below poverty level
Vietnamese             34,900            1,122,000            3.11
Cambodian               9,500              171,000            5.56
Laotian                 5,700              168,000            3.39
Thai                    2,100              112,000            1.88
Hmong                   8,900              169,000            5.27
Asian Indian           27,900            1,600,000            1.74
All Groups             89,000            3,342,000            2.66

Source: Christopher Thao Vang, 2010.

also in their compositional dynamics. The idea that Asian American families are all similar is highly problematic because it erases legitimate ethnic, cultural, and parenting differences within the population. To believe in the idea of “tiger mothers” is to legitimize Chua, which in turn mischaracterizes the Asian American family. Table 1 provides the number and percentage of families below the poverty level in 2000 by ethnic group. It highlights the extent to which Asian American families are heterogeneous, not homogeneous: while only 1.88 percent of Thai American families lived below the poverty level in 2000, 5.27 percent of Hmong American families did.

Divorce Within Asian American Families Compared to Other Groups

Asian American families have the lowest rates of marital divorce and the highest levels of intact families of all ethnic groups in the United States. An intact family is one in which a mother and father live in the same home. Compared to African Americans, Asian Americans appear to remain married at higher rates (see Table 2). However, these percentages are based on aggregate statistics, which leads to the incorrect belief that because Asian American culture values matrimony, Asian Americans’ divorce rates are lower than those of other groups. Cultural explanations also lead people to believe that Asian Americans’ strong families contribute to their children’s high achievement in school. The suggestion or insinuation that all Asian Americans endorse pro-family and pro-educational values

is incorrect, and can be thought of as excellent embodiment of the model minority stereotype and “ethnic gloss.” Ethnic gloss is a sociological term that means overgeneralizing racial and ethnic differences so much that it leads to homogenization. The homogenization of Asian American families, or believing that all Asians families are the same, is highly problematic. The idea of “tiger mothering” conceals Asian Americans’ true academic and familial heterogeneity. Ethnic gloss leads people to believe that all Asian American families are alike, which is not true. Not all Asian American families are high functioning and economically stable. Certain Asian American families undergo familial disintegration and turmoil when a generational or language gap comes between parents and their children. Some of these “disintegrated” families have family members who join gangs, and are eventually killed, adjudicated, or incarcerated. Consequently, the literature on Asian American families indicates that the family composition has been used to support the notion that Asian

Table 2 Percentage of couples divorced, by racial group Group

Percentage

African Americans

11.3

European Americans

9.8

Hispanic Americans

7.6

Asian Americans

3.0

Source: Christopher Thao Vang, 2010.

76

Asian American Families

American culture is a decisive explanatory factor when examining achievement. Asian American Household Income The household incomes for Asian Americans appear to be higher than they truly are because median income hides the fact that Asian American families are more likely to have more wage earners under one roof than other racial and ethnic families. According to many critical scholars and sociologists this reality casts serious doubts on research that argues that Asian Americans are model minorities who come from flourishing families. The work done by Jaime Lew in New York documents that there are lowachieving Korean Americans who live in impoverished families in the United States. According to Lew’s research, Korean American achievement is most associated to its family’s socioeconomic class. In other words, poor Korean American families will, on average, have poorer academic outcomes than wealthier Korean American families. The Coalition for Asian American Children and Families (CACF), with funding from the Ford Foundation, Carnegie Corporation, Beautiful Foundation, and New York Community Trust, authored a spectacularly revealing 2011 report, “We’re Not Even Allowed to Ask for Help: Debunking the Myth of the Model Minority.” This CACF report documents how Asian American families are faring in New York City. According to the report, half of New York City’s Asian American children are in families with incomes below the 200 percent of poverty threshold. Adopted Asian American Families Asian American families are also created by and through international adoption. For instance, when an adoptive family adopts a child from an Asian country, the family becomes an adopted Asian American family. A significant amount of literature on adopted Asian American families has compared adoptive mothers’ and fathers’ perceptions of their adopted child’s realities. 
What this research indicates is that adoptive parents' perceptions may be at odds with their adopted Asian American son's or daughter's experiences. Some literature on adopted Asian American families addresses Asian adoptees who are raised in white homes and who describe being cultureless. While not all adopted Asian Americans experience a sense of cultural ambivalence, society must recognize that adopted Asian Americans diversify family types in the United States. Additionally, while the configurations of Asian American families continue to evolve, cultural competence remains important. Research and literature support the idea that culturally competent adoptive parents are those who instill ethnic pride in, and share coping skills with, their children. According to some definitions and conceptualizations, cultural competency is reached when adoptive parents are racially aware (aware of racial differences between parent and child), engage in multicultural planning (build bridges between their race and their child's race and culture), and teach their children survival skills (educate their children about the realities of racism).

The Future of Asian American Families
Asian American families in the United States continue to grow more diverse and heterogeneous over time, and the compositions and cultures found in these families are incredibly varied. Household income among Asian American families also tends to be distributed in a bimodal fashion. Future research should examine how adopted Asian American families develop and evolve over time. Asian American families can now be created in many ways; does this mean they will continue to grow more diverse and heterogeneous, or less so? The continued and rapid growth of the Asian American population necessitates that states, schools, and local and national governments create systems of support and social services to serve Asian Americans. In addition, personal and political stakeholders will have to advocate for, and educate the general public about, the realities of Asian American families so that the public does not buy into the destructive myth of the Asian American model minority family.

Nicholas D. Hartlep
Illinois State University

See Also: Adoption, Mixed-Race; Chinese Immigrant Families; Model Minority Stereotype.

Further Readings
U.S. Census Bureau. "The Asian Population." http://www.census.gov/prod/cen2010/briefs/c2010br-11.pdf (Accessed June 2010).
Van Campen, K. S. and S. T. Russell. "Cultural Differences in Parenting Practices: What Asian American Families Can Teach Us." Frances McClelland Institute for Children, Youth, and Families ResearchLink, v.2 (2010).
Vang, Christopher Thao. An Educational Psychology of Methods in Multicultural Education. New York: Peter Lang, 2010.

Assimilation

Assimilation refers to the processes by which people, and family units in particular, adapt to and adopt the dominant culture in response to exposure to it in the United States. There are several competing ideas about how immigrants and members of nondominant cultures in the United States should assimilate, and these ideas have undergone substantial changes over time as the United States has become increasingly ethnically and racially diverse. Understanding assimilation requires examining the family unit and the differences in parental roles between immigrant and nonimmigrant families. This has led researchers to define two major approaches to assimilation: the linear and segmented models.

Familial and Parental Roles
Familial roles are the behavioral and psychological expectations that individuals have of others in their families. The family unit comprises the individuals who make up one's immediate support system. This may include the traditional family of a mother, father, and children. However, a family may also include extended family members (aunts and uncles), same-sex parents, stepchildren and stepsiblings, and adopted children.

Traditional families include a head of household (man/husband/father) and a homemaker (woman/wife/mother). In the traditional family framework, the parental role of the father includes providing financial resources, guiding children in what are considered male-dominated areas such as sports, and having final authority over family matters. The homemaker's role is to take care of the household in terms of cleaning, cooking, and meeting the emotional needs of family members. In the United States, however, family roles have dramatically changed since the middle of the 20th century, with many families sharing breadwinner roles and decision making more equally between husband and wife. Additionally, the number of divorced and unmarried individuals raising children has dramatically risen. Thus, there are multiple family models that immigrants may look to when assimilating.

Among members of the dominant culture, generally speaking, parents are expected to pass their culture down to their children. Among members of nondominant cultures, however, children may become the individuals who most effectively facilitate the process of assimilation for the family. Immigrant children often serve as a bridge between the immigrant culture and the dominant culture. The immigrant culture is often preserved by parents within the home, and children then assimilate into the dominant culture through school, media, and their peers. Immigrant children often speak English more fluently than their parents and serve as translators for their parents in public settings. In addition, parents may not be aware of American customs that their children learn in school, and thus children may bear the responsibility of teaching their parents to hand out candy during Halloween or to buy cards for classmates for Valentine's Day. Sometimes, immigrant children may have to contribute monetarily to the household or familiarize their parents with American sports such as baseball. As a result, the dynamics of the immigrant family unit can significantly differ from the dynamics of a traditional U.S. family as assimilation starts to occur.

Types of Assimilation
Linear assimilation is the idea that individuals will lose traits of their original culture and adopt traits of the mainstream dominant culture at a consistent rate. This type of change is said to occur because of prolonged exposure to the dominant culture, such as over several generations.
Linear assimilation has been studied by American sociologist Milton Gordon, who suggested seven stages of assimilation: cultural, marital, structural, identification, attitude reception, behavior reception, and civic.

Cultural assimilation is the process by which individuals or groups adopt the cultural norms of the dominant group. For example, Italian Americans and many Hispanic Americans have adopted English as their primary language. Patterns in dating, gender roles, and preferences for sons have also somewhat changed among Italian Americans. More specifically, family dynamics have become more American in that the male–female relationship has become more egalitarian. Parents have also begun to shed their long-held preference for sons and to regard daughters with equal value. Middle Eastern Americans have also culturally assimilated within gender expectation norms. For example, many Middle Easterners have adopted the norm of men and women interacting in public spheres, and allow their daughters more autonomy than in the past or in their country of origin.

Marital assimilation occurs when prejudice decreases and individuals begin to marry others of a different ethnic or racial affiliation. In the 19th and early 20th centuries, for example, many Italian Americans and Irish Americans were considered outsiders by other U.S. groups with European ancestry, mainly because of their religion. Over the past century, however, there has been a significant increase in intermarriage among Italian, Irish, and other European American groups, and these marriages are now readily accepted. This is in sharp contrast to the early 1900s, when prejudice extended even among Italian Americans themselves, who made distinctions between immigrants from specific areas of Italy.

Structural assimilation is the phase of assimilation marked by acceptance into a society's larger institutions. For example, one of the ways in which Irish Americans assimilated into mainstream American culture was by gaining a large presence in government entities such as the police force and the fire department. In addition, many immigrants married into Protestant Christian denominations and left the Catholic Church. This resulted in many immigrants raising their children as Protestant, further assimilating into the dominant U.S. culture.
Identification assimilation occurs when immigrants feel a commonality with the mainstream culture and feel represented by it. For example, English Americans, German Americans, or Irish Americans may now solely identify as American. These groups have assimilated into the dominant group to such an extent that they are now part of what defines the dominant culture in the United States. This form of assimilation may permit attitude-reception assimilation, which occurs when dominant group members do not express negative feelings or attitudes toward one's group. The group may also experience behavior-reception assimilation, which occurs when dominant group members do not engage in acts or behaviors that negatively impact one's group. Attitude-reception and behavior-reception assimilation describe an environment in which a group does not experience a high rate of prejudice or discrimination. The last phase, civic assimilation, occurs when there are no longer disagreements over the allocation of resources to the assimilated group.

Segmented Assimilation and the Family
Segmented assimilation suggests that society has a number of different sectors into which an individual may assimilate; assimilation is thus not simply a means of entering one dominant group. Generally, segmented assimilation focuses on two tracks. The first is assimilation into the dominant group, consisting of U.S. norms. The second is assimilation into another marginalized group in society that has a lower socioeconomic status (SES). Segmented assimilation theorists often posit that linear assimilation does not always benefit the individual.

Segmented assimilation may consist of three models: classical assimilation, assimilation into a lower SES, and selective acculturation. Classical assimilation requires immigrants to drop cultural traits from their home country and adopt the cultural values and norms of the dominant group. It most often leads to entrance into the middle class. For example, many immigrants may choose to Anglicize their surnames or give their children Anglo names in order to ensure assimilation and in hopes of entering the middle class.

Assimilation into a lower SES occurs when second- or third-generation immigrants assimilate into a group other than the middle or upper classes. This model calls into question the notion that the longer a group is exposed to the dominant culture, the more assimilated it becomes.
For example, some researchers have found that Mexican Americans who have been in the United States for several generations may experience lower levels of educational achievement and income than the dominant middle class. They have also found that these groups may sometimes experience higher rates of divorce than mainstream Americans. A number of researchers have suggested that these immigrants may have formed an "oppositional culture," whereby they have chosen to reject the norms and values of the dominant group. One may argue that whereas those who have been in the United States for many years have a choice in whether or not to assimilate, recent immigrants often feel that they need to assimilate in order to survive.

Selective acculturation occurs when individuals choose to maintain their native culture while assimilating on an economic level. For example, an immigrant father may instill his cultural values and norms in his children while not expressing those values and norms in the workplace, in an effort to ensure job security. Many Hispanic Americans may choose to speak only English at work in order to fit in and not alienate colleagues who do not understand Spanish.

Another common experience within immigrant families is dissonant acculturation, which occurs when immigrant children assimilate into mainstream American culture at a faster rate than their parents. This is especially apparent within many Middle Eastern families. For example, in many Middle Eastern countries, men and women are separated in public places, and young men and women are not permitted to interact in public. However, Middle Eastern boys and girls in the United States frequently interact in social settings and at school, where such interaction may be expected and encouraged. Expressions of sexuality are also significantly different in the United States than in many Middle Eastern countries. Dating is uncommon in Middle Eastern cultures; courting may occur only with the intention to wed. Thus, Middle Eastern parents in the United States may have an extremely negative reaction if they find out that their daughters are casually dating. Middle Eastern parents may also socialize their sons and daughters differently from Euro Americans. While both boys and girls are expected to become highly educated, generally only boys are expected to earn competitive incomes.
In contrast, women are oftentimes expected to run the household. Thus, Middle Eastern parents limit girls' interaction with the dominant group in order to preserve their culture and ensure that it is passed on to the next generation.

Familism
Some scholars have defined familism as a collectivistic trait whereby individuals put the needs of others (their family and community) before their own needs. It is a complex phenomenon with important ramifications for assimilation, and in the United States it is common among Hispanic Americans. Familism may explain why and how some Hispanic Americans do not become more assimilated after several generations in the United States. This is evident in differences between the dominant group and Hispanic Americans in the areas of income, number of children, single-parent households, and educational attainment. Some researchers have argued that familism persists because some Hispanic Americans choose to hold onto their cultural norms, finding them emotionally and cognitively rewarding. Individuals may choose to assimilate into a lower socioeconomic class because they feel a sense of identification with that group that they do not feel with the dominant group. From this perspective, familism would encourage strong family relationships, resulting in a lower divorce rate and a higher birth rate.

Familism also relates to traditional gender roles in Hispanic families. The terms machismo and marianismo are considered stereotypical and outdated by some researchers, but they describe the traditional roles of the husband and wife in a Hispanic family. The husband/father is the head of the household and holds most of the power and control; the wife/mother is expected to be patient and kind. The wife/mother's job is to take care of everyone in the household and put the needs of her husband and children ahead of her own. However, even these customs are changing; the success of feminism in American society has prompted many Hispanic American women to establish a new family dynamic, resulting in a higher degree of assimilation into Western ideals of familial roles.

Shari Paige
David Frederick
Chapman University

See Also: Acculturation; Immigrant Families; Mexican Immigrant Families; Middle East Immigrant Families.

Further Readings
Gratton, Brian, Myron P. Gutmann, and Emily Skop. "Immigrants, Their Children, and Theories of Assimilation." History of the Family: An International Quarterly, v.12 (2007).


McAuliffe, Garrett, et al., eds. Culturally Alert Counseling. Thousand Oaks, CA: Sage, 2008.
Parrillo, Vincent N. Strangers to These Shores. Boston: Pearson, 2006.
Xie, Yu and Emily Greenman. "Segmented Assimilation Theory: A Reformulation and Empirical Test." Population Studies Center (2005). http://www.psc.isr.umich.edu/pubs/abs/3443 (Accessed March 2014).

Assisted Living

Assisted living refers to arrangements that promote a philosophy of personal control and responsibility for disabled or older persons who need assistance with some daily living tasks but want to remain relatively independent. This arrangement follows the aging-in-place philosophy.

The rise of assisted living facilities (ALFs) has its roots in independent housing units (board and care homes) and organized homes for the aged. These homes typically predated the establishment of Medicare and Medicaid; they were not organized units of care, and no regulations existed to monitor their care. When Medicare was started in 1965, it began to shape the modern nursing home. Many of the board and care homes and homes for the aged were converted to nursing homes to receive funding from federal health care coverage. Not all facilities could convert, either because they did not meet the criteria or because they chose not to offer the health-related services required of the new nursing homes. These homes were still known by many names, such as board and care homes, rest homes, adult care homes, and convalescent homes. As the health care system continued to change, nursing homes became more like hospitals, and the early models of assisted living emerged in reaction to the nursing home movement.

The term assisted living was first officially used in the state of Oregon in 1985, in a pilot project to house Medicaid recipients in new residential housing. The goal was to establish a more desirable approach to senior housing and care. The assisted living movement began to develop in Oregon and Virginia at the same time. Financing for these ALFs was mainly private.

From 1994 to 2000, the assisted living movement grew with the support of strong financing; companies such as Sunrise, Atria, and others went public on Wall Street, and the assisted living concept was well marketed to the public as a positive alternative form of care. Since 2000, the movement has not grown at the same rate because of the changing economy and new policies related to housing environments.

ALF arrangements vary in structure and services. There is no clearly defined federal standard for the term assisted living; therefore, arrangements vary greatly, from serving almost totally independent adults to offering a nursing home level of care. Some federal laws affect assisted living, but most oversight occurs at the state level. Many states are moving toward a definition of assisted living facilities and licensure of these facilities. Each state determines the policies that define and regulate what care services are required for assisted living communities. Some states set regulations to clearly distinguish an assisted living community from a long-term care facility. Other areas monitored by states include care providers, food, and safety.

The lack of a clear definition does not allow for precise counts, but best estimates were that in 2009 there were 36,000 to 65,000 assisted living facilities in the United States, serving more than 1 million senior citizens. A 2005 study indexed by the U.S. National Library of Medicine estimated that these facilities generate income of about $15 billion annually. Residents are primarily female (74 percent) and white (97 percent).
Assisted living arrangements generally include the following:

• Private resident, group, or congregate living situations that provide room and board, as well as social and recreational opportunities
• Transportation
• Assistance for residents who need help with personal care needs and supportive services, including help with IADLs (instrumental activities of daily living) and ADLs (activities of daily living)
• Protective oversight or monitoring
• Help available 24 hours a day on a scheduled and unscheduled basis



ALFs range from standalone residences to one level of care within a continuing care retirement community (from independent living to the skilled nursing home level). The physical environment of assisted living often has a more homelike atmosphere. Units can be apartment style, typically studio or one-bedroom models with kitchenettes that usually feature a small refrigerator and microwave to allow for individual food preparation. Assisted living communities are designed to provide residents with a life as independent as possible. To maintain this independence, ALFs may provide assistance with basic ADLs such as bathing, dressing, and grooming. In some states, they are allowed to offer medication assistance and reminders. Assisted living communities differ from nursing homes in that they do not offer complex medical services.

Typically, assisted living communities offer their residents prepared meals three times a day and help with light housekeeping and laundry (these may be fee-for-service items). Depending on the community, residents may have access to fitness areas, swimming pools, beauty salons, a post office, and transportation. Communities also offer planned events, activities, and trips that residents can purchase, ranging from happy hours to concerts. Assisted living communities range from small homes to large campuses. Some allow residents to keep pets, if the residents can care for them, to maintain a more homelike atmosphere.

Assisted living residents usually have a slight decline in health or need assistance in performing one or more activities of daily living. Those who live in assisted living usually want to live in a social environment with little responsibility. Ideally, a facility works to provide supportive services that meet residents' needs and avoid discharge to a nursing home. Assisted living is typically paid for out of private funds, but there are a few exceptions.
Some long-term care insurance policies cover licensed assisted living facilities. Residents who are war veterans, or the spouses of veterans, may qualify for veterans' benefits that can help pay for assisted living. A limited number of state Medicaid programs fund waiver programs to help with assisted living costs.

An assisted living facility is not a permanent home but a step along the continuum of long-term care. Most residents (77 percent) move on to nursing home care.

Janice Kay Purk
Mansfield University

See Also: Caring for the Elderly; Elder Abuse; Nursing Homes.

Further Readings
Assisted Living Federation of America. http://www.alfa.org/alfa/default.asp (Accessed May 2013).
"Genworth 2013 Cost of Care Survey." https://www.genworth.com/corporate/about-genworth/industry-expertise/cost-of-care.html (Accessed May 2013).
U.S. National Library of Medicine. http://www.nlm.nih.gov/medlineplus/assistedliving.html (Accessed May 2013).

Assisted Reproduction Technology

Assisted reproductive technology (ART) is an overarching term for the methods used to achieve a pregnancy through artificial or partially artificial means (i.e., when intercourse is not used to achieve pregnancy). A broad range of ARTs have emerged since the 1960s. Established technologies such as artificial insemination (AI) and in vitro fertilization (IVF), as well as the newer technique of egg donation, have created opportunities for many previously infertile couples to have genetic children, or to rely on the gametes of others to produce a child who has a genetic connection to only one parent, or to neither. Because these technologies have been made available to a broad variety of individuals, including single mothers by choice and both gay and lesbian couples, a broader variety of families have become commonplace. Simultaneously, the growing labor-force participation of women has led many to postpone their first pregnancies, and the new technology of egg freezing may make it possible for these women to counteract the natural decline in fertility that comes with age. All of these technologies, and the family forms to which they contribute, have been the subject of widespread critique and commentary.

Family Secrets
About 10 to 15 percent of heterosexual couples in the United States have difficulty becoming or staying pregnant after one year of having unprotected sex. In earlier eras, there were limited methods available to assist these couples. Although artificial insemination is hundreds of years old, it became common in humans only in the second half of the 20th century, and for many years after that it was the only method that successfully treated infertility caused by a primary deficit in the husband's gametes. The doctor secured donor sperm, often from local medical students, to impregnate the wife. Sometimes husbands were not told about this procedure; sometimes their sperm was mixed with that of another man; often, couples were told to have intercourse after the insemination. Whatever the arrangement, marriage law made the husband the legal father. Because DNA tests were not widely available until the 1990s, the use of another man's gametes was shrouded in secrecy. As donor-conceived (DC) children grew up, they might wonder whom they resembled, but few families discussed their method of conception. Only recently have families begun to be more open about these issues. Now, families grapple with questions about whether to disclose AI to their children (and if so, when), whether a donor should be allowed to remain anonymous, how much importance to accord the donor, and whether the donor should be viewed as genetic matter alone or as some sort of social relative.

The New Medical Era
The births of Louise Joy Brown in 1978 in England and Elizabeth Jordan Carr in 1981 in the United States ushered in a new era of reproductive medicine. These births relied on the manipulation of a couple's gametes, and the children were dubbed "test tube babies" because the embryo was created outside the mother's womb and then implanted in her.
This new technology of IVF was greeted by some with moral repugnance: the Catholic Church opposed IVF because it separated the marital embrace of love from the conception of children, while others (including many feminists) worried that the technology could allow wealthy women to hire women of lower social classes to have children for them.

IVF initially offered women with medical conditions, such as blocked fallopian tubes, a way to become pregnant with their own eggs. IVF was also sought by couples with a combination of infertility issues and by couples who wished to avoid passing genetic problems on to a child (because they could select embryos that did not carry certain markers). In the early years, the success rates were low, the procedure was offered only to heterosexual married couples, and few could afford it. Some of this has changed: success rates have risen to approach natural fertility for women under 38 years old (30 percent of all ART cycles result in a live birth), the procedure is more widely available, and more states now require insurance policies to cover infertility treatments. Women over 38 may also rely on IVF, but donor eggs increase their chances of having a child. And whether young or old, individuals desperate to have a child through a mother's pregnancy rather than by adoption often risk financial insecurity in repeated rounds of IVF, which, though less costly than it once was, is still expensive.

Commercialization of Gametes
The commercialization of gametes to be used in AI and IVF began slowly, and major shifts have occurred in access to such gametes. In 1980, there were only 17 sperm banks across the country that sold frozen sperm to their customers. These banks offered limited information to clients, who used the banks to select donors for appropriate genetic and physical characteristics, including height, weight, and looks. By way of contrast, as of 2013 there are over 100 banks that supply frozen sperm. Even though the majority of sperm donors remain anonymous, banks now provide clients with extensive information about a donor's physical characteristics, personality, and interests.
Although the first successful use of an egg donation leading to a live birth occurred in 1984, the practice of freezing unfertilized eggs is still in its infancy. When clients purchase eggs, they go through the same IVF procedure that is used with their own eggs (whether they are using their partner's sperm or donor sperm) to create embryos that are placed either into the mother-to-be or a surrogate. Extra embryos are frozen for later use. Egg donation is a more invasive procedure than sperm donation, involving the extraction of large quantities of eggs produced through hormonal manipulation of the female patient. The purchase of younger women's eggs, mostly by older women (and couples), is highly controversial; among the reasons for concern is the lack of research on the long-term effects of this procedure on the health and future fertility of the donor.

New Families
As family diversity has become more accepted, single mothers by choice and lesbian and gay couples are now major consumers of new reproductive technologies. In order to create families, these couples need to purchase gametes, as well as some form of assisted reproduction, even if it is AI. (Although heterosexual couples continue to use ARTs, a major technological development—intracytoplasmic sperm injection—in the mid-1990s for men with low sperm counts has led to fewer heterosexual couples seeking sperm donors.)

Norms have also changed about issues of disclosure of sperm and egg donation. Among lesbian couples and single mothers, disclosure is likely to occur as part of a child's birth narrative. When disclosure is early, children view donor conception as a natural part of their lives; when disclosure is later, children report feeling surprised and shocked. Later disclosure and its disruptive effects are both more frequent among two-parent, heterosexual families than among other family forms.

Some parents of donor conceived (DC) children and some DC children believe that the donors should not be anonymous. Parents want information so that they know more about hereditary conditions that might affect their children's health and well-being. Children want information to satisfy their curiosity and to learn more about issues of identity. As a byproduct of the commercialization of gamete banks, the parents of DC children—and the children—can list their donor's number on registries provided by sperm banks and independent agencies; DC children can now meet offspring who share their donor.

New Technology
Egg freezing has only been commercially available in U.S. IVF clinics since October 2012, when the American Society for Reproductive Medicine (ASRM) lifted the experimental label on that technology. Because the technology is so new, few live births have been achieved with it compared to those achieved through other well-established ART procedures. There is little data about the effectiveness of egg freezing and any possible health risks. Even so, it is likely to become an increasingly widespread practice among women who wish to have biological children but want to postpone childbearing until a later age. Oocyte cryopreservation is medicine's newest answer to the "fertility penalty" that forces women to choose between furthering their careers and starting a family during their prime childbearing years; egg freezing thus mutes the ticking of the biological clock. However, egg freezing is an individual medical solution to what is in reality the collective, social problem of gender inequality in the workplace. As is the case for IVF, this is a procedure largely available to the wealthy: clients must pay upward of $15,000 for the retrieval and freezing procedures alone.


Conclusion
More and more individuals (alone and as part of couples) are now turning to the use of purchased gametes or their own frozen gametes, in combination with the full range of assisted reproductive technologies, to make a family. The families that result offer new possibilities for constructing narratives about the meaning of nature versus nurture, or of biological and social influences on individuals and the relationships among them. Technological innovation has combined with the market in donor gametes and the new possibility of contact among donors, recipients, and individuals who share the same donor to give rise to a brave new world of social arrangements. These new arrangements have not yet acquired legal standing, even as they expand the boundaries of kinship in novel ways.

Rosanna Hertz
Wellesley College

Margaret K. Nelson
Middlebury College

See Also: Artificial Insemination; Fertility; Genetics and Heredity; Infertility; Technology.


Further Readings
Hertz, Rosanna, Margaret K. Nelson, and Wendy Kramer. "Donor Conceived Offspring Conceive of the Donor: The Relevance of Age, Awareness, and Family Structure." Social Science and Medicine, v.86 (2013).
Luke, Barbara, et al. "Cumulative Birth Rates With Linked Assisted Reproductive Technology Cycles." New England Journal of Medicine, v.366 (2012).
Spar, Debora L. The Baby Business: How Money, Science, and Politics Drive the Commerce of Conception. Cambridge, MA: Harvard Business Review Press, 2006.

Association of Family and Conciliation Courts
The Association of Family and Conciliation Courts (AFCC) is a worldwide organization of professionals in several disciplines who work with family courts and troubled families. It is one of the only major organizations in that field that represents several professions. The AFCC has been a major force in many of the innovations for families over the past 50 years, including marriage counseling, divorce mediation, family courts, joint custody, court-referred family therapy, collaborative divorce, child representation, divorce education, parent coordinators, and child custody evaluations.

The group was founded as the California Conference of Conciliation Courts in 1963. It consisted of California judges and marriage counselors, mostly from the conciliation courts, a separate court system that helped reconcile people in troubled marriages before they reached the point of filing for divorce. In 1965, "California" was dropped from the title as interest spread outside the state.

By 1970, the organization's mission was expanding. As the group's newsletter editor Meyer Elkin put it in the title of an editorial, "A Conciliation Court Is More Than a Reconciliation Court." California had enacted a no-fault divorce law, drafted by a commission whose original mission was to reduce divorce by expanding the conciliation courts' work. The commission's proposal paired (1) the abolition of "fault" divorce with (2) an array of family courts providing both legal and counseling services,

achieving reconciliation whenever possible, and a simple, dignified divorce when not. However, that second prong of the proposal required government funding, so it was never enacted. No-fault divorce led to a flood of additional conflicts that cried out for the methods and skills that had developed in the conciliation courts. More unilateral forms of divorce, and fairer, more comprehensive laws about children and finances, gave courts far more issues to decide in a divorce. In the old system, economic and child-related issues were usually settled as part of a couple’s divorce agreement, but now the decision to divorce came first, with those issues litigated later. The era’s focus on justice and equality opened up more economic and child-related issues for wider and fiercer litigation. This and the increasing divorce rate moved court professionals’ daily work, and the exciting frontiers of innovation, away from reconciliation, toward family therapy, child custody evaluations, and mediation of economic and child custody issues. In 1973, the Los Angeles Conciliation Court began a pilot program of mediation of custody and visitation. Many conciliation courts started divorce education workshops. In 1976, the conference changed its name to the Association of Family Conciliation Courts (AFCC). By the late 1970s, it had grown to approximately 900 members in the United States and Canada, with several state chapters. By the 1980s, mediation, joint custody, domestic violence, and stepfamilies had become key issues for the AFCC. An “and” was added to the name, changing from “Family Conciliation” to “Family and Conciliation.” By the 1990s, the group was a longstanding authority on services for families. It conducted leading studies of such topics as custody, mediation, and domestic violence. It began a project to improve the education of family lawyers, and developed standards for family mediators. 
It worked with organizations in other countries to co-host international conferences, notably the World Congress on Family Law and the Rights of Children and Youth. As of 2013, the AFCC had over 4,800 members in 19 countries, from many different occupations including judges, lawyers, mediators, psychologists, psychiatrists, researchers, academics, counselors, court commissioners and administrators, custody evaluators, parenting coordinators, social workers, and financial planners.




The scope of issues it works on has also grown to include self-represented litigants, domestic abuse, same-sex couples, never-married parents, dependency mediation, alienated children, nonresidential parenting, relocation, custody evaluations, and family preservation. Reconciliation, through the more modern techniques of marriage therapy and preventive marriage skills education, is still a small part of this array. An attempt to remove "Conciliation" from the name was rejected in recent years.

The AFCC raises awareness of these issues and brings together professionals who work with families to find and promote the best ways of dealing with them. Its journal, the Family Court Review, publishes original research and public-policy proposals. The AFCC also reaches professionals through training for custody evaluators and parenting coordinators in various regions, its annual conferences held in various areas of North America, the World Congresses, and joint conferences with other leading professional organizations such as the American Bar Association and the American Academy of Matrimonial Lawyers. The Family Court Review describes the AFCC's goal as seeking "a more collaborative, interdisciplinary, and forward-looking family dispute resolution regime," and often advocates "a public health approach" to particular family issues and the process of family breakdown as a whole.

John Crouch
Independent Scholar

Tiffany Ashton
American University

See Also: Child Support; Divorce and Separation; Family Mediation/Divorce Mediation; Fathers' Rights; No Fault Divorce.

Further Readings
Association of Family and Conciliation Courts. http://www.afccnet.org (Accessed November 2013).
Burke, Louis H. With This Ring. New York: McGraw-Hill, 1958.
Emery, Robert E. Renegotiating Family Relationships: Divorce, Child Custody, and Mediation. New York: Guilford Press, 2012.
Family Court Review. http://www.afccnet.org/Publications/FamilyCourtReview (Accessed November 2013).


Atheists
In accordance with the adage "the family that prays together stays together," individuals in the United States often feel that shared religious beliefs are the glue that strengthens familial bonds and deepens love between relatives. Estimates vary widely, but national surveys suggest that a vast majority of U.S. citizens, somewhere between 80 and 95 percent, report having a religious affiliation and a belief in a god or gods. For many families, religious practice provides time for them to get together and share rituals, traditions, and spiritual experiences. Important events such as births, deaths, and marriages that further bind families are all typically marked by religious ceremonies.

Beyond the key role of religion in family rituals, religion is also seen as a force that promotes well-being. Specifically, religious practice is often perceived as uniformly good for physical and mental health, and thus even nonreligious or atheist family members may feel disinclined to "deprive" children, partners, or aging parents of spiritual practice and participation in a religious community. However, because rates of atheism are rising in the United States, families must learn to navigate tensions between their religious and atheist members. Among the reasons suggested to explain why people are turning away from religious belief are that organized religion has become overly judgmental, hypocritical, and political.

Though definitions of atheism vary, they typically capture a spectrum of nonbelief. In many cases, atheism may be described as the lack of belief in a higher power or God/gods. This is in contrast to agnosticism, which is typically described as the inability to know with complete certainty whether God/gods exist or not.

Atheism in the United States
Somewhere between 4 and 15 percent of individuals in the United States identify as atheist.
Upon closer examination, several demographic characteristics of atheists emerge: they tend to reside in the northeast or west, are well educated, are politically liberal or independent, and are disproportionately white and male. Research suggests that approximately 70 percent of those who identify as atheist are men. This is a significant trend because women often take on the majority of childrearing in traditional American families. These demographic trends demonstrate that atheist

identification is less common for women and people of color, and that levels of religious involvement are higher for these groups.

When considering levels of religious participation across the lifespan, older adults demonstrate the highest levels of religious engagement of any age group. Particularly compared to millennials, over half of whom report disenchantment with religion today, older adults remain invested in their faiths. Religious institutions often offer social, economic, and other forms of support to their elderly congregation members, and many social activities for older people are rooted in churches. National data from the 2010 General Social Survey revealed that roughly 40 percent of people aged 75 years or older attend religious services at least once a week, compared to 12 percent of people 18 to 29 years old. Conversely, only 13 percent of people aged 50 to 69 are nonreligious, and only 7 percent of people over 70 are nonreligious. Far fewer children born today will grow up in religious households, compared to earlier generations.

Atheism Within the Family
Historically, families have relied on religious institutions to help provide their children with a moral education. In many ways, religion has a lot to offer parents. As noted by atheist parenting expert Dale McGowan, organized religion ensures that parents will have access to a predefined set of values, an established community, a means of engendering wonder, rites of passage, consoling explanations to ease hardship and loss, and comforting answers to big questions.

Raising children as atheists requires parents to be able to articulate the foundations of their values and beliefs and to engage in constant reflection on what they view to be good and true. In addition to their personal identity explorations, parents must facilitate the formation of their children's beliefs, without controlling the process or being overly proselytizing in deconversion.
Some scholars suggest that parents can begin the deconversion process toward atheism by promoting curiosity, encouraging religious literacy, normalizing disbelief, questioning authority, and encouraging active moral development. Moreover, atheist parents must focus on counteracting negative societal views of nonbelievers by demonstrating that atheists, like religious people, can lead good, compassionate, and moral lives, without

a belief in higher powers or a god. In sum, atheist parents must learn to forge meaning and morals for their children without the traditional maps, guidebooks, or compasses provided by religion. In a nation that broadly embraces religiosity in childrearing, a scarcity of resources exists for those who choose to raise children without faith.

Beyond atheist people raising children as nonbelievers, what happens when a member of a religious family decides to shed his or her beliefs and "come out" as atheist? The ramifications of being a nonbeliever can be perceived as dire by religious relatives. For example, in the Church of Jesus Christ of Latter-day Saints, family members must all be believers and undergo shared rituals in order to remain together in an "eternal family" during the afterlife. Therefore, in the eyes of the faithful, nonbelievers are not just abandoning their current family obligations, but also sealing their fate that they will not be together in heaven.

Religious deconversion is a slow and painstaking process for many individuals due to the guilt they experience for questioning their family's belief system and fear of what could potentially happen to them. They may be shunned by family and friends, or believe they will be punished by God. A primary trigger for religious deconversion is a growing sense of skepticism or incredulity about the claims made in religious scriptures. Atheist individuals who were raised religious also report that social causes—such as the treatment of women and gay/lesbian people—or learning about science and other cultures made their beliefs seem preposterous and impossible to maintain. Thus, atheist people in religious families may view leaving their faith as synonymous with hurting their loved ones who remain active believers.

Melanie E. Brewster
Jacob S. Sawyer
Teachers College, Columbia University

See Also: Agnostics; Catholicism; Christianity.

Further Readings
Barbour, John D.
Versions of Deconversion: Autobiography and the Loss of Faith. Charlottesville: University Press of Virginia, 1994. Keysar, Ariela, and Barry A. Kosmin. Secularism & Science in the 21st Century. Hartford, CT: Institute for the Study of Secularism in Society and Culture, 2008.

Attachment Parenting

McGowan, Dale, ed. Parenting Beyond Belief: On Raising Ethical, Caring Kids Without Religion. New York: American Management Association, 2007.
Miller, William R., and Carl E. Thoresen. "Spirituality, Religion, and Health: An Emerging Research Field." American Psychologist, v.58 (2003).
Putnam, Robert D., and David E. Campbell. American Grace: How Religion Divides and Unites Us. New York: Simon & Schuster, 2010.

Attachment Parenting
Dr. William Sears and his wife, Martha, created the childrearing philosophy of attachment parenting, which they based on the psychological theory of attachment. This theory holds that the infant-parent bond is crucial to appropriate development. Attachment parenting is the application of these theories to the behaviors of infant caregivers. The goal is a secure attachment between primary caregiver and infant, achieved through constant sensitivity to the needs of, and communication by, the infant.

The Searses advocate specific practices to foster secure attachments; the three most significant are breastfeeding, babywearing, and co-sleeping. They argue that breastfeeding is ideal because, in addition to its reported health effects, it teaches the infant that its mother will consistently respond to its needs. Moreover, they argue, breastfeeding helps the mother become an expert on her baby as she learns to read his or her various cries, expressions, and gestures. Extended breastfeeding beyond the cultural norm is highly recommended. The Searses also promote the literal attachment of babywearing, or wearing the infant in a cloth carrier or sling. Babywearing increases skin-to-skin contact and ensures that the baby will feel physically secure. The third primary recommendation is co-sleeping, or bedding the baby in the same room, or even the same bed, as the caregiver, rather than in a crib. This is a continuation of attachment through the night, which teaches the infant that its needs will be met 24 hours a day.

The practice of attachment parenting has become more popular since the Searses published The Baby Book in 1992, and although no statistics are available, the number of self-identified attachment parents appears small.


Practices
Attachment Parenting International (API), the international association for parents who follow the philosophy, argues that attachment parenting has become a buzzword that suffers from definitional slippage. In an attempt to clarify the practice, API lists eight fundamental principles. First, API suggests avoiding negative thoughts and feelings about pregnancy in order to prepare for the demanding labor of parenting. Second, following the Searses' prescription, API advocates breastfeeding. Third, parents should address all infant expressions of emotion as serious forms of communication, rather than ignoring or punishing the child. Fourth, like the Searses, API recommends that parents physically comfort and touch their babies as much as possible through babywearing. Fifth, attachment parents should practice co-sleeping so that they will be available to their infants during the night; some attachment parents extend this principle to the practice of sharing their bed with their infants. Sixth, API promotes consistent care, advising that secure attachment is best formed with the near-constant presence of a parent. Seventh, positive discipline—redirection, distraction, and modeling—is encouraged over punitive discipline. Finally, attachment parents should seek family-life balance, because stressed-out caregivers are likely to be emotionally unresponsive to their infants, which can result in inappropriate attachment.

Criticisms

Attachment parenting has its critics. The most basic criticisms are biological, citing the contradictory evidence on the health effects of breastfeeding and the concern that bed sharing has been linked to sudden infant death syndrome (SIDS). Other critics express theoretical concerns about the concept of attachment, arguing that it is mischaracterized as an immutable psychological trait. Rather, it is socially determined by experiences throughout the lifespan, and thus proponents of attachment parenting overstate the lifelong impact of early relationships. Other criticisms are social and historical. Attachment theory originated in the 1950s, in a culture that heavily promoted full-time, stay-at-home motherhood. Critics point out that this is an outdated model because most mothers in the United States work outside the home, and many children have multiple caregivers with whom they form consistent, secure


attachments. The Searses advise mothers to wear their babies to work or take out loans so that they can stay home with them, advice that opponents say ignores the economic realities of many parents' lives. Daycare is another point of contention. Although proponents of attachment parenting generally use the terms parents or caregivers, opponents of the practice complain that the burden of constant care falls most heavily on mothers. Feminist critics argue that attachment parenting promotes an unattainable ideal of perfect motherhood that works against an equitable division of labor within marriage. Mary Ainsworth, a founding attachment theory researcher who conducted the first empirical research on attachment in human babies, acknowledged that an overemphasis on the welfare of the child ignores the needs of the mother. Recent studies have found that attachment mothers tend to report higher levels of stress and lower levels of satisfaction than other parents.

Feminist Proponents
However, some proponents have made a feminist case for attachment parenting, arguing that attachment mothers have wisely chosen to make their own decisions through natural childrearing and have opted out of the patriarchal workforce. Until American society values the work of parenting through policies like maternity/paternity leave, flex-time, and affordable, high-quality daycare, these mothers contend that attachment parenting is the only way to ensure that their infants receive proper care. Although the method is more flexible than opponents claim (the API suggests adapting the eight principles to the needs of each family), the attachment parenting style nevertheless taps into the major debates over women's roles since the 1960s.

Keira V. Williams
Texas Tech University

See Also: Attachment Theories; Child-Rearing Manuals; Day Care; Intensive Mothering; Marital Division of Labor; Maternity Leaves; Mommy Wars; Myth of Motherhood; Parenting Styles; Paternity Leaves.
Further Readings
Attachment Parenting International. "API's Eight Principles of Parenting." http://www.attachmentparenting.org (Accessed June 2013).

Hays, Sharon. "The Fallacious Assumptions and Unrealistic Prescriptions of Attachment Theory: A Comment on 'Parents' Socioemotional Investment in Children.'" Journal of Marriage and the Family, v.60/3 (1998).
Rizzo, Kathryn M., Holly H. Schiffrin, and Miriam Liss. "Insight Into the Parenthood Paradox: Mental Health Outcomes of Intensive Mothering." Journal of Child and Family Studies, v.22/5 (2012).
Sears, William, and Martha Sears. The Attachment Parenting Book: A Commonsense Guide to Understanding and Nurturing Your Child. Boston: Little, Brown, 2011.

Attachment Theories
In the 1960s, psychologist John Bowlby described attachment as a "lasting psychological connectedness between human beings." Like psychoanalysts before him, Bowlby viewed early childhood relationships as having a profound influence on future adult behavior and relationships. That is, a person's earliest relationships establish attachment and relationship styles that carry over into adulthood.

In his early research, Bowlby studied the severe emotional distress experienced by infants who were separated from their caregivers. He observed the intense behavioral reactions, such as frantic crying, clinging, and searching, that infants would display to prevent separation from their parents or to re-establish physical proximity to them. Bowlby noted that attachment behaviors kept the infant close to the caregiver, who provided support, protection, and care and established the child's sense of security. If the caregiver was a dependable figure, the child established a secure base from which to explore the world.

Unlike his psychoanalytic contemporaries, who believed that such behaviors were manifestations of immature defense mechanisms designed to repress emotional pain, Bowlby theorized that attachment has an evolutionary function. Because infants cannot care for themselves and must rely on adults, he argued, over the course of evolutionary history infants who were able to maintain proximity to an attachment figure would have been the most likely to survive. "The propensity to make strong emotional bonds




to particular individuals [is] a basic component of human nature," Bowlby wrote.

Attachment Theory and Natural Selection
Furthermore, Bowlby believed that attachment behaviors such as crying, fussing, and pleading were gradually "designed" by natural selection to regulate proximity to an attachment figure. These behaviors are adaptive responses to separation from a primary attachment figure. Bowlby described how the attachment behavioral system was most important early in life, but added that it also becomes active in adulthood when a person needs comfort and seeks proximity to an attachment figure. The attachment behavioral system is an important concept in attachment theory because it provides the conceptual linkage between ethological models of human development and modern theories of emotion regulation and personality.

According to Bowlby, the attachment system asks the fundamental question: Is the attachment figure nearby, accessible, and attentive? If the child perceives the answer to be "yes," he or she feels confident to explore the environment, play with others, and be sociable. In short, the child feels loved and secure. However, if the child perceives the answer to be "no," he or she seeks proximity to the attachment figure, and until he or she reestablishes this physical or psychological closeness, he or she experiences anxiety. According to Bowlby, if the attachment figure remains unavailable, as in a prolonged separation or loss, the child experiences profound despair and depression.

In summarizing Bowlby's work, psychologists Joseph Obegi and Ety Berant describe how human beings also develop mental representations of the qualities of attachment figures so that they can help themselves in the absence of those figures. Obegi and Berant further note that in adolescence and adulthood, other attachment figures become the object of support; these include siblings, extended family members, and intimate partners. Of course, teenagers and adults generally do not cry uncontrollably as an attachment strategy. More likely, they utilize more mature communication techniques, such as talking, texting, or activating mental representations, to connect with others and to reassure themselves that they are loved and cared for. In this way many negative emotions, such as sadness or anger, are regulated and the adult can maintain an emotional balance.

The Components of Attachment and Attachment Styles
There are four key components of attachment:

• Safe haven: The attachment figure is a safe haven for the child because he or she provides comfort in times of need.
• Secure base: The caregiver provides a secure and dependable base from which the child can explore the world and take risks.
• Proximity maintenance: The child strives to stay near the caregiver. Some researchers believe that the evolutionary purpose of proximity maintenance is to help the child stay alive.
• Separation distress: When separated from the caregiver, the child becomes upset and distressed.

Studies in the 1970s demonstrating separation distress were conducted by another researcher closely associated with attachment theory, Mary Ainsworth. In her well-known 1978 "strange situation" study, Ainsworth observed infants' responses to being left alone and then reunited with their mothers. Twelve- to 18-month-old infants were separated from their mothers in a laboratory situation; however, only about 60 percent showed typical signs of separation distress. When reunited with their mothers, this group showed joy, initiated contact with their mothers, responded positively to being held, and returned to playing with toys. Ainsworth labeled these children "securely attached." They seemed to trust that when their attachment systems were activated, they could regain proximity with their attachment figure.

However, not all of the infants showed this expected behavior. Twenty percent, for example, did not show any separation distress when separated from their mothers, as if their attachment systems were not activated. Then, they ignored or actively turned away from their mothers upon their return. Ainsworth called these children "avoidant insecure attached." Still other infants and toddlers protested being separated from their mothers, but they could not be soothed by the mother upon her return.
Ainsworth called these children (less than 20 percent) “resistant insecure attached.” Thus began the


classification and study of attachment styles. Later, in 1986, researchers Mary Main and Judith Solomon added a fourth attachment style, "disorganized-insecure attached," based upon their research.

Ainsworth's research demonstrated that these individual differences were correlated with infant–parent interactions in the home during the first year of life. Children who appear secure in the strange situation, for example, tend to have parents who are responsive to their needs. Children who appear insecure in the strange situation (i.e., avoidant or resistant insecure) often have parents who are insensitive to their needs, inconsistent, or rejecting in the care they provide. In 2011, Jude Cassidy and Jonathan Mohr wrote that "based on repeated daily interactions with an attachment figure, babies develop reasonably accurate (mental) representations of how the attachment figure is likely to respond to their attachment behavior." In describing the development of secure attachment, Daniel Siegel stated that the "repeated experiences of parents reducing uncomfortable emotions (e.g. fear, anxiety, sadness), enabling [the] child to feel soothed and safe when upset, become encoded in implicit memory as expectations and then as mental models or schemata of attachment, which serve to help the child feel an internal sense of a secure base in the world."

In discussing how insecure attachments are formed, Byron Egeland, Elizabeth Carlson, and Alan Sroufe stated in 1993 that "caregivers who are generally unavailable and rejecting have infants with internal representations of themselves as unworthy and unlovable," and that research indicates that maternal depressive behavior leads to insecure attachment. In contrast to securely attached children, insecurely attached children have learned over time that it takes more dramatic behaviors to elicit the needed response from their caregivers, and this behavior may lead to discomfort in some parents.
Such behaviors may include poor self-esteem and self-regulation, aggression toward or isolation from peers, and low frustration tolerance, according to Egeland and colleagues.

Different Attachment Styles

Children who feel secure and are able to depend on their adult caregivers exhibit secure attachment. When the adult leaves, the child may be upset, but he or she feels assured that the parent or caregiver will return; upon the caregiver’s return, the child feels happy. When frightened, securely attached children will seek comfort from caregivers. These children know that their parent or caregiver will provide comfort and reassurance, so they are comfortable seeking them out in times of need. Ambivalently attached children usually become very distressed when a parent leaves. This attachment style is relatively uncommon, affecting an estimated 7 to 15 percent of U.S. children. Research suggests that ambivalent attachment is a result of poor caregiver availability; that is, these children cannot depend on their caregiver (attachment figure) to be there when the child is in need. Children with an avoidant attachment tend to avoid parents or caregivers. When offered a choice, these children will show no preference between a caregiver and a complete stranger. Research has suggested that this attachment style might be a result of abusive or neglectful caregivers. Children who are punished for relying on a caregiver will learn to avoid seeking help in the future.

Adult Attachment

Since Ainsworth’s strange situation study, a number of researchers have demonstrated links between early parental sensitivity and responsiveness and attachment security in adulthood. Specifically, failure to form secure attachments early in life can have a negative impact on behavior in later childhood and throughout life. For example, children diagnosed with oppositional-defiant disorder (ODD), conduct disorder (CD), or post-traumatic stress disorder (PTSD) frequently display attachment problems, possibly due to early abuse, neglect, or trauma. Adult relationships are also attachment relationships, and early life experiences can significantly affect later relationships. However, the experiences that a person has throughout his or her life can help overcome an anxious or ambivalent attachment style generated in childhood, so people are not stuck with an attachment style that developed from being poorly parented. Nevertheless, early attachments can have a serious impact on later relationships.
For example, adults who have good self-esteem and happy, long-lasting romantic relationships also typically had secure attachments in childhood. Those who could count on their parents to be there for them as children are confident that their partners will be there for them as adults. In contrast, insecure adults worry that others may not completely love them, whereas avoidant adults appear not to care about close relationships.




Researchers now understand that it is not purely a parent’s behaviors that create an insecure attachment; rather, it is a combination of parent, child, and environmental factors. According to Egeland, Carlson, and Sroufe, parental contributions to insecure attachment include ineffective or insensitive care, physical or emotional unavailability, abuse and neglect, substance abuse, parental psychopathology, and prolonged absence. Child contributions include the child’s emotional unavailability, difficult temperament, premature birth, medical conditions causing unrelieved pain, hospitalizations, failure to thrive syndrome, congenital problems, and genetic disorders. Environmental contributions include poverty, a violent atmosphere, lack of support from the father or extended family members, multiple out-of-home placements, family disorganization, and lack of stimulation.

Whether a person is secure or insecure in his or her intimate adult relationships may reflect the experiences that he or she had with parents or other caretakers. Specifically, Bowlby’s notion of mental representations suggests that a person’s early caregiving relationships help form expectations about how relationships should work. Once a person develops such beliefs, he or she seeks out relationships that confirm them. A secure person believes that others will be there for him or her and seeks out others who confirm this, whereas an insecure person may believe that others cannot be counted on and thus seeks others who prove him or her right. According to Kim Bartholomew and Leonard Horowitz (1991), there are four styles of adult attachment: secure (positive view of self and others), preoccupied (negative view of self, positive view of others), dismissing (positive view of self, negative view of others), and fearful (negative view of self and others).
Neil Ribner
Jason Ribner
Alliant International University

See Also: Attachment Parenting; Bowlby, John; Brazelton, T. Berry; Parenting Styles.

Further Readings

Ainsworth, M., M. Blehar, E. Waters, and S. Wall. Patterns of Attachment: A Psychological Study of the Strange Situation. Hillsdale, NJ: Erlbaum, 1978.


Bartholomew, Kim and Leonard M. Horowitz. “Attachment Styles Among Young Adults: A Test of a Four-Category Model.” Journal of Personality and Social Psychology, v.61 (1991).
Bowlby, J. “Attachment Theory, Separation Anxiety and Mourning.” In American Handbook of Psychiatry, David A. Hamburg and Keith H. Brodie, eds. New York: Basic Books, 1975.
Bowlby, J. A Secure Base: Parent-Child Attachment and Healthy Human Development. London: Routledge, 1988.
Cassidy, J. and J. J. Mohr. “Unsolvable Fear, Trauma, and Psychopathology: Theory, Research, and Clinical Considerations Related to Disorganized Attachment Across the Lifespan.” Clinical Psychology: Science and Practice, v.8 (2001).
Egeland, B., E. Carlson, and L. A. Sroufe. “Resilience as Process.” Development and Psychopathology, v.5 (1993).
Hanson, R. F. and E. G. Spratt. “Reactive Attachment Disorder: What We Know About the Disorder and Implications for Treatment.” Child Maltreatment, v.5 (2000).
Main, M. and J. Solomon. “Discovery of an Insecure-Disorganized/Disoriented Attachment Pattern: Procedures, Findings and Implications for the Classification of Behavior.” In Affective Development in Infancy, T. B. Brazelton and M. Yogman, eds. Norwood, NJ: Ablex, 1986.
Obegi, Joseph H. and Ety Berant. Attachment Theory and Research in Clinical Work With Adults. New York: Guilford, 2009.
Siegel, D. “Toward an Interpersonal Neurobiology of the Developing Mind: Attachment Relationships, ‘Mindsight,’ and Neural Integration.” Infant Mental Health Journal, v.22 (2001).

Automobiles

From the first family vehicles introduced in the early days of the industry to the wide variety of styles available in the 21st century, automobiles are a major element of American family life. Until the 1960s, the United States consistently had more automobiles than any other nation in the world. The first cars were considered novelties, were driven on ill-kept roads, and often broke down on family drives. In contrast, modern automobiles are usually driven on well-maintained roads, and if the cars are well maintained, they seldom break down in transit. All family vehicles share the utilitarian purpose of transporting families in their daily lives and carrying them on family trips to both near and distant destinations. The choice of family vehicles has shifted in response to increased numbers of women and teenagers in the labor force and to an increase in the number of single-parent families, which are far more likely to be headed by females than by males. The increase in the number of blended families means that families may be larger than those of the late 20th century, calling for larger vehicles. Evolving technology continues to make cars faster, safer, and more accommodating, but it may also be contributing to the decline of family togetherness, according to some critics. In addition to DVD players, Wi-Fi access, and satellite radio, which offer families numerous entertainment options, individual family members often spend entire family trips on personal devices such as smartphones, portable gaming systems, e-readers, tablet computers, and portable music players.

Early Automobiles and Family Life

Brothers Charles and Frank Duryea created the first American gasoline-powered car in 1893. Other inventors had launched electric and steam-powered vehicles, but these never caught on with American families. As a result, 86 percent of cars on the road in the 1910s were gasoline-powered, which at the time were considered more environmentally friendly in urban areas than horses. In 1902, Ransom E. Olds became the first manufacturer to mass-produce automobiles, but Henry Ford changed American life with the introduction of the affordable, family-friendly Model T in 1908. Ford was the first manufacturer to harness the power of the moving assembly line and to pay workers well enough to afford the product they produced.
The Model T quickly became the most popular car on U.S. roads due to its affordability and durability. In 1916, Congress passed the Federal Aid Road Act, which allotted $75 million to upgrade rural roads used by federal mail carriers, making automobile travel safer and less taxing on drivers and their automobiles. By 1917, 40 percent of all cars on the road were Fords.

Initially, automobiles were novelty contraptions for the wealthy, beyond the grasp of most American families. On millions of U.S. farms, however, the situation was somewhat different. The automobile came to be considered a necessity, and it was used for both work and pleasure. In 1910, 11 percent of all automobiles in operation in the United States were purchased for farm use. Farmers could buy automobiles, along with other farm equipment, from local suppliers. About half of operating expenses went to buying gasoline for farm vehicles. By the time the United States entered World War I in 1917, 27 percent of all automobiles were bought by farmers. By 1920, 8.1 million automobiles were registered in the United States. When installment plans for purchasing automobiles were introduced in 1925, many average families were able to join the ranks of automobile owners. Two years later, Americans owned 80 percent of all automobiles in the world, and automobile culture began to dominate. By 1930, more than half of all families owned an automobile. Numerous automobile manufacturers entered the market, offering more choice than Ford’s Tin Lizzie, which came only in black, and Ford’s market share fell from 55 percent in 1921 to 31 percent in 1929. However, even more damaging to the industry was the Great Depression, during which automobile production fell by 75 percent. The automobile significantly changed the ways in which families spent time together. Americans went for drives after dinner, and weekend automobile trips became the norm. Some families began forgoing church on Sundays so that they could enjoy short road trips. The first drive-in restaurants were introduced, and they became gathering places for American teenagers and families. The first drive-in theater was erected in Camden, New Jersey, in 1933, setting off a trend that allowed adult family members to enjoy popular films while children slept or played on nearby playgrounds.
Teenagers in the family saw drive-in theaters as a way to escape family supervision, and they became popular dating destinations. Changing social mores after World War I had turned many cars into what became known as “bedrooms on wheels.” Family vacations in the 1930s were spent at resorts in the mountains or along the coast, beyond the reach of a subway or train. Most cars in the United States were driven by male family members or young women who were more adventurous than their mothers. Many males believed that women were incapable of operating vehicles and dealing with unexpected situations that arose while driving. However, some women were quick to understand the independence that the automobile offered. While automobile manufacturers and salesmen touted the car as a means of keeping families together, young adults often saw the automobile as a means of escaping family life. The demand for war materials during World War II signaled an end to the Great Depression, and in the postwar boom, automobiles came to be considered a family necessity. In the rapidly expanding suburbs, new homes included two-car garages. One car was necessary for the breadwinner (usually the husband) to travel to his job; the other was necessary for the wife to fulfill her shopping and child-rearing obligations.

The Postwar Years

The 1950s was the automobile’s Golden Age, when factories manufactured 9.3 million automobiles. These cars were larger, wider, heavier, and considerably more powerful than automobiles of earlier generations, and gas remained cheap. The station wagon became the vehicle of choice for many families, but no matter what the make or model, family vehicles were treated with affection and great care. In 1956, the Federal Aid Highway Act provided the funds for building the Dwight D. Eisenhower National System of Interstate and Defense Highways, which linked small towns and big cities from coast to coast in the 48 states by a network of wide, high-speed freeways that allowed drivers to travel for hundreds of miles without encountering obstacles such as traffic lights. While this precipitated the phenomenon of roadside America, made up of motels, fast-food restaurants, gas stations, tourist traps, and souvenir shops, it also sounded the death knell for many small towns that had originated as rail stops or as oases on the old rural and county highways. New suburbs sprang up along the interchanges of the new national highway system.
Interstate highways meant that vacation destinations became more diverse, and family visits to relatives living in other parts of the country became routine. Catering to this new mobility, automobile manufacturers began providing features that made cars more comfortable for the long haul. Cigarette lighters made smoking more convenient and provided a means of warming bottles or plugging in electric shavers. Arm rests, radios, air conditioning, power steering, power locks and windows—at first features on only the most luxurious models—soon became more common on down-market brands. Advertising such as Chevrolet’s motto “See the USA in your Chevrolet” served the dual purposes of attracting customers to particular manufacturers while encouraging the booming tourist industry. By the late 1960s, women entered the workplace in unprecedented numbers; some were married with children, some were single with children, and some chose to remain single and childless. As a result, females adopted a relationship with their cars independent of the family realm for the first time in history.

Automobiles in the Late 20th Century

In 1973, the Organization of Petroleum Exporting Countries (OPEC) enacted an embargo against the United States in response to American support for Israel during the Yom Kippur War. The average price of gasoline climbed from 35 cents a gallon in 1970 to 63 cents a gallon by decade’s end. This oil crisis, while technically short-lived, created a demand for more fuel-efficient cars designed to reduce dependency on foreign oil. Consumers also demanded that automobiles be more environmentally friendly, leading to the introduction of compact cars and hatchbacks as popular family vehicles. The U.S. automobile industry, once thought monolithic and impenetrable, soon lost significant market share to foreign carmakers, such as Honda and Toyota, which offered stylish and affordable alternatives to the heavy, gas-guzzling models produced stateside. The average price of gasoline ranged from 85 cents a gallon in 1980 to 90 cents a gallon in 1989. By 1988, there were 1.8 vehicles on the road for each U.S. household. Six million families owned three or more cars, and 2.7 million owned four or more. Vehicle ownership held steady at 132 million from 1990 to 2000.
Because many teenagers worked after school, parents considered an additional car a necessity, and by 1996, 60 percent of American families owned two or more cars. Average gas prices reached $1.00 per gallon for the first time around 1990.

Gender and Safety Issues

Much has been written about the role of the automobile and its influence on gender roles. For males, sports cars and pickup trucks have long been associated with masculinity. For females, the choice of automobile has tended to be more pragmatic. Automobile advertisers have reinforced stereotypes by marketing cars intended for the male market with beautiful women and speed, whereas advertisements intended for females point out a car’s ability to meet family needs. Despite the fact that most women drive and that women are less likely than men to be involved in automobile accidents, the stereotype of females as inattentive or overly cautious drivers has persisted throughout the history of the automobile. Throughout the automobile’s history, city planners have been forced to deal with traffic congestion and the need to widen streets and make constant road repairs. Automobile safety has been an issue since cars were invented, and the number of lives lost in accidents has increased significantly as the number of cars on the road has grown. According to the National Highway Traffic Safety Administration, there were 33,186 automobile-related deaths in 1950. This figure rose to an all-time high of 54,052 in 1973, before seatbelts became standard equipment. By 2011, automobile-related deaths had fallen below the 1950 level, despite millions more vehicles on the road, thanks to improved safety equipment and standards. Consumer advocate Ralph Nader led the push for improved safety in American cars beginning in the 1960s. While the safety of adult family members has been drastically improved, requirements for keeping small children safe vary from state to state. Infants and toddlers are required to be restrained in specially designed car carriers, but the issue becomes less clear when deciding whether children between the ages of 5 and 9 are safer in car seats or buckled in with seat belts. In 2000, more than half of the 1,283 accidental deaths of American children resulted from automobile accidents.
Contemporary Families and Automobiles

Soaring gasoline prices in the 21st century have created economic dilemmas for many families. In July 2012, Money magazine estimated that 39 percent of Americans were spending between $100 and $249 each month on gasoline; 34 percent were spending between $250 and $499; 12 percent were spending more than $500; and 15 percent were spending less than $100. Part of the issue is that sport utility vehicles (SUVs) are the vehicle of choice for most American families. SUVs are costly to fuel, averaging around 16 miles per gallon, compared to small cars that may get as much as 40 miles per gallon.

Elizabeth Rholetter Purdy
Independent Scholar

See Also: Middle-Class Families; Single-Parent Families; Stepfamilies; Suburban Families; Vacations.

Further Readings

Angulo-Vasquez, Vicki. “Booster Seats or Seat Belt? Motor Vehicle Injuries and Child-Restraint Laws in Preschool and Early-School Age Children.” Journal for Specialists in Pediatric Nursing, v.10/4 (2005).
Baum, Arthur W. “Adventures in the Family Car.” Saturday Evening Post, v.226/18 (1953).
Berger, Michael L. The Automobile in American History and Culture: A Reference Guide. Westport, CT: Greenwood Press, 2001.
Geels, Frank W., et al. Automobility in Transition? A Socio-Technical Analysis of Sustainable Transport. New York: Routledge, 2012.
Groening, Stephen. “Automobile Television, the Post-Nuclear Family, and SpongeBob SquarePants.” Visual Studies, v.26/2 (2011).
“How Much Does Your Family Spend on Gas Per Month?” Money, v.41/6 (2012).
Jakle, John A. and Keith A. Sculle. Motoring: The Highway Experience in America. Athens: University of Georgia Press, 2008.
Lewis, Lucinda. Roadside America: The Automobile and the American Dream. New York: Harry N. Abrams, 2000.
McCarthy, Tom. Auto Mania: Cars, Consumerism, and the Environment. New Haven, CT: Yale University Press, 2007.
Seiler, Cotton. Republic of Drivers: A Cultural History of Automobiling in America. Chicago: University of Chicago Press, 2008.

B

Baby Boom Generation

The “Me Generation” in the United States refers to the baby boomer generation and the self-involved characteristics associated with it: self-absorption and material greed. Baby boomers were born between 1946 and 1964, grew up in an economically prosperous period in history, and became accustomed to “getting what they wanted, when they wanted.” Writer Tom Wolfe dubbed this cohort the “Me Generation” in the 1970s. Wolfe wrote, “Whatever the Third Great Awakening amounts to, for better or for worse, will have to do with this unprecedented post–World War II American development, the luxury, enjoyed by so many millions of middling folk, of dwelling upon the self.” The baby boomers had already experienced complex and often unusual changes in their circumstances from birth to the mid-1970s. They were born and raised when the economy was growing at a rapid pace, and they were afforded the luxuries of that economy. They grew up during the 1960s, a time of political protests and radical cultural changes that included the sexual revolution and the introduction of Eastern religions. The civil rights movement gave these rebellious young people further reason to stand on one side or the other of the movement, rarely achieving a “middle of the road” mode of thought.

Idealistic politics were shattered with the assassinations of President John F. Kennedy and civil rights leader Martin Luther King, Jr. Boomers experienced the turmoil of the nation under the eyes of world leaders during the Watergate scandal and the resignation of Richard Nixon. The majority of veterans of the Vietnam War were from the baby boomer generation, including those who protested the war, along with the draft dodgers who fled to other countries; the latter would eventually be pardoned and permitted to return to the country and enjoy lucrative employment—oftentimes better than the jobs of veterans of the conflict. The Me Generation was criticized for its culture of narcissism. The reaction against self-fulfillment and the traits associated with it came from the older generation, who grew up with nothing during the Great Depression. This same older generation had learned to do without during World War II because of the rationing of such items as sugar, gasoline, and even nylon hosiery. Often, the women of that generation were forced to work in low-paying factory jobs to support the war effort. During these difficult times, that older generation learned the meaning of self-sacrifice, a hard work ethic, and the importance of saving money rather than spending whenever the mood struck. They learned to treasure family ties, traditional religious faiths, and other cultural traditions that they believed were the foundation of the country. This is the generation now referred to as the “Greatest Generation.”

The Me Generation took over with its nontraditionalism, health and exercise fads, New Age spirituality, discos and hot-tub parties, its views on the sexual revolution and promiscuity, and self-help books. The young women of that era began to realize the need to compete with men in the workplace and to express themselves as individuals rather than being the “little woman” with no say in her needs and endeavors. Revlon joined the cause with a “lifestyle” perfume called Charlie, and it soon became the world’s best-selling perfume. The focus on self-fulfillment was reflected in such films as An Unmarried Woman, Kramer vs. Kramer, and Private Benjamin. The me-first attitude was often satirized in television sitcoms such as All in the Family and Seinfeld, shows that lacked a developed plot or a lesson to serve as a takeaway for the audience; viewers were simply watching a show about nothing in particular. However, as with any societal matter, one must not generalize that all baby boomers were self-involved. The 1970s was a time of rising unemployment and eroding faith in social and political institutions. The leading edge of the baby boomers consisted of counterculture hippies and political activists during the 1960s and is often referred to as the “Now Generation” rather than the Me Generation. Men, often very young men, returned home from the Vietnam War physically and emotionally scarred—some more than many would like to admit. These veterans did not come home to ticker-tape parades, as did their predecessors from World War II. Instead, they came home to isolation, sneering, being spat upon, and unemployment because they were thought of as “baby killers” or “warmongers.” This raises the question: why the change from the 1960s to the 1970s?
Elements of frustration had entered society, and the radicalism of the 1960s gave way to frustration: frustration over the violence; frustration over the inflation that was occurring; and, most importantly, frustration over the lack of reforms that boomers had hoped to see in society during the 1960s. This frustration led the generation to look inward at itself rather than outward toward society as a whole. With advancements in technology, science, and especially medicine, each generation is living longer, working longer, and, for the most part, living smarter. According to the Huffington Post, each day, 10,000 Americans turn 65. The Post claims that “baby boomers are behind an unprecedented age quake that will shake up not only this country, but the rest of the developed world.” Today’s younger generation, now referred to as the “Me, Me, Me Generation,” is concerned that those moving into old age have not saved enough for retirement, will drain government benefits such as Medicare, and will cripple the nation’s health system with their ill health. Joel Stein of Time magazine attempted to make a connection between the two generations in his cover story titled “The ME ME ME Generation,” which stated on the cover, “Millennials are lazy, entitled narcissists who still live with their parents.” When comparisons are made between the Millennial Generation and the Me Generation, it is important to put those comparisons into a 21st-century context, considering the issues faced today that might drive a Millennial to live at home. As with the generation before and, more than likely, the generation to come, society tends to generalize and insist that each new generation is self-absorbed and lazy. Can that be? The Industrial Revolution made individuals more independent and powerful because they were able to start businesses, form organizations, and move to the city. Fear and misunderstanding grow with every passing generation; the older generation fears new technology and change. As Joe Coscarelli wrote in the Daily Intelligencer, each generation is remaking, remodeling, elevating, and polishing one’s very self (Me)! This had always been an aristocratic luxury … since only the very wealthiest classes had the free time and the surplus income. Now, nearly all of society is without total constraints on self-development for lack of time or money.

Christopher J. Kline
Westmoreland County Community College

See Also: Boomerang Generation; Caring for the Elderly; Childhood in America; Civil Rights Act of 1964; Civil Rights Movement; Evolutionary Families; Feminism; Gender Roles; Information Age; Medicare; Social History of American Families: 1941 to 1960; Social History of American Families: 1961 to 1980; Social History of American Families: 1981 to 2000; Social History of American Families: 2001 to the Present.

Further Readings

Coscarelli, J. “The Me Me Me Generation vs. The Me Decade.” http://nymag.com/daily/intelligencer/2013/05/me-me-me-generation-vs-the-me-decade.html (Accessed July 2013).
Fischer, Claude S. Made in America: A Social History of American Culture and Character. Chicago: University of Chicago Press, 2010.
Levine, Kenneth. The Me Generation … By Me (Growing Up in the 60s). New York: Kirkus Media, 2012.

Baby M

Baby M was a pseudonym for the child at the center of a famous custody dispute between a surrogate mother and the intended parents who commissioned the surrogacy. The Supreme Court of New Jersey decided the case in 1988, becoming the first American court to consider the validity of a surrogacy contract, ultimately ruling that the contract was unenforceable. The court thereby upheld a woman’s right to change her decision after she agreed, under a surrogacy contract, to be artificially inseminated with a man’s sperm and to surrender the baby to him and his wife. The intended parents in the Baby M case, William and Elizabeth Stern, married in July 1974, after meeting as doctoral students at the University of Michigan. Elizabeth Stern learned that she might have multiple sclerosis and feared that a pregnancy would pose a serious health risk. Deciding not to have biological children was particularly difficult for William Stern, the only member of his family to survive the Holocaust. After considering adoption, the Sterns arranged a surrogacy through the Infertility Center of New York. In February 1985, William Stern and Mary Beth Whitehead, the surrogate, entered into a surrogacy contract. The contract stipulated that through artificial insemination with Stern’s sperm, Whitehead would become pregnant, deliver the child, give the child to the Sterns, and terminate her maternal rights so that Elizabeth Stern could adopt the child. In exchange, William Stern would pay Whitehead $10,000. As soon as Whitehead delivered a baby girl on March 27, 1986, she indicated to the Sterns that she did not want to part with the baby. Nonetheless, she surrendered the child to the Sterns on March 30. The next day, Whitehead visited the Sterns and asked to have the child, even if only for a week. She appeared despondent, and the Sterns turned over the child, not wanting to risk the chance that she would commit suicide. Whitehead then refused to return the baby and fled to Florida. The Sterns filed a lawsuit for custody of the child and for enforcement of the surrogacy contract that would terminate Whitehead’s parental rights. The baby was forcibly removed from Whitehead’s custody four months later. The New Jersey trial court upheld the surrogacy contract as valid, ordering Whitehead’s parental rights to be terminated if doing so was in the best interests of the child. Whitehead appealed and was granted continued visitation pending the appeal. The New Jersey Supreme Court, granting direct certification, held that the surrogacy contract was invalid on two grounds. First, the court found a direct conflict with existing New Jersey statutes. Second, the court found a conflict with New Jersey’s public policy, as expressed in its statutory and decisional law. On the first point regarding statutory conflict, the New Jersey Supreme Court noted several inconsistencies between the surrogacy contract and New Jersey statutes. First, a surrogacy contract conflicted with the New Jersey laws prohibiting the use of money in connection with adoptions. Second, there was a conflict with the New Jersey laws requiring proof of parental unfitness or abandonment before a termination of parental rights was ordered or before an adoption was granted. Third, there was a conflict with the New Jersey laws that made consent to adoption revocable in private placement adoptions.
On the second point regarding public policy conflict, the New Jersey Supreme Court found that the surrogacy contract conflicted with the state’s public policy in several ways. For example, New Jersey gave equal rights to both natural parents concerning their child, as opposed to a surrogacy contract that gave the father rights at the expense of the mother. Furthermore, the surrogacy contract disregarded


the best interests of the child in contravention of public policy. The court also noted that the state’s policies on a mother’s consent to the surrender of a child differed from the procedures undertaken by the Sterns and Whitehead. Finally, in striking down the surrogacy contract, the court distinguished between an adoption and a surrogacy. The court concluded that in a civilized society, there are some things that money cannot buy. Having determined that the surrogacy contract was illegal and unenforceable in New Jersey, the court decided the custody of Baby M according to the traditional best-interests-of-the-child standard. Under this inquiry, the court awarded custody to the intended parents—the Sterns—because they had a strong relationship with the child, their finances were better, and they had demonstrated a desire and ability to nurture and protect the child while also encouraging her independence. On the other hand, the surrogate mother was mentally, emotionally, and financially less stable. Nonetheless, she received visitation rights to Baby M. Some states subsequently followed New Jersey’s lead, determining that surrogacy contracts are unenforceable. Other states, however, permit surrogacy, but not the payment of surrogates. Yet other states allow both surrogacy and payment for surrogacy. Foreign countries also differ in their approaches to the regulation of surrogacy. These differences in the approaches to surrogacy have resulted in fertility tourism, where couples looking to commission a surrogacy travel to jurisdictions with favorable surrogacy laws. The ultimate public policy decision for American state courts and legislatures on this issue is whether to declare surrogacy contracts enforceable, void and unenforceable, or enforceable only if noncommercial. As the Baby M case illustrated, there are complicated considerations involved in determining whether a woman should be able to serve as a commercial surrogate.
Margaret Ryznar
Indiana University

See Also: Adoption, Open; Foster Families; Single-Parent Families; Surrogacy.

Further Readings
Matter of Baby M, 537 A.2d 1227 (N.J. 1988).

Jones, Rachel K. and April Brayfield. “Life’s Greatest Joy? European Attitudes Toward the Centrality of Children.” Social Forces, v.75 (1997). Ryznar, Margaret. “International Commercial Surrogacy and Its Parties.” John Marshall Law Review, v.43/4 (2010).

Baby Showers

A baby shower is a celebration in honor of a pregnant woman and the impending birth of her child, which is hosted by her family or friends. The celebration serves as a rite of passage into motherhood, and the woman is “showered” with gifts that will help her care for her baby. Infant care products and layette items are common gifts, and guests often provide the pregnant honoree with advice about the birth and childrearing process. Traditionally, baby showers are attended by women, although modern variations include cogendered couples showers at which both of the parents are honored. Until the 20th century, most traditions dedicated to childbirth centered on a naming ceremony or religious baptism. In ancient Egypt and Greece, women were isolated from society with their newborns for several days after giving birth, after which the mothers would be reintegrated into society and honored with a celebratory meal. Although many cultures feature traditions of gift giving to expectant mothers, celebrations were typically not held until after the birth of the child. Baby showers first became popular in the mid-20th century during the post–World War II baby boom. The conventional baby shower features baby-themed decorations, party games, and a gift-opening period. The expectant mother’s family or friends host the shower, usually during the third trimester of her pregnancy, when the possibility of miscarriage is greatly reduced. At the event, the mother is assigned a distinctive place to sit as she opens gifts, and someone records the name of each gift giver to facilitate the writing of thank-you cards later. Gifts are displayed or passed around for guests to view. Games often include physical tasks, such as diapering dolls, or contain a guessing element, such as predicting the future newborn’s gender, length, weight, and birth date.



In addition to celebrating the expectant mother’s new role, baby showers were originally initiated to assist parents with the financial burden of bringing a baby into the home. The expenses associated with starting a family are offset by gifts of furniture, clothes, and infant care items. The baby shower signals a woman’s entrance into a new identity and mode of life, and the party functions to ease the transition and welcome her into a community of mothers. The financial strain of parenthood is lessened by gifts, and the anxiety of becoming a mother is, ideally, reduced by the advice and encouragement provided by other women. Generally, baby showers are only given to first-time mothers, who will most likely reuse many of the items if they have additional children later on. While the traditional baby shower paradigm remains prevalent in the 21st century, alternate shower styles have grown in popularity. Nontraditional shower options tend to forgo overly feminine decorations, and often do not feature interactive party games. Couples showers include male guests, often feature alcohol, and are generally regarded as more casual. Feminist showers focus on the mother-to-be as an independent woman, and tend to include elements of wisdom sharing or gift giving that explore the complexities inherent in becoming a mother. Office showers are held in the workplace, are hosted by colleagues, and tend to be low-key and shorter in duration than traditional showers. Eco-showers feature environmentally friendly décor and food, and guests are encouraged to bring preowned or environmentally friendly gifts. Adoption and surrogacy showers are also becoming more common, and do not feature pregnancy-related activities or games because the expectant mother is not pregnant. Technology has allowed baby showers to become increasingly personalized.
Electronic registry systems allow expectant parents to preselect desired gifts, and customer scanning guns allow shoppers to indicate which presents they have purchased to avoid duplication of gifts at the shower. Ultrasound technology has also altered both the gift-giving and the guessing-game aspects of baby showers, if the parents-to-be have revealed the gender or the name of the baby in advance. When expectant mothers plan for induction or a Cesarean section delivery, even the date of the baby’s birth can be predetermined.


A three-tier “cake” made out of rolled diapers and diapering accessories is a common gift at a baby shower. Such gifts are usually pink or blue, depending on the gender of the child.

Research
Research on baby showers includes investigations of their consumer-driven nature, their reinforcement of traditional gender roles, and their function as a rite-of-passage ceremony. Baby showers have also been studied for their social significance and the unique form of female bonding that they enable. Physicians and public health officials have analyzed the effectiveness of baby showers in teaching women about baby safety and accident prevention. Some expectant mothers feel ambivalent about their showers, citing the excess and type of infant care advice that they receive at the shower, and the tensions between the professional and personal aspects of life that arise during pregnancy. Generally, baby showers are


regarded as a time of generosity between and among a community of women.

Deborah M. Sims
University of Southern California

See Also: Birthday Parties; Engagement Parties; Wedding Showers.

Further Readings
Fischer, Eileen and Brenda Gainer. “Baby Showers: A Rite of Passage in Transition.” Advances in Consumer Research, v.20 (1993).
Nelson, Fiona. “Stories, Legends and Ordeals: The Discursive Journey Into the Culture of Motherhood.” Organization Development Journal, v.21/4 (2003).

Bandura, Albert

Albert Bandura is a psychology and social science professor. Born December 4, 1925, in Alberta, Canada, he was one of six children born to Ukrainian and Polish parents, and was raised in a small town near Edmonton. Bandura attributes his emphasis on self-directedness in education to the limited educational resources that he experienced while growing up. Many believe that his early experiences contributed to his emphasis on human agency. He graduated in 1949 with a bachelor’s degree from the University of British Columbia. During his time as an undergraduate, he became fascinated by psychology; he obtained his degree in three years, and was awarded the Bolocan Award in psychology. He earned his master’s degree from the University of Iowa in 1951, and his Ph.D. from the same institution the following year. During his time at Iowa, Bandura began his psychological research career using replicable experimental designs in laboratories in which variables could be controlled. Bandura became a faculty member at Stanford University in 1953, where he remained for his entire career. Bandura served as the president of the American Psychological Association (APA) in 1974, and holds a number of honorary degrees from universities around the world. In 1980, he became a fellow of the American Academy of Arts and Sciences. Bandura married Virginia Varns in 1952, and remained with her until her death in 2011. They raised two daughters.

Influence and Research Focus
Considered one of the most influential psychologists of the 20th century, Bandura is a pioneering researcher in the areas of social cognitive theory, personality psychology, cognitive psychology, therapy, social learning theory, and self-efficacy. Associated with the cognitive revolution that began in the 1960s and reshaped the field of psychology, Bandura was ranked in a 2002 survey among the most frequently cited psychologists of all time. Bandura’s social-cognitive approach suggests that biological, cognitive, and environmental factors all interact. He advocates a view of behavior as organized and influenced by the social systems that people create. According to Bandura, one of the most influential social systems impacting human development and human potential is the family. He emphasizes the family system because it affects virtually every aspect of personal development and well-being during the formative periods of life. Bandura has argued that humans have changed little genetically over the past millennium, but family practices have dramatically changed. Furthermore, families are important social systems, according to Bandura, because they provide social modeling and other forms of social guidance to pass on accumulated knowledge and effective practices to subsequent generations. Social systems and families are also important for Bandura because many of the goals that individuals seek are only achievable by working together through the type of interdependent effort consistent with a family structure. Accordingly, human beings must pool their knowledge, skills, and resources, and act with collective agency to shape their future. According to Bandura, families constitute a network of interdependencies consistent with the form of collective agency necessary for individuals to achieve their goals.

Aggression and the Bobo Doll Experiment

Bandura’s early interest was aggression in children; his first book was Adolescent Aggression, published in 1959. In 1973, he wrote Aggression: A Social Learning Analysis. These early studies included a focus on self-regulation and self-reflection; later studies focused on self-efficacy, which is defined



as an individual’s belief in being able to reach one’s goals. These studies eventually led to his famous bobo doll experiments, conducted in 1961 and 1963. (A bobo doll is an inflatable toy with a weighted bottom, designed with a face that looks like a clown. If struck, the doll topples over and immediately returns to an upright position.) The study observed children’s behavior after an adult modeled aggression toward a bobo doll. Through this experiment, Bandura demonstrated that human beings learn through observation, as opposed to simply responding to a system of rewards and punishments, as B. F. Skinner’s research suggested. Up to that point, Skinner’s theory of behaviorism had dominated the psychological interpretation of human behavior for several years. The experiment also initiated the idea of social learning theory and showed how children might be influenced by violent media images. Bandura’s book Social Foundations of Thought and Action: A Social Cognitive Theory was published in 1986. This book is considered a landmark in psychology, and expanded Bandura’s social learning theory into a comprehensive analysis of the role of cognitive and self-reflective processes in motivation. His book Self-Efficacy: The Exercise of Control, published in 1997, focuses on scientific psychology. It is widely cited in psychology, sociology, medicine, and management.

Social Learning Theory
Bandura’s social learning theory emphasizes modeling, imitation, and the significance of observational learning. His theory is based on the idea that without the ability to learn from the actions of others, learning would be laborious and hazardous. Bandura’s work broke with behaviorism. For example, in the bobo doll experiment, children beat up the doll without the encouragement or incentive that behaviorism would suggest is necessary. Bandura nevertheless utilizes behavioral terminology in his work, such as conditioning and reinforcement. The theory of social learning is relevant to criminology.
In the 21st century, social learning theorists still link aggressive behavior with learned values and criminality. The theory of social learning is also relevant to the study of family because the family structure constitutes the primary social system in which modeling, imitation, and observational learning occur at the most significant developmental phases.


Self-Efficacy
Bandura developed therapies around his conception of self-efficacy. This approach involves personal mastery and giving people a sense of control in situations that generally cause feelings of hopelessness. People are taught to mentally rehearse their potential to succeed in a task; research shows that this practice correlates with higher rates of success than if no mental rehearsal takes place. This type of therapy is utilized in a range of situations, from people suffering from phobias to those recovering from heart attacks. Bandura stresses the need to self-regulate one’s behavioral response to the situation. Self-efficacy therapy is also used to help people break bad habits. For example, self-efficacy therapy for smokers helps people understand and resist behavioral antecedents, or triggers, that generally lead people to smoke, such as frustration, eating, and drinking alcohol. Research has shown a strong correlation between high self-efficacy and the ability to abstain from smoking. Bandura has also examined collective efficacy and families. Researching hundreds of families, he argued that collective forms of efficacy within the interdependent family system are structurally related to the quality of family functioning and satisfaction with family life. Bandura discovered, for example, that a high sense of collective efficacy correlated with open family communication and openness by adolescents regarding their activities outside the home. In addition, parents with a high level of efficacy can positively impact their child’s development and interactions with social institutions during the formative years. Efficacy also increased the ability of parents to resolve marital relationship problems with their spouses. Bandura found that spouses with low self-efficacy tended to avoid problem-solving strategies. He also found that the perceived self-efficacy of wives in dual-career marriages affected their physical health and emotional lives.
Bandura has emphasized the significance of efficacy for fathers, noting that a low sense of efficacy in relation to economic hardships tended to impair the family climate and contributed to depression in adolescent offspring. The opposite occurred in fathers with a high sense of self-efficacy. Bandura argued, therefore, that fathers play a significant role in family life.

David J. Roof
Ball State University


See Also: Freud, Sigmund; Psychoanalytic Theories; Skinner, B. F.

Further Readings
Bandura, A. Self-Efficacy: The Exercise of Control. New York: W. H. Freeman, 1997.
Bandura, A. Self-Efficacy in Changing Societies. New York: Cambridge University Press, 1995.
Bandura, A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall, 1986.
Bandura, A. and R. H. Walters. Social Learning and Personality Development. New York: Holt, 1963.
Evans, R. I. and A. Bandura. Albert Bandura: The Man and His Ideas—A Dialogue. New York: Praeger, 1989.

Baptism

Baptism is a Christian rite, an initiation into a local church, denomination, or the larger Christian body. Most Christians believe that baptism was mandated by the Apostle Paul and modeled by Jesus, but beyond this, the ritual varies across Christian traditions in its manner, function, and meaning. In its most basic form, baptism is spiritually meaningful contact between a person entering into the Christian faith and water. Worldwide, most Christian churches baptize infants in an act also called christening or pedobaptism (“child baptism”). Roman Catholic, Orthodox, Lutheran, Episcopal, and mainline Protestant traditions practice infant baptism, though these traditions also baptize adult converts. Other traditions, such as the Evangelical Covenant Church, permit but do not require infant baptism. Still others, including all Baptists, Anabaptist groups such as Mennonites and Amish, various Pentecostal sects, and Latter-day Saints, prohibit infant baptism.

Pedobaptism
Pedobaptism often occurs during the typical worship service. Parents, friends, and relatives may attend the baptism and a celebration of it, often in the form of a reception afterward. Godparents, who may be family members, promise to offer spiritual guidance to the child throughout his or her life. These godparents often give extra attention to

the other rituals of spiritual life, such as buying the child’s first Bible. In this way, baptism serves to connect members of a family and fictive kin (godparents) together. Infant baptism often involves either sprinkling water (aspersion) or pouring water over the child’s head (affusion). In the Eastern Orthodox and Eastern Catholic traditions, though, infants may be fully submerged in water. Infants may wear white, lacy gowns, often a family heirloom that may be preserved as a family keepsake and displayed with other memorabilia from childhood, such as a baby’s first shoes or a lock of hair. Such artifacts connect families across generations because baptism may promote caring for the material culture of previous generations. Theologically, infant baptism may represent a child’s entrance into the life of the church, as well as parents’ and a community’s commitment to the spiritual care of that child. The parents and community may be asked by the officiant to agree to care for the spiritual needs of the child. Baptism may also be seen as a ritual that removes original sin, the sin inherent in humanity because of the fall of Adam and Eve and their subsequent banishment from paradise. Thus, baptism rituals often include words such as “The Servant of God [child’s name] is baptized in the name of the Father, and of the Son, and of the Holy Spirit,” indicating the child’s new spiritual status. In some traditions, baptism is viewed as a requirement for salvation and thus entrance into heaven, so children who die before baptism are presumed to be sent to hell or some alternative. However, in 2007, Pope Benedict XVI, the leader of the Roman Catholic Church at that time, announced “reasons for prayerful hope” that unbaptized infants are bound for heaven. Baptism may be followed years later by confirmation, a ceremony wherein youth publicly affirm their baptism. 
Credobaptism
In contrast to pedobaptism, advocates of credobaptism, or “believer’s baptism,” believe that only those who have reached an age of spiritual maturity—generally over age 8—are eligible for baptism. Indeed, this disagreement fueled the Radical Reformation, the split of Anabaptists from both Catholics and Protestants in the 1500s. Such groups may still permit infant and child dedications, in which parents ask the local church to spiritually support their family.



Adult baptizers may permit aspersion; affusion; submersion; or immersion, which itself may describe any form of dipping the body in water. In immersion, the believer may be partially immersed, standing or kneeling in water, with more water poured over the head. Different traditions may consider any one or only one of these forms of baptism valid. Additionally, traditions may mandate that baptism occur in running water, rather than in a pool, pond, or baptistery (a font located in the sanctuary of many churches). Furthermore, some traditions baptize in the name of Jesus only, while others name all three persons of the Holy Trinity. Baptism may occur after a candidate has undergone study and interrogation by the local community about his or her beliefs, or it may happen immediately following a candidate’s request, without fuller investigation by the clergy or community. Baptism may require a commitment to the local church, but not all traditions demand this. For traditions that recognize only adult baptism, the meaning of this ritual is likewise disputed. For many, it marks entrance into the church as a local body, granting the individual rights and responsibilities in congregational life, such as taking communion. It may also mean that the person baptized belongs to the wider denomination, or is an authentic member of a worldwide Christianity. It may be a supernatural experience, or it may be viewed as an outward symbol of a preceding inward spiritual experience. In some traditions, baptism is a singular event, so that if a baptized person converts to a different Christian tradition, the baptism is still valid and no additional baptism is needed. For others, such as Anabaptists (meaning “re-baptizer”), infant baptisms deny individual religious conscience, and thus people baptized before they were spiritually mature must be rebaptized.
For others, baptism is such a precise ritual that any previous baptism that did not adhere to the tradition’s standards is invalid, so converts, even if they were baptized in a very similar tradition, must be rebaptized. While baptism is a ritual event, meaning-making activity, and family tradition that may draw families closer, it can also create or reveal rifts in families, particularly as families become more diverse. For example, as the rate of interfaith marriage rises, parents may hesitate to baptize infants because of pressure from a spouse who does not


support baptism, out of respect for extended family, or out of the belief that a child of interfaith parents should make independent decisions about religious identity at a later life stage. Additionally, changes in religion across generations that mirror the broader movement away from the infant-baptizing mainline Protestant religions toward irreligion, nonbaptizing traditions, or adult-baptizing traditions—for example, Episcopal grandparents of a Mormon child—may mean that failure to baptize a child according to family tradition causes strain.

Trends
Baptism rates are falling across American Christianity, with exceptions among Seventh-day Adventists, some Pentecostal groups, and Latter-day Saints. The reasons are multiple: a declining birth rate that means fewer children in (and outside of) churches in general; an increase in interfaith marriages; an increase in single motherhood, which women may anticipate will stigmatize them at church and thus prevents their attendance; a rise in nondenominational churches that stress internal experiences of spirituality over ritual; and a general decline in religious adherence more broadly, especially among those in their childbearing years. Significantly more people born before 1980 than after 1980 identify as people of faith or are active in religious life, and young people today are less likely than young people of the past to be active in religious life, suggesting that overall baptism rates will continue to fall.

Rebecca Barrett-Fox
Arkansas State University

See Also: Catholicism; Christianity; Evangelicals; Protestants; Rituals.

Further Readings
Kobler, Kathie, Rana Limbo, and Karen Kavanaugh. “Meaningful Moments: The Use of Ritual in Perinatal and Pediatric Death.” American Journal of Maternal/Child Nursing, v.32 (2007).
Spinks, Brian D. Reformation and Modern Rituals and Theologies of Baptism: From Luther to Contemporary Practices. Burlington, VT: Ashgate, 2006.
Wills, David W.
Christianity in the United States: A Historical Survey and Interpretation. Notre Dame, IN: University of Notre Dame Press, 2005.


Bar Mitzvahs and Bat Mitzvahs

A bar mitzvah or bat mitzvah, meaning “son/daughter of the commandment” in Aramaic, refers to a series of Jewish rituals performed by adolescent males at age 13 and females at age 12. The ceremony of becoming bar mitzvah or bat mitzvah is not required by Jewish law but is held sacred. The ceremony consists of, among other things, leading part of a worship service and reading the sacred text in front of the assembly. This paramount event in the life of a Jewish youth has evolved over the centuries, but the origins of the ritual date back as early as the 1st century c.e. Many Jewish parents of past and present view this rite of passage as an important step in the life of their adolescent progeny. A male, per Jewish law, is not required to observe mitzvoth (“commandments”) until he has reached the age of maturity. In rabbinic literature of late antiquity, maturity may not have been determined by age, but rather by the first sign of physical maturity, namely, the appearance of pubic hair. Eventually, 13 was designated as the age when each boy was initiated into adulthood, and was required to observe mitzvoth. A 2nd-century c.e. authoritative text states that “at five, one should study Scripture; at ten, one should study Mishnah [rabbinic text]; at thirteen, one is ready to observe mitzvoth” (Mishnah Avot, 5:24). Another early rabbinic text explains that at age 13, the boy’s father would “bring him in front of each elder to be blessed, strengthened, and to pray that he might be privileged to study Torah and perform good deeds” (Talmud, Sofrim 18:5). Near the age of 13, the well-known 1st-century historian Josephus was recognized in Jerusalem by the elders for his accurate understanding of Jewish law.

Process
The process of becoming bar mitzvah, including the ceremony and festivities, has evolved through the centuries.
As it is performed today, this rite of passage is imbued with both symbolic meaning and sentimentality, particularly because it is an amalgamation of authoritative rabbinic precepts and centuries-old traditions. By the 20th century, the process of becoming bar mitzvah had grown to include many procedures that vary across Jewish

subgroups. These procedures include (1) wearing tefillin (“phylacteries”) for the first time (tefillin are small boxes containing scriptural passages that are bound to the head and arm during worship); (2) receiving an aliyah during a synagogue service (i.e., “ascending” to the podium to make blessings over the weekly Torah portion); (3) the father publicly declaring that the bar mitzvah is thereafter responsible for his actions (i.e., “Blessed be He Who has exempted me from the punishment of the child”); and (4) reading all or part of the Torah portion and/or reading an additional portion (haftarah) from one of the books of the prophets. Jewish females 12 years of age are also initiated into adulthood with a ceremony that, among most Jewish groups today, resembles the process of becoming bar mitzvah, although this was not always the case. In fact, not until 1922 was a ceremony of becoming bat mitzvah performed in the United States. On this occasion, Rabbi Mordecai Kaplan, founder of the Reconstructionist Movement, permitted his daughter to recite a blessing and read a selected text from the Torah in front of the assembly. Today, nearly every Reconstructionist and Reform congregation, and a majority of Conservative congregations, celebrate the process of becoming bat mitzvah in a similar manner as the bar mitzvah. The Orthodox community celebrates a bat mitzvah in a variety of ways that may or may not mimic the process of becoming bar mitzvah; nevertheless, high value is placed on both rites of passage. A candidate for becoming bar mitzvah or bat mitzvah begins preparations a few months to a year prior to the ceremony date. The rabbi often assigns a teacher to the candidate and regular sessions commence, usually once per week. In the early stages of instruction, the candidate obtains greater knowledge of several key rituals. 
As the special day approaches, classes intensify, and the candidate begins to learn how to read from the Torah, which also includes lessons by the synagogue cantor on Torah cantillations. Rehearsals may also commence a week or two prior to the ceremony. Perhaps most important, the candidate, if a male, receives instruction on tefillin. As mentioned previously, tefillin are black boxes containing four biblical passages that are donned daily during prayer. The candidate is instructed on how to wear, remove, and store tefillin properly. The wearing of




tefillin is of sufficient significance that the family often celebrates this aspect of becoming bar mitzvah on par with a major graduation. For example, parents often hire a photographer to take pictures of the candidate wearing tefillin. The parents may also buy fine leather tefillin and present it to the candidate as a gift. The actual day of the ceremony is a special event for the entire family and the Jewish community. Several people are involved in the process, including the rabbi, instructors, photographers, family members who are invited to read portions of the Torah during the service, those involved in physical preparation of food and facilities, and individuals who give monetary contributions and gifts to the synagogue or the family. The experience of becoming bar mitzvah and bat mitzvah need not be the pinnacle of Jewish learning and observance. However, a growing number of Jewish families view this day as a pinnacle. In fact, the American Jewish community has seen a dramatic increase in Jewish school dropout rates after the age of 13. On the other hand, to certain segments of the Jewish community, the process of becoming bar mitzvah and bat mitzvah will always retain momentous meaning and joy.

Trevan G. Hatch
Loren D. Marks
Louisiana State University

See Also: Judaism and Orthodox Judaism; Passover; Rituals.

Further Readings
Fishbane, S. “Contemporary Bar Mitzvah Rituals in Modern Orthodoxy.” In Ritual and Ethnic Identity: A Comparative Study of the Social Meaning of Liturgical Ritual in Synagogues, J. N. Lightstone and F. B. Bird, eds. Ontario, Canada: Wilfrid Laurier University Press, 1995.
Marcus, I. G. The Jewish Life Cycle: Rites of Passage From Biblical to Modern Times. Seattle: University of Washington Press, 2004.
Salkin, J. K. “Transforming Bar/Bat Mitzvah: The Role of Family and Community.” In Nurturing Child and Adolescent Spirituality: Perspectives From the World’s Religious Traditions, K. M. Yust, A. N. Johnson, S. E. Sasso, and E. C. Roehlkepartain, eds. Lanham, MD: Rowman & Littlefield, 2006.


Barbie Dolls

Introduced in 1959 by Mattel, Barbie was the first doll marketed to American girls that looked like an adult. She had a mature figure, painted eyes and lips, and a demure, sideways glance. Barbie was different from anything else on the market at that time, which was dominated by baby dolls and porcelain adult dolls. Ruth Handler, wife of one of Mattel's founders, designed Barbie in response to the need she saw for young girls to have dolls with which they could practice growing up.

Apart from the doll itself, Barbie also represented another innovation in the toy industry: the use of market research. No toy company before had used consumer input to help design its products. Mattel tailored Barbie's image and used targeted ads to appeal to children and adults based on the results of its market research. In the decades since her debut, Barbie has reflected changing attitudes about women in American culture while also becoming a symbol and touchstone for debates about gender socialization, the sexualization of children, and girls' self-esteem.

Barbie's Production
Barbie was created by the Mattel Corporation, which was started by and named after Harold Matson and Elliot Handler in 1945. Ruth, Elliot's wife, was responsible for the business and marketing strategies of the company, while Elliot designed and engineered the toys. In 1955, Mattel had its first major success with the Burp Gun, the first toy advertised on television. On March 9, 1959, Ruth introduced Barbie—the first toy she designed—at the American Toy Fair. Ruth was inspired by her daughter Barbara's interest in paper dolls, and recognized a gap in the market. While there were some fashion dolls available at the time, such as Miss Ginger, produced by Revlon, these more closely resembled girls than women. Their bodies were flat and soft, and their facial features were rounded and childlike. Handler wanted to create a doll with adult features for children.
On a trip through Switzerland, Handler saw a Bild Lili doll in a shop window. Bild Lili was based on a comic of the same name, and the doll was produced as a novelty toy for adult men. Lili was a full-chested adult doll that was sold with different outfits; however, her accessories and outfits were not sold separately. The doll served as the inspiration for the first Barbie, which looked strikingly similar to Lili but was sold with separate accessories. Handler brought the Lili doll back to the United States to be used as a model for her fashion doll.

Initially, manufacturing the doll proved problematic because Handler wanted it to be made of plastic and to exhibit the same level of detail as Lili. The designers argued that technological limitations made production difficult, but Handler believed that the designers were simply uncomfortable with creating a doll that looked like an adult woman. Handler also wanted to create separate outfits with real snaps and zippers for the doll, and hired Charlotte Buettenback to design the clothing. The doll's body was then modified so that the clothing would not look too bulky. These modifications later came under attack for encouraging girls to idealize unreasonable physical dimensions.

Barbie was unveiled at the American Toy Fair on March 9, 1959, and was rejected by all of the major retailers. Undeterred, Ruth Handler hired Ernest Dichter, a Viennese psychologist who analyzed consumer buying patterns and motivations, to help design an effective ad campaign. He recommended that the doll be sold as a "teenage fashion model," which would encourage vicarious fantasy play. He also suggested including catalogs of clothing with the dolls and avoiding creating a specific personality for the doll, allowing for more imaginative play. While Barbie's sales were initially poor, 350,000 dolls were eventually sold in 1959, exhausting Mattel's supply. Mattel added Ken in 1961, and then other friends and family to the line. However, to encourage open-ended fantasy play, Barbie never had a family of her own. In the 1970s and 1980s, Barbie's design continued to change; her gaze shifted from the side to straightforward in 1971. Barbie continued to be a top seller for Mattel, earning the company over $1 billion in 1993 alone, despite criticism leveled at the toy.

Barbie has been a part of the fashion doll market for more than 50 years but has been the subject of numerous controversies surrounding body image. The Campaign for a Commercial-Free Childhood called for the Girl Scouts to end their partnership with Barbie after Mattel ran an ad campaign in the 2014 Sports Illustrated swimsuit edition showing the doll posing in sexy swimsuits.

Criticism of Barbie
As the feminist movement grew in the mid- to late 1960s, Barbie came under increasing scrutiny. In 1963, a New York Times article reflected that
many girls saw Barbie as a grown-up version of themselves, which concerned feminist activists and parental groups. They believed that Barbie's unrealistic body type would cause girls to develop eating disorders, and that Barbie's ever-expanding line of clothing, shoes, and accessories fostered a consumer culture of materialism. They also argued that her outfits, adult features, and high-heeled shoes encouraged the premature sexualization of girls.

In addition, parents and others argued that Barbie reinforced gendered stereotypes that would hold girls back. For example, in the 1970s, Baby-Sitter Barbie came with three books, including How to Lose Weight, which included only one piece of advice: "Don't Eat." In 1991, "Teen Talk Barbie" uttered 270 different phrases, one of which was "math class is tough," reinforcing stereotypes about girls and math. Debates about Barbie's influence as a role model continue to the present.

Alexandra Carter
University of California, Los Angeles

See Also: Advertising and Commercials, Families in; Feminism; Games and Play; Toys.

Further Readings
Formanek-Brunell, Miriam. Made to Play House: Dolls and the Commercialization of American Girlhood. Baltimore, MD: Johns Hopkins University Press, 1998.
Gerber, Robin. Barbie and Ruth: The Story of the World's Most Famous Doll and the Woman Who Created Her. New York: HarperCollins, 2010.
Rogers, Mary F. Barbie Culture. Thousand Oaks, CA: Sage, 1999.

“Best Interests of the Child” Doctrine Divorces in the United States that involve minor children can be a tedious, costly, and jurisdictionally dependent process. The process varies by state, as do the rules governing child custody decisions. Although there are commonalities across states around child custody decisions, one shared objective is the use of the best interests of the

107

child doctrine. This standard guides statutory law regarding child custody decisions in every state. Although the application of the best interests of the child standard is most prevalent in child custody matters, it is also used in other situations. Examples of these situations include the termination of parental rights, adoption, and foster care placements. Over time, these situations have become proportionately outweighed by divorce proceedings in which parenting plans or custody arrangements are made to assure that this legal standard is met. When divorce occurs and parents are unable to amicably decide child custody arrangements, a court determination of the child’s best interests becomes necessary. Historical Development Historically, the best interests of the child standard was not the common standard applied when deciding where a child should reside. Before 1815, young children were placed in the custody of their mother because of their “tender” age in a policy known as the “tender years doctrine.” Thus, for years, decisions based on consideration of a child’s young age were integrated into American case law as a way to better serve children’s developmental interests against the typical consideration of children as parental property, and thus against paternal rights. However, in the Pennsylvania case of Commonwealth v. Addicks (1815), a father’s request for custody was based on the moral interests of the children due to the fact that the wife was an adulterer, which was illegal at the time. The court revised its previous decision, based on the tender years doctrine, which awarded custody to the mother, and granted custody of the children to the father. The court acknowledged the need for strong moral guidance, especially for the eldest child, and that separating the children would not benefit them. 
Although the specific circumstances and decision in this case do not directly conform to a typical application of the best interests of the child standard today, the guiding principles are the same as in many current state statutes. All states adhere to a best interests of the child standard, although the application can vary greatly. Attempts at uniformly applying a standard across states, through legislation such as the Uniform Marriage and Divorce Act (1970), have generally been unsuccessful. There are some common considerations in applying the best interests standard, which include the safety of the child, the continuity of child residence, the type of care that the child may expect to receive in the home, and the timeliness of the court decision. Some states also consider the nature of family bonds and relationships, parent and child mental and physical health, and child preference. Many states use an implicit style of application, consideration, or weighting of conditions, rather than an explicitly stated set of criteria for evaluation. States that do use an explicit set of criteria in best interests of the child determinations are guided by statutory requirements of factors to consider. Many other states take a goal-directed approach with their statutory guidelines.

Health, Safety, and Developmental Needs of Children
Different courts apply different standards in evaluating the best interests of the child; however, all state statutes governing these decisions consider the health, safety, and/or developmental needs of a child in assessing potential options. Courts often make decisions to remove children from a home due to neglect; exposure to domestic violence; and physical, emotional, or sexual abuse. The scope of what can be considered is not limited to such acts of violence, but also includes other circumstances that may be detrimental to the child's development (e.g., substance abuse and unhealthy or licentious sexual behavior). If needed, courts have the authority to appoint a caseworker or guardian ad litem to evaluate the child's situation and make suggestions or act as a voice for the interests of the child. Many jurisdictions refer divorcing parents to designated parent education programs that emphasize creating a better post-divorce environment for children. There is some evidence that such programs improve children's adjustment.
Transitions and Maintaining Familial Relationships
One of the primary goals of adhering to the best interests standard is to limit children's transitions. In cases of divorce, parents who take primary physical custody of the children immediately following the initial separation are generally the ones who maintain custody when residential arrangements are legally determined. Considering the best interests of the child, courts generally lean toward maintaining continuity of residency to prevent unnecessary or excessive changes in the child's life. Research shows that discontinuity can be stressful for children, disrupting their routines and established relationships. As such, the timeliness of placement decisions is of concern to the courts. For children for whom the state must assume responsibility in loco parentis (in the place of a parent), such as in cases requiring a termination of parental rights, limiting transitions can be especially difficult and taxing on children's well-being. However, the incidence of this is infrequent in the divorce process.

Limiting transitions operates in concert with the maintenance of familial relationships. By maintaining these relationships, continuity remains, and bonds between family members are facilitated. For this reason, special consideration is generally given to grandparents who petition for access to their grandchildren. In some circumstances, visitation by brothers and sisters not living together is also considered. Even when out-of-home custody arrangements are required, the preference is that children are placed with other family members or in situations that most closely resemble such settings. However, the preference for living situations that involve family members is not without limits, because stepparents are commonly afforded no legal standing in custody decisions during divorce proceedings.

Preference
In some states, children's preferences are considered. In these situations, the child is expected to be of a "reasonable age" to make his or her preference known (the standard for this determination varies by state), so preference can be included in applying the best interests of the child standard. Children who reside in states that allow for consideration of their preference may have a voice in determining what is best for them, including preference for living arrangements.
As children age, the agreed-upon custody arrangements can, and in many instances do, become a hindrance or disruption to the child. In these circumstances, revision of the arrangement may be needed, but it is more likely that the interpretation of the agreement will be relaxed to accommodate the child's preference. Parental preference for child custody arrangements is only considered in situations in which parents agree to a parenting plan. Even here, courts have discretion and can override parental preference to maintain a child's best interests. Today, the needs of the child outweigh the wants of the parents. This represents a substantial deviation from the way that child placement determinations have historically occurred.

Anthony J. Ferraro
Florida State University

See Also: Child Advocate; Child Custody; Child Safety; Children's Rights Movement; Custody and Guardianship; Divorce and Separation; Parenting Plans; Shared Custody; Social History of American Families: 1790 to 1850; Social History of American Families: 1961 to 1980.

Further Readings
Buehler, C. and J. M. Gerard. "Divorce Laws in the United States: A Focus on Child Custody." Family Relations, v.44 (1995).
Kohm, L. M. "Tracing the Foundations of the Best Interests of the Child Standard in American Jurisprudence." Journal of Law & Family Studies, v.10 (2008).
Maccoby, E. E. and R. H. Mnookin. Dividing the Child: Social and Legal Dilemmas of Custody. Cambridge, MA: Harvard University Press, 1992.
Shuman, D. and A. Berk. "Judicial Impact: The Best Interests of the Child." In Parenting Plan Evaluations: Applied Research for the Family Courts, K. Kuehnle and L. Drozd, eds. Oxford: Oxford University Press, 2012.

Bettelheim, Bruno

Born on August 28, 1903, Bruno Bettelheim was an influential child psychologist. A fluid and eloquent writer, Bettelheim produced works that were popular with the general public. Especially interested in emotionally disturbed children, Freudian analysis, and fairy tales, Bettelheim greatly affected how American families dealt with children. Despite his influence and academic reputation, Bettelheim was involved in several controversies during his lifetime, involving topics ranging from anti-Semitism to autism. After his death, former patients indicated inconsistencies between his writing and his treatment of those in his care. Controversy regarding Bettelheim's academic credentials also cast a new light on his work. Despite this, Bettelheim remains a seminal influence in work with the emotionally disturbed.

Background
Bettelheim was born to a secular Jewish family in Vienna, Austria, where his family owned a sawmill. Although he entered the University of Vienna at age 18, he soon left so that he could take over management of the family business after his father's death. Beginning in 1930, Bettelheim and his first wife, Gina Alstadt, cared for a child named Patsy, an American who was later described as autistic. After Germany annexed Austria in 1938, Bettelheim was arrested by the Nazis because of his Jewish background, and was imprisoned in concentration camps at Dachau and Buchenwald. Permitted to immigrate to the United States in 1939, Bettelheim was hired by Ralph Tyler to serve as a research associate at the Progressive Education Association, located at the University of Chicago. Serving as an associate professor of psychology at Rockford College in Rockford, Illinois, from 1942 to 1944, Bettelheim published an article examining the behavior of prisoners in German concentration camps, one of the earliest examinations of this issue. Bettelheim returned to the University of Chicago in 1944, receiving a dual appointment as an assistant professor of psychology and director of the Sonia Shankman Orthogenic School, a laboratory school for children with emotional disturbances.

Influence
Bettelheim made significant changes at the Orthogenic School upon taking over as its principal. An advocate of milieu therapy, which encouraged children to form strong attachments with adults within a structured and loving environment, Bettelheim set up an atmosphere where this could take place. Students were encouraged to share their feelings and thoughts with the staff in an effort to encourage personal growth.
Ideas that were perceived as "good" were labeled "orthogenic," and those that were perceived as "bad" were labeled "unorthogenic." As one of the very few residential psychiatric facilities available to children, the Orthogenic School attracted national attention for its work. Bettelheim published broadly, and asserted a high rate of success in his treatment of emotionally disturbed children for whom other types of therapy had not helped. Bettelheim wrote of the joys and sorrows of childhood in such a way that he became extremely popular, and he frequently appeared on radio and television programs to espouse his views on childrearing.

Using Freudian methods of analysis, Bettelheim also began to analyze traditional fairy tales to discern their emotional and symbolic importance for children. Tales such as those transcribed by the Brothers Grimm or modernized by Charles Perrault were often considered too scary for children in their original form. Bettelheim, however, suggested that the themes of abandonment, death, injury, loss, and magic permitted children to wrestle with their fears and emotions in a healthy, abstract way. Allowing children to read, analyze, and interact with fairy tales at their developmental level would, Bettelheim suggested, permit the children to establish a greater sense of significance and determination in their lives. Bettelheim published these ideas in his 1976 book, The Uses of Enchantment, a bestseller that was awarded the National Book Award. Bettelheim's endorsement of fairy tales had a significant impact on their use, with many educators calling for their increased use in the classroom.

Bettelheim retired from the University of Chicago in 1973, although he continued to write and publish. In the 1980s, Bettelheim became a widower when his second wife died, and he then suffered a stroke that left him impaired. On March 13, 1990, Bettelheim committed suicide. After his death, inconsistencies in his biography came to light, as did allegations that his methods with emotionally disturbed children did not always align with his theoretical works.
Instead of being an understanding and kindly authority figure, Bettelheim was described by some of the children who had resided at the Orthogenic School as an angry, vengeful, and unpredictable disciplinarian who often threatened children with punishment if they did not behave. Additionally, he blamed parents, especially mothers, for their children's autism and emotional disorders. Despite this, Bettelheim played a significant role in shaping how American families thought about childrearing in the latter half of the 20th century.

Stephen T. Schroth
Knox College

See Also: Adler, Alfred; Attachment Theories; Child Abuse; Family Development Theory; Freud, Sigmund; Spock, Benjamin.

Further Readings
Bettelheim, Bruno. A Good Enough Parent: A Book on Child-Rearing. New York: Alfred A. Knopf, 1987.
Bettelheim, Bruno. The Uses of Enchantment: The Meaning and Importance of Fairy Tales. New York: Knopf, 1976.
Pollack, Richard. The Creation of Dr. B.: A Biography of Bruno Bettelheim. New York: Simon & Schuster, 1997.

Birth Control Pills

Birth control pills are prescription oral medications primarily taken to prevent pregnancy. Because there is not a birth control pill for males, discussions of the birth control pill focus on the female consumer. All legal birth control pills are regulated by the U.S. Food and Drug Administration (FDA) and are only available with a prescription from a licensed medical professional. These medications are often referred to as oral contraceptives, and are colloquially known as "the pill." Birth control pills are synthetic hormones that interfere with the human female reproductive cycle. Proper use of birth control pills inhibits female fertility by preventing ovulation and thickening cervical mucus, which inhibits sperm motility.

Birth control pills are a popular method of birth control in the United States and around the world. Usage rates vary by age, ethnicity, income level, marital status, and education. Birth control pills are also prescribed for a number of noncontraceptive purposes, including acne, irregular periods, and other menstrual disorders. Strong social, political, and religious arguments have been made in support of, and against, the use of birth control pills since they hit the market in the 1960s. In the 21st century, birth control pills are one of a number of options that American women have for addressing their family planning and health concerns.

The Search for Reproductive Control
The 19th century was a period of increased knowledge about reproduction. Scientists' discovery of the female egg and the mechanics of fertilization elucidated the basics of human reproduction. Physicians and chemists developed several strategies to prevent pregnancy, ranging from condoms to diaphragms and spermicides. However, none of these techniques directly inhibited the production and release of eggs or sperm. Furthermore, these contraceptive methods required intervention at the time of sexual contact.

As contraceptives became available in the United States, a morality-based backlash ensued, led by Anthony Comstock, a U.S. postal inspector and the founder of the New York Society for the Suppression of Vice. Federal and state laws, collectively referred to as Comstock laws, were enacted beginning in 1873. These laws criminalized the distribution of materials regarding contraception on obscenity grounds, drastically curtailing the dissemination of contraception information to the women and men who sought it. Related laws passed at the state level made possession or use of contraceptives illegal.

In 1914, Margaret Sanger, an advocate for contraception for impoverished women unable to feed their children, used the term birth control in her journal The Woman Rebel and at her birth control clinic. For the next four years, Sanger fought charges brought against her for violating the Comstock laws. In 1918, a court decision provided a liberal interpretation of New York State's Comstock law, allowing physicians to prescribe contraceptives for health reasons and enabling Sanger to continue her work. As her movement grew, Sanger founded the American Birth Control League, which later became the Planned Parenthood Federation of America. Her legacy as a birth control pioneer is marred by her eugenicist beliefs, which included a presumed hierarchy of races.

Developing Birth Control Pills
In the late 1920s, the two key hormones involved in female reproductive systems—progesterone and estrogen—were isolated and identified by scientists.
While the manufacture of hormones gained sophistication and lowered costs for reproductive research, those interested in developing oral contraceptives faced meager funding sources and government obstruction. In 1953, Sanger introduced philanthropist Katharine Dexter McCormick to Gregory Pincus, a leading researcher on reproductive hormones and contraceptives.


There are two types of oral birth control: the combined oral contraceptive pill, and progestogen-only pills. The pill allowed couples to control the number of children they had and when they had them, and led to more women in the workforce.

McCormick offered to fully fund research into the contraceptive potential of orally administered hormones. Pincus collaborated with his colleague, Min Chueh Chang, whose expertise included testing animal responses to hormone administration. Pincus also partnered with John Rock, who had already tested the synthetic hormone progestin as a treatment for women's infertility. Rock hypothesized that postponing ovulation with progestin would increase fertility once treatment was discontinued. Together, they developed the first birth control pill, known as Enovid.

The 1956 human subject trials of Enovid in Río Piedras, Puerto Rico, and Los Angeles, California, were later criticized for putting women of color at risk for severe side effects, in addition to leaving them without medical care or contraception afterward. There was no inquiry into the root cause of the side effects or into the few deaths of participants in the Puerto Rico trials. These trials resulted in the administered dose being lowered to minimize side effects, and in dosing the medication with a seven-day gap to allow for withdrawal bleeding that mimics a menstrual period, making the birth control pill seem more natural. Despite the trials' focus on Enovid's contraceptive potential, it was submitted to the FDA and approved for prescription usage to treat menstrual disorders and infertility in 1957. By 1959, an estimated 500,000 women in the United States were taking Enovid to treat menstrual disorders. Enovid was submitted again to the FDA in 1960 for approval to be used as an oral contraceptive.

Legalizing Access to Birth Control Pills
Although Enovid had received FDA approval for use in the United States, Comstock laws still on the books in some states limited women's access to birth control pills. Connecticut's 1879 law criminalized providing contraceptive counseling and medical services to married people. Planned Parenthood League of Connecticut's executive director, Estelle Griswold, and Dr. C. Lee Buxton were found guilty of violating the law, and were fined. They appealed to the Supreme Court, and in 1965, the resulting Griswold v. Connecticut decision made accessing birth control legal for married women nationally. State laws continued to limit unmarried women's access to birth control until the Supreme Court ruling in Eisenstadt v. Baird in 1972, which allowed legal access to birth control for all people. This had serious legal and social implications: birth control pills became more readily available to all women, regardless of marital status or state of residency.

While laws limiting birth control were struck down in the courts, some women began organizing to protest the potential medical risks of taking birth control pills. After reading The Doctors' Case Against the Pill, by Barbara Seaman, Senator Gaylord Nelson called for congressional hearings on the issue in 1970.
After realizing that only men were testifying at the "pill" hearings, Seaman and other women staged protests and interrupted the proceedings to demand that women's experiences with birth control pills be included in the testimony. Taking a feminist stance, they argued that pharmaceutical companies and physicians treated women as guinea pigs. The Nelson Pill Hearings resulted in the FDA requiring the first patient package insert for a medication, outlining potential risks and known side effects.

Social Effects of the Birth Control Pill
Support for birth control pills has continued to bring together a complex set of interests. Fears of overpopulation and out-of-wedlock childbirth helped push advocacy for the birth control pill from its inception. Birth control pills were noted for their ease of use and for not disrupting sexual intercourse. In 1964, President Lyndon B. Johnson secured legislation that provided federal funding for birth control for poor Americans. Viewing lowered birth rates as a means of reducing poverty both continued Margaret Sanger's vision and raised concerns about modern-day eugenics among conservatives and progressives. While feminists continued to push for more information about birth control pill safety, allowing all women access to birth control pills has continued to be a widely shared value.

By giving women control over the timing of pregnancies and the size of their families, the pill was seen as a source of empowerment. Along with giving women more direct control over their reproductive capacity, access to birth control pills helped many women complete their education and enter the workforce. Despite a dip in usage of birth control pills after the Nelson Pill Hearings, usage numbers rebounded in the 1970s. By the 1980s, despite the Roman Catholic Church's continuing disapproval of the birth control pill, large numbers of Catholic women used it and had the support of many priests. As a key element in America's sexual revolution, birth control pills enabled women to experience sex without a significant fear of pregnancy, and allowed couples to choose when and how many children to have. In the words of feminists, biology was no longer destiny.

Birth Control Pills Today
Subsequent generations of birth control pills have been developed to address health concerns. The original high-dosage birth control pills were taken off the market by 1988.
The identification of risk factors related to blood clots has further improved the safety of birth control pills. In addition to warnings against smokers using birth control pills, the identification of the factor V Leiden blood-clotting disorder has helped identify another subpopulation at a higher risk for side effects from the medication.




The most common side effect of birth control pills is unexpected breakthrough bleeding. The combination low-dose birth control pill, made of estrogen and progestin, continues to be the most frequently used birth control pill. Another formulation of the birth control pill, sometimes called the “mini Pill,” contains only progestin. Many versions of birth control pills maintain the original 28-day regimen that includes a seven-day period of placebo pills that mimick the typical menstrual cycle. Newer “extended” cycle contraceptive pills appeared in the 2000s that lengthen the time inbetween placebo pills’ usage, with the goal of minimizing or eliminating placebo periods. Current battles over access to birth control pills relate to insurance coverage. Although there is a tradition of federal subsidies for birth control, which methods of birth control are covered and promoted under programs like Medicaid is a contentious issue. Laws regarding the obligation of insurance companies and employers’ obligations to cover birth control pills exist as a patchwork of state regulations. As of July 2013, 28 states require insurers to cover all FDA-approved contraceptive drugs. Other states allow insurers and employers varying degrees of choice in what contraceptive services to cover. Faithbased exemptions have drawn attention in light of the Patient Protection and Affordable Care Act of 2010 that would have required all contraceptives to be covered, with no out-of-pocket costs to patients. Exemptions for religious employers, religiously affiliated institutions like denominational universities, and for private owners who morally object to the use of birth control continue to be debated. In the mid-2000s, birth control pills were the most commonly used contraceptives in the United States, with 10.7 million women users. Female sterilization followed, with 10.3 million women having undergone the irreversible surgical procedure. 
Studies have found that non-Hispanic white women are more likely than Hispanic and non-Hispanic black women to use birth control pills over other contraceptive methods. College-educated women are more likely to use birth control pills, and less likely to use female sterilization as a contraceptive, than less-educated women. An estimated 1.5 million American women use birth control pills exclusively to treat noncontraceptive conditions such as excessive menstrual bleeding, menstrual pain, and acne. As the female birth control pill continues to be refined, research is also under way to develop a male birth control pill.

Ariella Rotramel
Connecticut College
Dianna M. Rodriguez
Rutgers University

See Also: Contraception and the Sexual Revolution; Family Planning; Feminism; Planned Parenthood.

Further Readings
D'Emilio, John and Estelle B. Freedman. Intimate Matters: A History of Sexuality in America, 3rd ed. Chicago: University of Chicago Press, 2012.
Engelman, Peter C. A History of the Birth Control Movement in America. Westport, CT: Praeger, 2011.
May, Elaine Tyler. America and the Pill: A History of Promise, Peril, and Liberation. New York: Basic Books, 2010.

Birth Order

The impact of one's birth position in a family has been studied in the United States for many years. In the 1930s, Alfred Adler became one of the first theorists to use birth order position in his work. Pairing birth order concepts with other family information, Adler assessed the lifestyle of his clients. Since then, the number of research studies on birth order positions has grown extensively. Even as the American family evolves, and despite criticism from some researchers, studies continue to support the idea of general differences in character related to birth order position: first-born child, only child, middle-born child, and youngest child. In more recent years, the rise of the blended family has introduced nuances into this theory.

The two major ways to consider birth order are ordinal position and psychological position. Ordinal position refers to siblings' literal order of birth. Psychological position refers to the way that each person interacts with others depending on her or his role in the family. Because ordinal birth order is based solely on a person's numerical position in the family, it has attracted more research attention.

Psychological position, while less researched, emphasizes an individual's interpretation of the context into which he or she was born over his or her numerical position in the line of sibling births. Despite their differences, ordinal and psychological considerations share several ideas about personality differences.

First-born children are a well-researched group. These individuals usually take on the role of leaders. They tend to be high achievers, often reaching the greatest academic and intellectual success in the family. They tend to be highly motivated and dominant. First-born children are responsible, conscientious, and demonstrate mature behavior. As a result of these traits, first-born children are overrepresented among the highly educated. Compared to later-born siblings, they conform most closely to parental values and are most influenced by authority. Although they are often viewed as competent and confident, first-borns are especially vulnerable to stress and anxiety, demonstrating fear and leaning toward pleasing others in stressful situations. For this reason, first-borns are more fearful of change than later-born children.

Only children share many similarities with first-borns, because they are first-born children. However, there are a number of noteworthy differences between them, stemming from the differences that arise when siblings share a living environment. Only children are high achievers and exhibit high intelligence, but they are usually judged as selfish and as having the most behavior problems. Like first-born children, they tend to affiliate under stress, but they seem to have the lowest overall need for affiliation of any of the birth-order positions. Only children have the greatest need for achievement and are the most likely to attend college. Some studies also suggest that only children are the most cooperative and trusting.

The term middle-born children refers to all children born between the first and last children in a family.
Middle-born children are most often thought of as experiencing feelings of not belonging within the family. Commonly, they report engaging in behaviors that allow them to gain attention that would otherwise be showered upon their older and younger siblings. As a result, middle-born children have been found to compete in areas different from those chosen by oldest children. For example, if a first-born child excels in science, the middle-born child will tend to forgo science and focus on an area that the first-born child ignores. In comparison to their siblings, middle-born children usually act out the least. Their middle-born status proves helpful in adulthood because they relate well to both older and younger people, are successful in team ventures, and are sociable.

Youngest children who have a five-year gap between themselves and their closest sibling tend to demonstrate the characteristics of an oldest child. This may be because children often begin school at age 5, and the differences between a school-age child and a newborn are great. A seven-year gap between the youngest child and the closest sibling often results in the youngest taking on the role of an only child. Youngest children are frequently viewed as rebellious and spoiled. They are the most empathic and have the highest social interest and agreeableness compared to their siblings. Youngest children tend to be artistic rather than scientific, and they are open to new experiences. Their openness has been cited as one reason that youngest children are the most adventurous, engage in more risk-taking behaviors, and are the most likely to abuse substances such as alcohol. Psychiatric illnesses are overrepresented among youngest children.

Birth Order in Blended Families

Characteristics of birth-order positions tend to remain stable as long as the family remains stable. This makes the increasing number of blended families in the United States an area of particular attention in discussions of birth order. When children in blended families are brought together, their birth order roles may change; the oldest child from a previous family may become the youngest child, or an only child may become a middle child. These changes can be disruptive to both the individual and the family. Individually, children who experience a birth-order shift must reassess their roles in the family in the context of their new siblings. This situation can become complex if the child is moving between two homes and family environments.
On the family level, interactions may be riddled with conflict as siblings renegotiate their roles. During these times, parents play a large part in setting the tone of the family environment. Making unified decisions about sibling positions in the new family, and encouraging open communication among family members, helps the family achieve a successful transition in birth-order positions.

Birth order has been shown to have implications for many aspects of individuals' lives, including friendship formation, behavior problems, and career choice. These implications, along with the wealth of information that can be gathered about birth order positions, can be useful for everyone from the professional counselor to the recently married couple creating a blended family. However, it is important to remember that birth order characteristics are not one-size-fits-all. They can be contested, negotiated, resisted, and subverted. Birth order considerations are a tool that should be viewed through the lens of such sociocultural properties as culture, ethnicity, family values, community values, and gender. Using the tool in this way provides an enriched understanding of the contextual roles that individuals play within families.

Winetta Oloo
Loma Linda University

See Also: Adler, Alfred; Nature Versus Nurture Debate; Problem Child; Sibling Rivalry; Stepchildren; Stepfamilies; Stepsiblings.

Further Readings
Adler, Alfred. "How Position in the Family Constellation Influences Life-Style." International Journal of Individual Psychology, v.3 (1937).
Eckstein, Daniel, et al. "A Review of 200 Birth-Order Studies: Lifestyle Characteristics." Journal of Individual Psychology, v.66 (2010).
Isaacson, Clifford E. The Birth Order Effect: How to Better Understand Yourself and Others. New York: Adams Media, 2002.

Birthday Parties

People in many different cultures celebrate the anniversary of a living loved one's birth and often mark the occasion with a gathering of some sort. Some birthdays also mark a special milestone or coming of age that gives the celebrant various privileges. In the United States, most birthday parties during the toddler and elementary school years feature themed decorations, a gathering of the child's friends, an exchange of gifts, a cake, and games.

History of Birthday Parties

As far back as ancient Egypt, people marked the anniversary of birth for notable people. In early Europe, people viewed birthdays with superstition and fear rather than the joyful reverence of the modern era. These cultures believed that evil spirits could cause grief and harm to an individual on the anniversary of his or her birth. To ward off such spirits, the birthday person would surround him- or herself with friends and family who would bring food and gifts. Some gatherings also included the use of noisemakers and the lighting of candles and torches to help ward off unwanted entities. In the Middle Ages, only the wealthy celebrated birthdays. This continued until the Reformation, when cakes filled with coins, rings, and thimbles were presented to the honored individual.

The modern birthday party evolved in the late 18th century, when children began to be acknowledged as individuals. Wealthy Protestants used such occasions not only to mark another year of life for the child, but also as an opportunity to teach children proper social graces. By the early 20th century, birthday parties had transcended religion and socioeconomic class.

Parents, primarily mothers, usually host children's birthday parties. Until the 1980s, most birthday parties took place at the home of the birthday child. In subsequent decades, entertainment centers began to cater to children's birthday parties, introducing a new element of commercialism into the events. Busy mothers could host a party at a restaurant, movie theater, bowling alley, or arcade, and not have to worry about supervising numerous children and providing all of the food and entertainment themselves. Today, birthday parties range from small gatherings to lavish affairs, and they constitute a billion-dollar industry.

Birthday Party Rituals

Contemporary birthday parties may include a theme based on popular entertainment.
Children may want to focus decorations and presents around a favorite movie or television character. Bakeries can customize the cake to fit the theme, and many party stores sell coordinated decorations and party favors that reflect the child's interests. Usually, the cake is decorated with a number of candles equal to the birthday person's age, although this tradition becomes less important as a person grows older. Tradition also dictates that the birthday
person makes a wish when he or she blows out the candles, with the superstitious belief that blowing out all of the candles in one breath signifies that the individual will receive his or her wish. Gift giving is another ritual involved with birthday parties. Not only does the birthday honoree receive gifts, but he or she will also often provide a treat bag for the guests, which may match the theme of the party. Games are another ritual aspect of birthday parties. Traditional games for children include Pin the Tail on the Donkey, musical chairs, and taking turns hitting a piñata with a bat until it breaks.

Milestone Birthdays

Certain birthdays receive more celebration and planning than others. A baby's 1st birthday, for example, is often important to the parents. The mother invites close friends and family to celebrate, and parents take photos of the child blowing out candles on the cake. The 13th birthday marks a child's journey into his or her teenage years. In the Jewish faith, a boy may celebrate his bar mitzvah at this time. This religious ceremony takes place in a synagogue and involves the reading of a passage from the Torah in Hebrew, signifying that the boy has become a man. A bat mitzvah is a similar ceremony for a girl, which often takes place when she turns 12 years old. The bar or bat mitzvah may culminate with a large party, to which the parents invite family friends and extended family members. The guest of honor often receives cash and expensive gifts. During their 13th year, many Protestant and Catholic youth celebrate the Rite of Confirmation, which signifies their entry into the church as adults. This ceremony may also include large family gatherings and presents.

Quinceañera ceremonies celebrate a Latin American girl's 15th birthday. In recent decades, these celebrations have become as elaborate as weddings.
Traditions vary depending on the region the girl's family comes from, but most celebrations involve a large gathering with extended family members at a rented hall, a formal dinner, live music and dancing, and the presentation of the birthday girl in a ceremony that resembles a debutante's coming out. The sweet 16 party was once a rite of passage for young American women, but boys also celebrate it now. The party may include a candle-lighting ceremony, a shoe ceremony, and a father-daughter or mother-son dance. Sixteenth birthday
parties are often large gatherings including the teenager's friends and family. In the United States, a person's 21st birthday is typically celebrated with friends, often at a bar where the birthday honoree can legally drink for the first time. Large parties decorated with an "over-the-hill" theme often mark 30th, 40th, or 50th birthdays.

Ronda L. Bowen
Independent Scholar

See Also: Bar Mitzvahs and Bat Mitzvahs; Quinceañera Ceremonies; Sweet Sixteen.

Further Readings
Otnes, Cele, Michelle Nelson, and Mary Ann McGrath. "The Children's Birthday Party: A Study of Mothers as Socialization Agents." Advances in Consumer Research, v.22 (1995).
Pleck, Elizabeth. Celebrating the Family: Ethnicity, Consumer Culture, and Family Rituals. Cambridge, MA: Harvard University Press, 2000.
Thompson, Jennifer Trainer. The Joy of Family Traditions: A Season-by-Season Companion to 400 Celebrations and Activities. Berkeley, CA: Celestial Arts, 2008.

Blogs

Blogs are public or private Web sites on which individuals share information about themselves and things that interest them. The term blog is a portmanteau of Web and log, and it can be used either as a noun ("I keep a blog about my travels") or as a verb ("I blog about my travels"). The term blogger refers to the person who keeps the blog. Blogs consist of posts that appear in reverse chronological order, so that the most recent post is at the top. A typical post may consist of a thought or experience, augmented with photographs or video, and links to other sites that the blogger feels will interest the reader and that expand upon the post's topic. Some blogs are group efforts with many authors that may focus on large topics such as politics or news. Other blogs are more personal in nature and exist for the blogger to keep in touch with friends and family. The term blogosphere refers to the interconnected Internet subculture of blogs, bloggers, and their readers.

Origins

For those who maintain blogs that focus on themselves and their families, it is not the message that is new, but the medium. Much of the information posted online was once shared via letters or phone calls or recorded in a private diary. In the early days of the Internet, this information often made the rounds on email and message boards, which greatly expanded a person's audience but still limited readers to those whom the writer knew either personally or through an online forum. With the rise of software platforms such as Blogger and WordPress, would-be bloggers could easily widen their audience without having to learn the technical details of coding Web sites. A feature of blogs that makes them more interactive than traditional Web sites is the ability for readers to post comments and have a conversation with either the blogger or other readers.

With the rise of social media such as Twitter, Tumblr, and Facebook, microblogging has become commonplace. This format allows people to frequently share simple thoughts with a large audience. A microblog post may be as simple as a photograph on Instagram or a 140-character tweet on Twitter. Many bloggers reserve formal blog posts for in-depth topics.

The rise of blogging in the lives of individuals and families is directly related to the creation and availability of a particular set of technological tools and software. First, the rise of the home computer and high-speed Internet in the late 20th century ushered in a new era of connectedness in which people became comfortable conducting business and commerce through their computers. In the early 21st century, this was augmented by the emergence of the smartphone, which came with many microblogging capabilities and allowed individuals to stay connected when they were not near their computers.
Next, people grew accustomed to using the Internet and smartphones as their primary means of keeping in touch with friends and family. As a result, many people, even those who are not bloggers, regularly visit a number of blogs of people they know, or on topics that are of interest to them. Many blogs allow families spread over long distances to keep in touch.

Implications for Family Life

Blogging and microblogging have become a key part of many families' social lives because they allow families to post text-based stories, photographs, and videos that depict the daily (and sometimes mundane) aspects of life. As blogs have become more popular, many bloggers and their family members have become concerned with maintaining security and privacy. Most bloggers keep some information private (such as where they live or their real names) in an effort to control what people know about them. Some blogs are private and require users to enter a password; this allows the blogger to control who sees the blog. Despite this option, the vast majority of blogs remain open for public access, thus shifting the private lives of individual families into the public blogosphere.

Blogs are asynchronous, meaning that interaction is not restricted by one's availability. This is in contrast to synchronous forms of interaction such as face-to-face conversations, telephone calls, and Skype, all of which require a real-time commitment on the part of the speaker and listener. Blogging accommodates not only disparate geographic locations but also differing time zones, work schedules, and general preferences for reading and posting. The blogger can post items whenever she chooses, and the reader can read whenever he desires.

Of the many positives that have emerged from the growth of blogs, perhaps the most prominent is the ability for families who live far apart from one another to maintain a degree of contact not previously possible. Also, bloggers take comfort in knowing that they are not alone; for instance, a woman whose husband is on active duty in the military may find an audience and support among other military wives with whom she has much in common. A teacher of children with autism may create a community of other such teachers through a blog that outlines his experiences in an urban school district.
One familial relationship that blogging benefits is the grandparent–grandchild relationship. Grandparents who do not live in close proximity to their grandchildren are now able to watch their grandchildren grow up online via photos and stories of their everyday activities, posted by involved parents who make the commitment to keep a blog in order to foster stronger family ties. Grandparents may also be bloggers and make a similar commitment to family involvement.

One possible negative effect of blogging and microblogging is that minors may reach out online through such formats and reveal more about themselves than parents would like. This has ramifications for real-world security, and could result in cyberbullying. Often, young individuals who have grown up with the Internet do not appreciate the fact that what they post or say online may follow them for years, and could have a negative effect on future events or relationships. Thus, the rise of blogging and microblogging presents challenges for families in helping their children navigate an environment that may seem harmless but that actually presents some very real dangers. Children may unconsciously develop an online persona that is not reflective of their actual identity or self-perception and may attract unwanted attention. Many parents feel powerless in helping their child navigate the morally ambiguous waters of the Internet during the age where the child is just developing self-awareness. Identity Theft Ninety-two percent of all children under 2 years of age already have a digital footprint. This means that photographs and information about the child, whether on a blog or a Facebook post, have already entered the digital domain. This phenomenon raises additional safety concerns; sometimes parents who wish to limit online information about their child are stymied by family, friends, and others who may post photos and other information without the parents’ permission, either inadvertently or on purpose. Such instances could lead to identity theft, which can be a problem for children and adults. There have been instances of children having their identities stolen prior to entering kindergarten, thus allowing the thieves to utilize a false identity for a long period of time before they are discovered. This makes reclaiming one’s identity more difficult. 
Before the emergence of blogging, parents may have told strangers stories about their children and pulled a child's picture out of a purse or wallet. The blog is the online equivalent, but given the immense reach of the Internet, many parents rethink how and what information they choose to share with strangers.

A Case Study in Family Blogger Advocacy: Adoption

A blog represents the interests of a group or person. Blogosphere culture privileges interconnectivity
and mutual influence and provides an excellent atmosphere for advocating for a particular cause or social concern. Family life issues that have attracted bloggers' attention include domestic and international adoption, children's rights, and raising children with physical or mental challenges. Bloggers are often good advocates for these causes because they have firsthand experience with them. Their experiences may prove helpful to others in similar situations and may connect those individuals with agencies and programs that can help them.

For example, a couple considering an international adoption may have some of their fears allayed after reading the blog of a couple who has already undergone the process. The blog will give the prospective parents an insider's view that no agency official would be able to provide. They may learn about some unforeseen hitches in the process, or about some unintended benefits. They may follow several blogs that present different stories, and from their similarities and differences they may glean what they are likely to encounter in their own journey.

Because blogs are so easy to create, one downside for people searching for specific information about adoption is that they may not be able to tell whether a blogger has a hidden agenda or is telling only part of the story. Or, a blog may be maintained by an adoption organization that puts its interests ahead of its clients' and presents a skewed view of the process. Navigating through blogs to find those that are of interest requires effort. However, Google's use of elaborate algorithms allows the most read and searched-for blogs to appear high in its search results. Using a more focused search phrase, such as "adoption advocate blogs," can help identify more specific sites that may hold more interest for the reader. The interconnectedness of these advocacy blogs allows an author to provide links that connect one blog to another.
By creating a blog, a personal blogger can contribute to this online conversation. Because the Internet provides the means through which one can advocate for a particular cause, it should come as no surprise that there are also those who advocate against causes that others avidly support. In the case of adoption, it is interesting to note that the advocates overwhelmingly promote the cause of adoption, whereas those
who blog in opposition to adoption also blog in opposition to adoption advocates.

Blogs have had a great impact upon the lives of families over the past few decades. Blogging allows families to remain in touch with extended family while also providing a forum to advocate for causes that are of central importance to the author and his or her family. As technology continues to develop, blogs will also evolve in ways that reflect the changing trends of families and social life.

Brent C. Sleasman
Gannon University

See Also: Child Advocate; Internet; Personal Computers; Personal Computers in the Home.

Further Readings
Chappell, Robert P., Jr. Child Identity Theft: What Every Parent Needs to Know. Lanham, MD: Rowman & Littlefield, 2012.
"Digital Birth: Welcome to the Online World." Business Wire. http://www.businesswire.com/news/home/20101006006722/en/Digital-Birth-OnlineWorld (Accessed July 2013).
Rosenberg, Scott. Say Everything: How Blogging Began, What It's Becoming, and Why It Matters. New York: Broadway Books, 2010.

Books, Adult Fiction

Books have been around for millennia. Before the invention of paper, animal horns, papyrus, gold, stone, and clay were used to record the history, religion, legends, and stories of different cultures. The book as it is known today began with Johannes Gutenberg's 15th-century invention of movable type. The first book that he printed was the Bible, and for many years, primarily Bibles and other devotional materials were printed, available only to scholars and the wealthy, who were usually the only literate members of society. During the 18th century, however, a series of technological and social changes made it possible to print books faster and more cheaply than in the past. As more people became literate and the cost of printing declined, numerous new forms of the
printed word appeared in the marketplace. Newspapers kept people informed of events around the world and in their hometowns, and magazines presented fiction and nonfiction stories. The 18th century also saw the rise of the novel, which popularized long-form fiction for a mass audience. Publishers such as the American News Company pirated, printed, and distributed novels by European writers at a time when international copyright laws were virtually nonexistent. Adventure novels by Daniel Defoe and Jonathan Swift became popular in the colonies before an American literary tradition was established.

The popularity of novels increased after the development of the lending library, which allowed citizens to borrow books of their choosing and return them after they finished reading them. In the United States, industrialist and philanthropist Andrew Carnegie established 1,689 lending libraries throughout the country between 1883 and 1929, thereby fostering the dissemination of reading materials for both study and pleasure, especially in areas that had no public libraries or universities to fulfill this role. By the mid-19th century, many women were encouraged to learn how to read, and they often joined women's societies that promoted reading for pleasure and personal edification. Growing educational standards and requirements also created a more literate population interested in books.

Books were often treasured items passed down to the next generation. In the early 20th century, however, a new trend was the mass marketing of pulp novels, which were printed on inexpensive, flimsy paper. These novels often cost just a dime—hence the term dime novel—and presented adventure stories or love stories that fulfilled the public's desire for entertainment, even if they did not exhibit the literary ambition of established and well-known authors.
Simon and Schuster's Pocket Books, launched in the 1930s, introduced the world to the paperback novel, which proved popular due to its price and portability. James Hilton's Lost Horizon, a Pocket Books publication about an adventurer who discovers Shangri-La in the mountains of Tibet, was the country's first paperback bestseller. By the early 20th century, publishing had become an important American industry, and Publishers Weekly began posting its weekly list of bestselling novels in 1912, a practice that continues today. In
the 21st century, adult fiction is just as likely to be read in print as in electronic format, or listened to as an audiobook. The rise of the Internet, the e-reader, and the smartphone has given the modern American reader many more ways to access books.

History of American Novels

The United States became an independent nation at roughly the same time as the novel became a literary art form. As a literary tradition began to be established in the United States, the novel was considered an element of high culture, meant for the elite and the educated. At first, European authors were popular; British writers such as Jane Austen, Thomas Hardy, William Thackeray, and the Brontë sisters found a wide audience in the United States. Soon, American writers gave voice to a generation of novels that reflected the moral concerns and the rise of a new culture carved out of a sometimes unforgiving land. Early American literary masterpieces include James Fenimore Cooper's The Last of the Mohicans, Nathaniel Hawthorne's The Scarlet Letter, Herman Melville's Moby-Dick, and Harriet Beecher Stowe's Uncle Tom's Cabin. Later, writers such as Henry James and Mark Twain helped to refine a distinctly American literary tradition.

The types of novels that captured people's imaginations shifted throughout the 20th century. During the first 60 years of the century, many novels had historical themes, like Margaret Mitchell's Gone with the Wind and Erich Maria Remarque's All Quiet on the Western Front. A considerable number had religious themes, including Franz Werfel's The Song of Bernadette, Thomas Costain's The Silver Chalice, and Lloyd C. Douglas's The Robe. A third common theme transported readers to exotic foreign locations, as in Thornton Wilder's The Bridge of San Luis Rey, Pearl Buck's The Good Earth, and Mika Waltari's The Egyptian.

Representations of the family in early-20th-century novels were often very traditional, with marriage and a family valued by most Americans. L. Frank Baum's The Wizard of Oz, from the beginning of the century, demonstrates an understanding of the young's wish for adventure, but ultimately supports the strength and centrality of the American family unit.
To quote Dorothy, “There’s no place like home.” This same idea ran through many novels in the first part of the century, even

when the characters’ lives were less than exemplary. F. Scott Fitzgerald’s The Great Gatsby and Ernest Hemingway’s A Farewell to Arms both provide a window into lives that break away from the norms of American family life, but the novels also illustrate how these alternatives to the traditional family often end in tragedy. John Steinbeck’s The Grapes of Wrath communicates this same message with the almost complete destruction of the Joad family, broken apart by the Great Depression but holding its center around the strong character of Ma Joad. The Grapes of Wrath also articulates a strong social justice theme when its main character Tom Joad declares: “I’ll be all around in the dark—I’ll be everywhere. Wherever you can look—wherever there’s a fight, so hungry people can eat, I’ll be there. Wherever there’s a cop beatin’ up a guy, I’ll be there.” In the second half of the 20th century, bestsellers more typically reflected the changing social mores of the time. Contemporary settings and themes became common. John le Carré’s 1964 suspense novel The Spy Who Came in From the Cold and Jacqueline Susann’s lurid Valley of the Dolls reflected, respectively, fears of the Cold War and an emerging feminist movement that were rippling through society. Increasingly, novelists focused on issues of social justice, and they also changed how they represented the American family. The white, middle-class nuclear family still stood as the norm, but novels increasingly reflected how that ideal did not always work well, nor did it represent the broad diversity of American families. Novels like J. D. Salinger’s The Catcher in the Rye, Judith Guest’s Ordinary People, Toni Morrison’s The Bluest Eye, and Amy Tan’s The Joy Luck Club all showed a very different picture of the modern American family, whereas others like Margaret Atwood’s The Handmaid’s Tale and Cormac McCarthy’s The Road showed a disturbing future for the family.
Prior to 1991, according to Publishers Weekly’s data on paperback books, nongenre fiction was the largest category published. In 1992, nongenre fiction was still a large segment of paperback novels (19 percent), second only to romances. Within a one-year period, however, it dropped to 10 percent of published paperbacks, and in 2004, it was only 4 percent. Where did those books go? Authors



J. K. Rowling reads from Harry Potter and the Sorcerer’s Stone during the Easter Egg Roll at the White House in 2010. Rowling’s Harry Potter series was hugely popular with adults.

increasingly turned to genre fiction. From 1995 to 2004, romance, mystery, horror, and suspense novels shared the top four spots on the bestseller lists, with the romance and mystery categories usually in the number-one or number-two spots. The same trend affected adult hardcover fiction. From 1985 to 2011, every number-one bestselling hardcover novel for the year fit into a genre category—mystery (13 first-place novels), science fiction/fantasy (6), suspense/thriller (4), horror (3), and romance (2). The 13 number-one mysteries are interesting for yet another reason: All of them were written by John Grisham, one of contemporary publishing’s superstars. Other superstar authors in the late-20th and early-21st centuries include horror writer Stephen King, romance novelist Danielle Steel, adventure novelist Tom Clancy, and the queen of genre romance, Nora Roberts. These top-selling authors often publish multiple novels in a single year. As


authors, they all share a number of common characteristics: they have immediate name recognition and can sell their books on their name alone; they command huge advances; they are represented by powerful agents and control key aspects of the design, marketing, and promotion of their novels; they have huge, loyal fan followings; and they make large amounts of money for themselves and their publishers, both from their books and from the movie and television tie-ins to their books. Another trend in adult fiction is mash-ups, a combination of multiple genres in a single novel or series. Paranormal romances like MaryJanice Davidson’s Queen Betsy novels, vampire detectives like Laurell K. Hamilton’s Anita Blake, and Charlaine Harris’s Sookie Stackhouse vampire romances all fit into this category. While some mash-up novels target men, for example, Jim Butcher’s Dresden Files series, they are mostly a phenomenon of what is usually deemed “women’s fiction,” meaning novels whose audience tends to be women. After 2000, a new genre of adult fiction developed: new adult novels. New adult novels combine mash-up genres with protagonists between 18 and 25 years old, and include themes that focus on coming of age, sexuality, and entering the world after high school. The target market for these novels traditionally would have been young adults, but new adult novels are often geared toward adult women. The huge popularity of J. K. Rowling’s Harry Potter series, Stephenie Meyer’s Twilight series, Suzanne Collins’s Hunger Games trilogy, and E. L. James’s Fifty Shades of Grey series is indicative of this category of new adult fiction. James’s novels include significant explicit sexual content, another growing characteristic in some women’s novels.

Important American Adult Novels
The most important American novels are both entertaining and insightful, which helps them stand the test of time.
Cooper’s The Last of the Mohicans (1826) was one of the first novels to develop the ideals of the solitary, individualistic American frontiersman and the noble Indian. The idea of the frontier as a wilderness to be tamed augmented the concept of Manifest Destiny that structured the settling of the American West and continues as a symbol today. Stowe’s novel Uncle Tom’s Cabin was published in 1851 as a newspaper serial, but was so popular


that it was published as a novel the following year. Stowe’s portrayal of African American slaves helped perpetuate negative stereotypes, but the novel publicized the inhumanity of slavery and energized the abolitionist movement. Novelist Ernest Hemingway once said that all American literature began with Mark Twain’s Adventures of Huckleberry Finn (1884), the tale of a mischievous boy who runs away from his alcoholic father and drifts down the Mississippi River with a runaway slave named Jim. The novel’s themes of racism, ignorance, and hypocrisy function as both entertainment and social criticism. Upton Sinclair’s 1906 novel The Jungle provided a stark representation of the dangers of urban life in the midst of the industrial advancements of the early 20th century. The novel takes place in the Chicago meatpacking industry, and shines a light on the hardscrabble life of recently arrived immigrants who left their homelands with hopes of a better future in the United States. As social commentary, the book led to the enactment of the Meat Inspection Act and the Pure Food and Drug Act of 1906. Novelist John Steinbeck once said, “I want to put a tag of shame on the greedy bastards who are responsible for this [Great Depression],” which was the focus of his 1939 novel The Grapes of Wrath. The novel follows the Joads, an Oklahoma family impoverished by the Great Depression and the Dust Bowl, as they travel to California to seek a way to support their family, a situation that was playing out for hundreds of thousands of families whose lives were destroyed during those years. Harper Lee’s To Kill a Mockingbird, published in 1960, held up a mirror to society in its depiction of racism and rape in a small southern town, as seen through the eyes of a young girl.

Awards for American Novels
In the United States, many organizations bestow awards on the books regarded as noteworthy in a given year.
The two best-known of these awards are the National Book Award for Fiction and the Pulitzer Prize for Fiction. The National Book Awards were established in 1936 by the American Booksellers Association. Beginning in 1980, the award was given to two novels—one for the best hardcover novel and one for the best paperback novel. The Pulitzer Prize for Fiction has been given to a work of distinguished fiction by

an American author since 1918. Popular authors to win this award in recent years include Cormac McCarthy and Jeffrey Eugenides. The Nobel Prize in Literature is the field’s highest honor, bestowed each year by the Swedish Academy. Writers of literature from around the world are eligible for it, and although many of the Nobel laureates have been novelists, this prize for a lifetime’s body of work is not limited to fiction. American authors who have won the award include Sinclair Lewis, Eugene O’Neill, Pearl S. Buck, William Faulkner, Ernest Hemingway, John Steinbeck, Saul Bellow, Isaac Bashevis Singer, and Toni Morrison.

Laura Chilberg
Black Hills State University

See Also: Books, Adult Nonfiction; Books, Children’s; Reading to Children; Theater.

Further Readings
Collins, Jim. Bring On the Books for Everybody: How Literary Culture Became Popular Culture. Durham, NC: Duke University Press, 2010.
Korda, Michael. Making the List: A Cultural History of the American Bestseller, 1900–1999. New York: Barnes and Noble, 2001.
Sutherland, John. Bestsellers: A Very Short Introduction. New York: Oxford University Press, 2007.
Thompson, John B. Merchants of Culture: The Publishing Business in the Twenty-First Century. Cambridge, UK: Polity Press, 2010.

Books, Adult Nonfiction

In ancient times, nonfiction works consisted of records of history, business, taxation, population, and law, as well as religious works. For much of recorded history, scribes made and kept books, many of which were rare and sacred, used mainly for purposes of religion, commerce, and matters of state. Books as they are known today began with Johannes Gutenberg’s invention of movable type in the 15th century. The printing press meant that books no longer needed to be laboriously copied by hand. Type could be set,



and multiple copies of books could be printed at a time. Most of the earliest books were nonfiction; the Bible was the first book that Gutenberg printed. For generations after that, religious materials made up the majority of printed texts. Beginning in the 18th century, new forms of nonfiction printed materials gradually became available. These included newspapers, magazines, monographs, pamphlets, and broadsheets. Those who could read now had access to numerous types of information, and they could explore topics at their leisure. They could follow political events at home and abroad or take up hobbies by building on a solid base of knowledge gleaned from books. The development of lending libraries and reading societies further increased readers’ access to information. Wealthy industrialist Andrew Carnegie built over 1,600 lending libraries in the United States between 1883 and 1929, giving everyone in those communities access to vast reserves of information. As public schools expanded, so did school libraries, which provided generations of American readers their first significant experiences with the various categories of nonfiction—biographies and autobiographies, histories, how-to books—in addition to fiction. By the early 20th century, publishing had become an important American industry with its own trade publication, Publishers Weekly. Publishers Weekly began posting a weekly list of bestselling nonfiction books in 1912, a practice that has continued into the 21st century. Today, nonfiction comes in various formats beyond the traditional book. Audiobooks, e-readers, the Internet, and smartphones all relay information electronically, giving people options on how best to access it.

17th- to 19th-Century Nonfiction Books
The Bible and other religious materials were doubtless the first nonfiction works widely read in the early colonies. These texts would have been printed in Europe and transported to the New World.
The first known printing press in colonial America was set up in 1638 at the Massachusetts Colony’s new college, Harvard, by a Mrs. Glover, who had sailed from London with her husband, the Rev. Joseph Glover, five children, several skilled workers, and a printing press. This press first issued a broadside called The Freeman’s Oath. Other early books were an Almanac for 1639 and the Bay Psalm Book.


John Smith may have authored the first North American book, A True Relation of Such Occurrences and Accidents of Noate as Hath Happened in Virginia (1608). Another early nonfiction book was William Bradford’s journal History of Plymouth Plantation, 1620–47, completed in 1651. Books became more common in the 18th century. Benjamin Franklin, along with being a polymath and founding father, was also an author and printer, whose Autobiography of Benjamin Franklin is still read today. Printing played a large role in the American Revolution, with democratic and patriotic publications like Thomas Paine’s Common Sense that were both informative and popular in the emerging country. In the 19th century, travel books became popular, as wealthy Europeans traveled to far-flung places around the world—China, India, or Brazil—and wrote about the customs of people for the benefit of others who were unlikely to have such adventures. Turning that custom on its head was Frances Trollope, an English novelist whose most famous work was the 1832 book Domestic Manners of the Americans, a satiric look at the habits and customs of those in the United States, where she had spent considerable time traveling before living for a while in Cincinnati. Well-known fiction writers were also known for their travel literature. Included in this group is Mark Twain, whose The Innocents Abroad, or the New Pilgrims’ Progress (1869) and Roughing It (1872) are both travel books. Washington Irving’s The Alhambra: A Series of Tales and Sketches of the Moors and Spaniards (1832) was also a popular travel book, as was Nellie Bly’s Around the World in 72 Days (1890). At a time when travel was expensive and difficult, travel books filled a niche in people’s lives, giving them a view of the world far beyond their daily experiences. Social inequality was a key theme of the latter half of the 19th century, when abolition of slavery and women’s suffrage became important political issues.
Narrative of the Life of Frederick Douglass, an American Slave (1845) was the nonfiction equivalent of Harriet Beecher Stowe’s fictional Uncle Tom’s Cabin. Women’s rights were another divisive social issue. Many American women read Mary Wollstonecraft’s A Vindication of the Rights of Woman (1792), imported from England, which advocated for a type of equality between the sexes and the education of women. They may have also read Margaret Fuller’s


Woman in the Nineteenth Century (1845), one of the first works written by an American that espoused the idea of equality between men and women.

Bestselling American Nonfiction Books
Literacy rates rose in the 20th century, and as a result, books became more popular. The idea of the bestseller was institutionalized through Publishers Weekly’s lists of top-selling books, both fiction and nonfiction, based on sales figures from booksellers around the country. In the 1910s, many bestselling nonfiction titles dealt with trends in education or the effect of expanding industrialism and technology on society. The Montessori Method: Scientific Pedagogy as Applied to Child Education in the Children’s Houses (1912) by Maria Montessori explained a revolutionary new way to educate children holistically. The Education of Henry Adams (1919) was a critique of the newly modern world that seemed at odds with the world of the author’s youth. Hugo Münsterberg’s Psychology and Industrial Efficiency (1913) merged the developing field of psychology with the scientific study of the workplace to create a new field known as industrial psychology. Bestselling nonfiction titles in the 1920s included John Maynard Keynes’s The Economic Consequences of the Peace (1920), which readers hoped would provide insight into international politics in the aftermath of World War I. Will Durant’s The Story of Philosophy (1926) introduced philosophy to casual readers in an accessible way. Emily Post’s Etiquette (1923) helped a generation learn comportment and manners at a time when people from various socioeconomic classes and immigrant backgrounds were striving to fit in and move into the higher echelons of American society. Simon and Schuster’s creation of crossword puzzle books; Robert L.
Ripley’s original Believe It or Not (1929); and Chic Sale’s The Specialist (1929), a how-to book on outhouses that sold 1.5 million copies before it went out of print, all provide a glimpse into how Americans looked to nonfiction for entertainment. The decade of the 1930s was one of economic and social hardships. Nonfiction books during this decade focused less on current events and more on past history and present entertainments. Top-10 bestsellers for the decade included joke books, puzzle books, self-improvement books, and diet books as Americans forged ahead into what they hoped would be a much better future. The publication of

Boners by Viking Press in 1931 started a trend in publishing “bloopers” that continues today. People wanted to play games (Contract Bridge Blue Book of 1933), look forward to a long, youthful, and interesting life (Life Begins at Forty, 1933; Wake Up and Live!, 1936; How to Win Friends and Influence People, 1938), and, in the last year of the decade, investigate what seemed to be happening in Europe by reading Adolf Hitler’s Mein Kampf. The top-selling nonfiction books during the first half of the 1940s focused on World War II. William Shirer’s Berlin Diary (1941), Richard Tregaskis’s Guadalcanal Diary (1943), and Ernie Pyle’s Brave Men (1944) are prime examples of popular nonfiction books. In the second half of the decade, certain bestselling books foreshadowed the issues that would soon grip the nation. Richard Wright’s Black Boy (1945) brought attention to issues of race; Victor Kravchenko’s 1946 memoir I Chose Freedom chronicled his defection from the Soviet Union as the Cold War began; and Alfred Kinsey’s research report Sexual Behavior in the Human Male (1948) was a shocking (to some) and groundbreaking work in the nascent field of human sexuality. As the postwar baby boom began, new parents relied on Benjamin Spock’s The Common Sense Book of Baby and Child Care (1946), which was groundbreaking in that it reassured parents that they could easily provide all the natural loving care that their child would need to grow up to be a successful adult. Domestic topics ruled the nonfiction bestseller lists in the 1950s. As suburbia expanded, readers sought help in achieving the American dream. Hubbard Cobb’s Your Dream Home shared the top-10 list of 1950 with The Betty Crocker Picture Cook Book and Gayelord Hauser’s Look Younger, Live Longer. In 1955, with the nation’s deepening concerns about juvenile delinquency, Rudolf Flesch’s Why Johnny Can’t Read hit the top-10 nonfiction list.
These kinds of how-to books would substantially increase over the next decades, providing individuals with a wealth of books that they could use to self-diagnose family dysfunctions and implement whatever was the newest solution to that problem. Helen Gurley Brown’s Sex and the Single Girl (1962) helped embolden a generation of women to seek independence—financial and sexual—before marriage. Betty Friedan’s The Feminine Mystique (1963) followed in this same vein, identifying the alienation of suburban wives, isolated from fulfilling professions and opportunities for education, and deadening their misery with alcohol and tranquilizers. Seen as a catalyst for second-wave feminism, the book sparked substantial changes in women’s roles, changing family dynamics for good. Truman Capote’s journalistic In Cold Blood (1966) inaugurated the true-crime genre with the story of the author’s investigation into the murders committed by two Kansas drifters, their trial, and their execution. Politics, especially after the assassination of President Kennedy in 1963, became the subject of several bestsellers, including Theodore White’s The Making of the President 1960 (1961) and Arthur Schlesinger’s A Thousand Days: John F. Kennedy in the White House (1965). Youth culture brought about an interest in many new-age and supernatural topics. Bestsellers in this genre included Jess Stearn’s biography Edgar Cayce: The Sleeping Prophet (1967), psychic Jeane Dixon’s My Life and Prophecies (1969), and Linda Goodman’s Sun Signs (1969). A seminal work in environmentalism, Rachel Carson’s Silent Spring (1962) was also published in this decade and prompted the federal government to enact the Clean Air Act and the Clean Water Act to deal with the increasing problem of pollution. The self-help movement was a product of the 1970s, and bestsellers such as I’m O.K., You’re O.K. (1972) by Thomas Harris and The Joy of Sex by Alex Comfort helped readers find ways to strengthen their relationships. Marabel Morgan’s The Total Woman, Robert Ringer’s Looking Out for #1, and diet books such as Herman Tarnower’s The Scarsdale Diet and Nathan Pritikin’s The Pritikin Program for Diet and Exercise kicked off a trend for books to help people lose weight that intensified in subsequent years. In the 1980s, diet and exercise books continued to be popular, but they were joined at the top of the bestseller lists by books about investing, which reflected the era’s economic growth and the interests of the baby boomers. Douglas R. Casey’s Crisis Investing: Opportunities and Profits in the Coming Great Depression (1981) and Charles J. Givens’s


Wealth Without Risk (1988) are just two of the titles in the growing field of personal business books. The celebrity autobiography was a popular choice in the 1990s. Charles Kuralt, Bo Jackson, Dolly Parton, and Ronald Reagan all wrote autobiographies, while biographers told the stories of Nancy Reagan, Princess Diana, and Harry Truman. The search for spiritual enlightenment also consistently hit the list of bestselling nonfiction books. Robert Bly’s Iron John: A Book About Men (1991) tried to provide the same kind of consciousness raising for men that women found in the feminist movement. Betty J. Eadie’s Embraced by the Light (1993) focused on the idea of life after death, and Deepak Chopra published The Seven Spiritual Laws of Success (1995). At the same time, Neale Donald Walsch in Conversations with God, Book I (1997) and Billy Graham’s Just as I Am (1997) promoted Christianity, and the Dalai Lama’s The Art of Happiness (1999) promoted Buddhism. Jack Canfield’s Chicken Soup for the Soul series, first published in 1993, provided inspirational stories about real people. While the first book targeted a general audience, soon all members of the American family had a book, including moms, dads, teens, and grandparents. Political punditry ushered in the new century. From 2000’s The O’Reilly Factor by Bill O’Reilly to 2010’s Spoken From the Heart by Laura Bush, a significant number of the published books were on the topic of politics, current events, and politicians. O’Reilly, Glenn Beck, Al Franken, Bill Clinton, Hillary Clinton, George W. Bush, Sarah Palin, Ann Coulter, and Rudy Giuliani published books on contemporary America. On the more humorous side, there was Stephen Colbert with I Am America (And So Can You!) (2007), Michael Moore’s Dude! Where’s My Country? (2003), and Jon Stewart’s The Daily Show with Jon Stewart Presents Earth (The Book): A Visitor’s Guide to the Human Race (2010), which presented a humorous look at contemporary American politics. 
Rick Warren’s religion-focused The Purpose Driven Life made the top-10 nonfiction bestsellers list for three straight years.

Major Awards for American Nonfiction Books
The National Book Award for nonfiction recognizes an outstanding work of nonfiction by a U.S. citizen. Begun in 1936, it has had a number of changes over time, increasing categories and then reducing them


again. At present, the National Book Award is given for fiction, nonfiction, poetry, and young people’s literature. The Pulitzer Prizes include many different awards in the fields of journalism, letters, drama, and music. For the journalism categories, submissions must be from a U.S. newspaper or news site that publishes at least once a week. All submissions in the letters, drama, and music categories must be by American authors.

Laura Chilberg
Black Hills State University

See Also: Books, Adult Fiction; Books, Children’s; Child-Rearing Manuals; Self-Help, Culture of.

Further Readings
Korda, Michael. Making the List: A Cultural History of the American Bestseller, 1900–1999. New York: Barnes and Noble Books, 2001.
Ross, Catherine, Lynne McKechnie, and Paulette M. Rothbauer. Reading Matters: What the Research Reveals About Reading, Libraries, and Community. Westport, CT: Libraries Unlimited, 2006.
Thompson, John B. Merchants of Culture: The Publishing Business in the Twenty-First Century. Cambridge, UK: Polity Press, 2010.

Books, Children’s

Children’s books are specifically designed by adults to entertain, educate, and engage young readers. The earliest books written and published in the United States for children were made either of inexpensive, disposable paper stock or of more durable materials designed to withstand heavy use. Children’s books from England were found in colonial homes, and the American publishing industry for juvenile readers emerged around 1835, when printing and production technology made it possible to launch the industry. Early children’s books were designed to last through use by many siblings, and then to be handed down to the next generation. Initially, they contained educational content geared more toward male readers, but once the market proved profitable, books were designed to appeal to all children.

Literacy was highly valued in colonial America, and children’s books evolved to correspond with familial needs that changed over time. By the 1640s, Massachusetts had decreed by law that every child must learn to read the Bible. Before the American Revolution, various denominations hoped that cultivating readers would help their congregations grow and become more prosperous. Women were encouraged to become literate enough to teach young children to read, but not literate enough to challenge the male heads of households. Instruction and amusement were combined in alphabet rhymes. The New England Primer (ca. 1686–90) was the most widely purchased textbook during the colonial period, and contained the earliest printed religious alphabet rhyme in America. It utilized rote learning (memorizing) to teach the alphabet, numbers, and lists of words. Primers contained alphabet rhymes that were considered essential tools for teaching reading. Mothers used these inexpensive texts to teach children from ages 3 to 5 to read. After the American Revolution, the New England Primer was widely adapted by publishers and produced under the titles The New York Primer, The American Primer, and The Columbian Primer, but it gradually lost popularity as parents found more secular primers that children enjoyed. The popular British story of Little Goody Two-Shoes (1765) illustrates how parents used the phonetic arrangement of letters to teach sound and language development; the story’s protagonist, Margery Two-Shoes, embodied the virtue of industry that was so important to Americans of the early republic. John Locke, the famous philosopher of the Enlightenment era, encouraged the use of blocks with letters to teach children the alphabet, and alphabet rhymes began to appear on toy blocks that were first manufactured during the 1830s. By the late 1800s, primers and one-syllable storybooks were printed on linen-based pages.
These linen books became popular because of their vibrantly colored illustrations and durability—they could be washed and reused for the next child in a growing family. Alphabet books, which focused on pictures more than text, were adapted for instruction in history, moral education, botany, and zoology. American lexicographer Noah Webster developed the primer The American Spelling Book, also



known as the “blue-backed speller,” as a means to standardize spelling and punctuation in the United States. Webster hoped to establish a sense of nationalism; he believed that developing a national standard for the English language was necessary to achieve cultural independence from England. With westward expansion, public education was limited, so literacy education often occurred at home, in one-room schools, and in church-based programs. More consistent literacy education emerged with secular Sunday schools (also called “first day societies”), which were seasonal schools that met once a week on Sundays to provide basic instruction in reading for children and adults. Tracts written in a lively and entertaining style became the predominant genre for literacy education. They were written in easy language for children, who then took them home and taught other family members how to read. Literacy was linked to industry, and was considered to be an American virtue. The American Sunday-School Union (ASSU) was established in Philadelphia in 1817 as a coalition of local Protestant Sunday school groups. The Union’s goals included establishing a network of Sunday schools on the frontier, as well as providing established communities with libraries and materials for religious instruction. This nonsectarian organization pioneered development of a distribution network for American literature considered essential in health, history, travel, biography, and science. Through the ASSU, writers from many denominations revolutionized the reading habits and tastes of American youth. The ASSU set out to create indigenous children’s literature, following the British model of cheap repository tracts developed by Hannah More during the 1790s. The Union commissioned American writers to create stories with American subjects and settings, and American artists and engravers illustrated many of them. The Union published these books in attractive small formats that appealed to children. 
They were affordably priced, and remained influential until the 1860s, when public libraries began to provide easy access to attractive children’s literature. The American Tract Society (ATS) was established in 1825 to consolidate the publishing ventures of diverse Bible societies, gospel tract ministries, and denominations, including the New York Tract Society (founded in 1812) and the New


England Tract Society (founded in 1814). The ATS established a network of colporteurs (traveling salesmen of Christian literature) who sold and distributed literature, led services, and provided counseling in communities. McGuffey Eclectic Readers became the most popular and widely distributed schoolbooks in America, selling over 122 million copies between 1836 and 1920. Quickly eclipsing The New England Primer and Noah Webster’s blue-backed speller, the series was formatted by William Holmes McGuffey with short passages of prose or verse, illustrated with wood engravings and accompanied by questions to test the pupil’s comprehension of the words and morals embedded in the passages. The higher-level McGuffey readers contained extracts from works by famous authors that were sometimes the only means that high school or academy students had to learn about literature. Because boys were more likely to attend school, most of the content was geared toward them. Although the Eclectic Readers were criticized for being didactic and moralistic, they remain in print, and are considered by many to be an important force in shaping the collective American consciousness.

Nursery Rhymes
Nursery rhymes, known in early America as Mother Goose rhymes, are verses or chants that adults use to entertain children from birth to about the age of 5. Lullabies (originally “lull to bye-byes,” meaning to lull to sleep) are songs in which the tune is more important than the words. “Hush-a-bye, baby, on the tree top” is thought to have been the first English-language poem penned on American soil. It was a Pilgrim youth’s depiction of a common Native American practice of swaddling a baby in a birch cradle upon a branch of a tree so that the mother could watch her infant as she did other tasks.
In England, Mother Goose’s Melodies, or Sonnets for the Cradle (1791), published by John Newbery, became an influential early collection of nursery rhymes that was pirated and then reprinted in the United States by Isaiah Thomas. Songs for the Nursery, or Mother Goose’s Melodies for Children (1719) was the first collection of nursery rhymes widely circulated in the colonies. Monroe and Francis of Boston published Mother Goose’s Quarto, or Melodies Complete (ca. 1825), and later produced Mother Goose’s Melodies, the
only Pure Editions (1833), which included new engravings by American artists such as Abel Bowen, Nathaniel Dearborn, and Alexander Anderson. Publisher Thomas Fleet created an apocryphal story that his mother-in-law had collected the nursery rhymes. According to Fleet, Mistress Elizabeth Goose, the widow of Isaac Goose (Vergoose or Vertigoose), was born in 1665. At the age of 27, she married and became the stepmother to 10 children, and then bore six children of her own. The endearing story of Mother Goose's rhymes has given her a lasting place in American literary lore.

Based upon an incident of a lamb that followed a girl to school, which probably happened in rural America, the popular children's poem "Mary Had a Little Lamb" was written by Sarah Josepha Hale of Boston and first appeared in the children's periodical Juvenile Miscellany in 1830.

Adventure Stories and Chapbooks
Even after independence, the United States remained largely culturally dependent on England until the War of 1812. Americans continued to read British children's books, order British products, and emulate English styles of dress and manners. In 1794, an itinerant Episcopal parson named Mason Locke Weems became an agent for Philadelphia publisher Mathew Carey, and traveled throughout the United States peddling chapbooks, or "good books," on religion and right living. Weems observed that amid a land of plenty, many Americans had an inclination toward gluttony and bawdy entertainments. He saw that for the thousands of newly or partially literate readers, chapbooks could serve as an introduction to popular literature, indigenous "frontier" tales, and embellished biographies for readers of all ages about George Washington (1800), Francis Marion (1809), and Benjamin Franklin (1815). A few of Weems's biographic fables, illustrated with archetypal images (such as Washington and the cherry tree), entered into American mythology.
Samuel Griswold Goodrich, also known as Peter Parley, developed a new style of instructional book that was simple, attractive, and conversational. Goodrich was a Boston-based publisher whose The Tales of Peter Parley About America (1827) became a bestseller for young readers. Every year until his death, Goodrich published additional volumes on travel, animals, biblical geography, astronomy, mythology, and other subjects, selling over
7 million copies in the United States alone. Jacob Abbott was another 19th-century author of popular educational books, whose Rollo series included biographies of famous people and travelogues.

Family Stories and Girls' Stories
Harriet Beecher Stowe revolutionized children's literature with Uncle Tom's Cabin, or Life Among the Lowly (1852), which appeared in serial form in an antislavery newspaper published in Washington, D.C., in 1851 and 1852, before it was published in book form. Considered the first family novel, it was quickly condensed and adapted for children. Uncle Tom's Cabin became one of the most important American literary documents of the 19th century regarding family; it evangelized for the abolition of slavery, which at that time was the cornerstone of the southern plantation economy. Stowe structured the story to be read aloud in parts. Jane Smiley, in her 2001 introduction to Uncle Tom's Cabin, stated that it was "estimated that every [newspaper installment] was read by or to fifteen people, thereby crossing boundaries of literacy and class in a way that expensive bound books could not." Like many women authors of her time, Stowe took up her pen to fight injustices that she perceived as threats to the American family. Inspired by the characters in John Bunyan's Pilgrim's Progress (1678), Stowe created an antislavery narrative driven by chiaroscuro, the interplay of light and dark, or good and evil. Uncle Tom's Cabin spotlighted the inhumanity of slavery through literary devices that Stowe employed to prompt sympathetic imitation among American women. Historians have speculated that Stowe's mourning over her last-born child, as well as her rage over changes to the Fugitive Slave Act in the Compromise of 1850, shaped her fictional character of Little Eva as a child redeemer endowed with an intuitive spiritual sensibility.
After Abraham Lincoln announced the drafting of an Emancipation Proclamation in late September 1862, Stowe called on Mary Lincoln in New York City to request an invitation to the White House. Lincoln met Stowe and supposedly greeted her with the comment, “So you’re the little woman who made this Great War.” When news of the Emancipation Proclamation arrived on January 1, 1863, Stowe was attending a New Year’s celebration at the
Boston Music Hall. The crowd gave Stowe a standing ovation.

In children's literature, more sentimental concepts of girlhood and female adolescence emerged out of the Civil War. Northern publishers developed lucrative family markets, and American literature received an economic boost after the war. The phenomenon of girl and family stories (or domestic novels), written by female authors, reflected the development of a middle-class domestic audience that became pivotal to American literary history. These authors projected their desire for societal change onto their juvenile female characters and, subsequently, onto young readers. Early female authors of girls' stories included Elizabeth Stuart Phelps Ward, whose Gypsy Breynton books included Gypsy's Cousin Joy (1866), Gypsy's Sowing and Reaping (1866), and Gypsy's Year at the Golden Crescent (1867). The heroine, Gypsy Breynton, was a charming 12-year-old tomboy. Sophie May, the pseudonym of Rebecca Sophia Clarke, was known for her Little Prudy series, first published in 1863. Her lively and natural girl characters existed within familial environments and lightly conveyed moral lessons. Louisa May Alcott's Little Women (1868), set during the Civil War, established the standard for adolescent girlhood by tracing the struggles of the March sisters, who endured various adversities while their father was at the front. Little Women and its sequels began the literary genre of family stories that could be read aloud for pure entertainment. Susan Coolidge, the pseudonym of Sarah Chauncey Woolsey, delineated a new, more realistic girl character in What Katy Did (1872). Katy Carr, the protagonist, was the first American girl character to be disabled, suffering a spinal injury while being physically active.
At the time, girls were not encouraged to roughhouse with siblings, and Katy became an innovative character because she overcame adversity through her moral and spiritual development rather than through transparent literary devices.

Civil War and Ephemeral Books
The Civil War put a halt to the publishing of children's books because paper and supplies became scarce, especially in the South. Chapmen, itinerant street vendors who often sold small booklets along with buttons, ribbons, pins, and other small items, spread news of the day. Chapbooks, far less fragile
and more portable than broadsides, were purchased unbound and untrimmed in sheets to be assembled at home. For new and partially literate readers, chapbooks often provided the first entrée into the world of popular literature beyond single-sheet broadsides. Chapbooks, like some broadsides, were primitively printed, utilizing worn-out type, and illustrated with secondhand woodcuts. They contained romantic tales, riddles, puzzles, and jokes in a small four- to 24-page format.

Dime novels emerged after 1860, when Beadle and Company published series of cheap, sensational, paper-covered books known as 10-cent novels. Sales increased during the Civil War, and Beadle titles were distributed on both sides of the conflict. Most Beadle adventure stories were 100 pages long and described frontier life with folk heroes such as Buffalo Bill and Davy Crockett, along with stereotypical Native American characters. The series lost popularity at the end of the 1800s.

Horatio Alger, Jr., crusaded against child labor in urban areas and was a prolific author of rags-to-riches stories of poor urban youths who raised themselves up in the world by hard work, thrift, and resisting temptation. His first success was Ragged Dick; or, Street Life in New York (1867), which set the formula for the rest of the series. Mark Twain was a successful journalist who wrote classics including The Adventures of Tom Sawyer (1876) and The Adventures of Huckleberry Finn (1884). Edward Stratemeyer introduced the world to serialized juvenile fiction: the Rover Boys, the Bobbsey Twins, the Motor Girls, Tom Swift, the Honey Bunch series, Bomba the Jungle Boy, the Hardy Boys, and Nancy Drew. Stratemeyer built a publishing empire that sold millions of volumes.

New York publisher McLoughlin Brothers pirated British titles using cheap materials.
From the antebellum period to approximately the 1870s, many stories published by McLoughlin Brothers for children were derisive of ethnic groups and alcoholics, contained especially gruesome or violent incidents that captured the public interest, and included advertising jingles for products. McLoughlin Brothers became a major producer of cheap children's books, games, paper dolls and soldiers, hand-colored toy books, and alphabet books that were traded and collected. McLoughlin Brothers adapted the British children's poem "Death and Burial of Poor Cock Robin" in 1862.

This rhyme's theme gently presented death rituals to children that became significant in popular Civil War culture; animals took on the roles of humans, illustrating different aspects of a funeral. McLoughlin Brothers did not hire illustrators until American economic isolation during the Civil War forced their hand. Beginning in 1863, McLoughlin Brothers employed a variety of printing techniques, including hand stenciling, zinc etching, and later chromolithography.

The Civil War dramatically changed the tone of children's books; conflicts shifted from pragmatic instruction to sentimental plot devices using angelic children as characters. "Faith Douglas," published in The Little Pilgrim (1863), the leading northern Christian children's magazine addressing war themes, epitomized this new romanticized depiction of American childhood. Faith is blind, and her disability is allegorical, but her sunny disposition brings joy to those around her. Disabled characters in popular fiction taught children about adversity, just as disabled veterans were returning from the war and families needed to provide refuge for them.

One of popular artist Howard Pyle's (1853–1911) most famous and most recognizable illustrations, The Buccaneer Was a Picturesque Fellow, was published in 1905 in a pirate ballad titled The Fate of a Treasure Town.

Adventure Stories
Howard Pyle was known as the father of American children's book illustration. Pyle established an art school in the Brandywine Valley of Pennsylvania that launched the careers of N. C. Wyeth, Jessie Wilcox Smith, and Sarah S. Stilwell Weber. Pyle got his start illustrating St. Nicholas, an American magazine for children known for high-quality fiction that was published monthly between November 1873 and March 1940. Editor Mary Mapes Dodge was a widow with two sons when she started writing and editing to support her family; her best-known children's book was Hans Brinker. She selected some of the finest writers and illustrators to contribute serialized versions of novels to St. Nicholas, including Louisa May Alcott, Mark Twain, Robert Louis Stevenson, and Rudyard Kipling, before the works were published in book form.

American companies including Kellogg's and Faultless Starch Company of Kansas City were part of a marketing trend that produced small advertising booklets directly targeting children between 1880 and 1950. Advertising literature for children contained jokes, puzzles, and trivia that appealed to young readers. The booklets were generally 16 pages long, printed in three colors, with small formats, and were given out as promotional material at Christmastime. Consumerism had long been part of children's books, but now children became part of broader marketing strategies aimed at families.

Twentieth-Century Children's Books
Young readers were introduced to a new style of American fantasy when L. Frank Baum ignited young readers' imaginations with The Wonderful Wizard of Oz (1900). This first Oz book, published by a small press on the brink of bankruptcy in August 1900, created a phenomenon when it sold over 55,000 copies in the first three months through word-of-mouth advertising. However, librarians did not approve of many Oz books written by subsequent authors.
By the late 1960s, children's librarians questioned whether the series was worthy of taking up space on their shelves. The Emerald City of Oz (1910) caused controversy during the 1960s because it contained a chapter called "How the Wogglebug Taught Athletics," which described sugarcoated mind-expanding pills.

Picture books today are designed for reading aloud so that parent and child interact with the art (as an object lesson) and expand vocabulary. Booth Tarkington wrote Penrod (1914), the fictional adventures of "the worst boy in town," which revealed aspects of the small-town, Midwestern American psyche during the transition from horse-drawn carriages to the automobile. Enormously popular with readers, Penrod and his mongrel dog Duke create a clubhouse in an abandoned carriage house to entertain two African American brothers. H. A. Rey wrote Curious George as a first-grade primer with plenty of drama and action. Theodor Geisel, known as Dr. Seuss, was a freelance cartoonist who became a hugely popular children's writer by offering richly inventive moral tales in simple language. Maurice Sendak expanded children's fantasy with Where the Wild Things Are (1963), which has remained popular with subsequent generations of children.

After World War II, advances in offset lithography made the production of children's picture books cheaper, and American publishers increased production. Children's books allow for the investigation of new ideas, places, people, and things while practicing communication, yet in many inner-city neighborhoods, households with children do not have any books. During the civil rights movement, children's books began to reflect increasing multiculturalism, and illustrators found it possible to work in full-color media like watercolor, gouache, collage, and pastels. Publishers were compelled to eliminate stereotypical illustrations with repetitive depictions of characters based upon race.
Parents are encouraged to participate in book selection, whether books are purchased or borrowed from a library, to make sure that the books their children read are not too advanced, and, if a book is challenging, to read it together.

Contemporary Trends
Today, picture books feature more folklore reflecting different ethnicities, as well as more nonfiction. The U.S. government is perhaps the largest publisher in the world; it publishes children's books on specific themes that might not be offered by commercial publishers. The Department of
Homeland Security publishes information on disaster preparedness, and other agencies produce children's literature, usually targeting second- and third-grade students, on agency-related topics in science and technology. Recognizing that Native American children receive government-supplied processed food that has contributed to high rates of type 2 diabetes, the Department of Health and Human Services, in conjunction with the Indian Health Service, has produced the Eagle Books series by Georgia Perez, a specialist in diabetes education. These books teach young readers about traditional Native American values and the importance of eating healthy food and living an active life.

The diversity of family structures, including same-sex couples as parents, is increasingly the topic of juvenile fiction and picture books. Recognizing that gender roles are societal constructions, Lois Gould's groundbreaking children's book X: A Fabulous Child's Story (1978) tells the story of parents who promise to share child-rearing duties equally and not to impose traditional gender roles on their child. Baby X is not limited to strictly boys' or girls' activities, but is brought up to participate in all kinds of activities. The message is that it is okay and natural for children to explore nontraditional roles. Bobbie Combs's 123: A Family Counting Book (2000) appears to be a traditional counting book for children in early elementary school, but it also presents the variety of families that a child might recognize as his or her own. Families appear in different configurations (e.g., biracial, extended multigenerational, two mothers, two fathers), designed to show children affirming images of all kinds of family life. Jennifer Carr's Be Who You Are (2010) is a children's book designed for families, educators, and caregivers who may come in contact with gender-nonconforming and transgender children.
In this story, a child is born in a boy's body but feels like a girl inside; the child's parents are supportive of the child's journey of self-awareness and desire to live authentically. While the traditional classics of children's literature remain popular in print and e-book formats, these stories demonstrate an evolution in thinking about what children need in today's society. The impact of electronic media on the publishing industry is shaping how many children's books are published and how they are made available to young readers. Parents and educators continue
to examine how young readers interact with print and electronic books; some feel that inexpensive paperback books are better for children than enhanced e-books, whose distracting content may impede literacy training. However, boys are more likely to read e-books for pleasure, even as the amount of time adolescents spend reading print books has declined with the rise of social networking and smartphone use.

Meredith Eliassen
San Francisco State University

See Also: Childhood in America; Magazines, Children's; Reading to Children.

Further Readings
Adomeit, Katherine, George Alfred, Margaret MacDonald, Audrey Wood, and Grace Wood. "A Comprehensive Study of the Wizard of Oz Books by L. Frank Baum and His Successors: The Reasons for the Continuing Controversy Over Them and Their Place in a Public Library Collection for Today's Children." San Francisco: San Francisco Public Library, 1966.
Carpenter, Humphrey and Mari Prichard. Oxford Companion to Children's Literature. New York: Oxford University Press, 1984.
Kiefer, Monica. American Children Through Their Books, 1700–1835. Philadelphia: University of Pennsylvania Press, 1948.
Kismaric, Carole and Marvin Heiferman. The Mysterious Case of Nancy Drew & The Hardy Boys. New York: Fireside Books, 1998.
MacCann, Donnarae and Olga Richard. The Child's First Book: A Critical Study of Pictures and Texts. New York: H. W. Wilson Company, 1973.

Boomerang Generation
Many generations are labeled based on social issues relevant to historical events at the time of their birth. Children born after World War II are known as the baby boomers. Young adults born a decade or two before 2000 are called Generation Y, and those born around 2000 are known as the Millennials. Generation Y, who are young adults in the

2010s, are, like all generations before them, facing unique challenges. At the time of life when they should be establishing careers, moving into homes or apartments, and launching independently into the world, many are encountering unanticipated obstacles requiring a return to their parents' homes. These young adults have "boomeranged" from the relative independence of college and early adulthood back to an environment that requires new negotiations over old rules regarding curfews, financial contributions, overnight guests, and more.

Young adults who had left home to study assumed that they would have the same opportunities to become independent as their older siblings and previous generations. However, the economic environment changed following the recession of 2008 and 2009, and many entered a job market that was not particularly welcoming. Others who left home to pursue employment ended up downsized or underutilized. Still others married and had successful careers, only to become divorced and unable to maintain two independent households. These adults may return to their parents' homes with their young children. Many young adults entered the military with an expectation that the skills they learned there would make them marketable in the private sector, or that they would be able to attend college and find employment after discharge.

Many young adults in their 20s now plan to return home after college because they are savvy about housing costs, student loans, employment opportunities, and potential earnings. While this option may allow them to save money for the future, parents, having saved for retirement and looking forward to their newfound freedom from child rearing, might now find their finances in jeopardy because of the necessity of subsidizing additional family members. Social norms have changed, and individuals are adjusting.
Wars, natural disasters, divorce, unemployment, and other circumstances have required the redefinition of families and what is “normal” for various members. Historically, returning home after establishing independence was uncommon, although extended, multigenerational families were common. The boomerang generation is finding its way through a troubled economy with limited opportunities and shifting social norms.



Who Invented the Name, and What Does It Mean?
The boomerang generation, so named by the media, refers to the 18- to 35-year-old offspring of the baby boomer generation who find themselves unable to do what generations before them did: find employment and launch independent lives. Instead, they return to their parents' homes, hoping that economic circumstances will change. Like boomerangs, these children return to their launch pad after an initial period of independence comes to an end. Some sociologists have termed this phenomenon the "accordion family" to reflect the way a family unit expands and contracts. In Japan, this group is referred to as "parasite singles."

Demographics
Recent data indicate that large numbers of adult children live with one or both of their parents. U.S. Census data from 2008 indicated that 22 percent of men and 18 percent of women ages 25 to 34 lived in a multigenerational family household. While many perceive the boomerang generation as proof of the failure of individuals or of society to provide opportunities for independence, a 2011 Pew Research Center study of 2,078 individuals ages 18 to 34 found some surprising outcomes. According to the Pew study, financial concerns stemming from a challenging job market led many to accept jobs that they did not want, while a third of those questioned returned to school, and an equivalent group postponed marriage and parenthood. Level of education influences the likelihood of living with parents: college graduates (10 percent) are less likely than non-graduates (22 percent) to live at home. Similarly, young adults' living circumstances are correlated with their employment status. Nearly half (48 percent) of those who were not employed lived with their parents or had moved in temporarily for financial reasons, versus 35 percent of those who worked full or part time. A majority (78 percent) are content with the resulting living arrangements, and 77 percent are optimistic about the future.
Nearly half (48 percent) pay rent, and 89 percent contribute to household expenses. Approximately half of those who live with their parents find that relationships are the same as before they moved back, while one-quarter feel they are worse, and the remainder describe them as better. Parents are split; an
equal number report being satisfied having their children move home and being dissatisfied with the development.

Extended Dependence on Parents
A survey of 2,000 young adults found that most felt it was reasonable to stay in their parents' home for up to five years after moving back, while those age 55 and older felt that three years was long enough. Sixty-three percent stated that they knew someone who had moved home due to financial challenges. A study by Kim Parker highlighted the financial connection between parents and their 25- to 34-year-old children. The study found that in 75 percent of cases, the parents' finances affected the children positively, whereas 25 percent said that the effect was negative. College enrollment determined whether or not regular financial help was provided: enrolled students were more likely to receive parental financial support (31 percent) than those not enrolled (12 percent).

Disruption of the Family Life Cycle
While most of the media focus on the boomerang generation has concentrated on economic and employment circumstances, there are also concerns about individual development and family functioning. Returning to the parental home may affect maturation, independence, and the ability to establish a relationship with a significant other. Young adults may delay starting families, which may affect both family size and childbearing options. Parenting styles may be affected by changes in employment, education, or marital status. Family life may be disrupted as children are forced to relocate due to shifts in their parents' employment or financial status. High rates of single parenthood, divorce, and remarriage further complicate the situation. All of these concerns arise in a shifting economy in which society has largely moved away from the traditional extended family model.
In the early to mid-20th century, children were expected to grow up, leave the family nest, and become independent adults who would then establish families and help their aging parents. With this becoming less feasible for so many, professionals suggest that parents and the returning child draw up an agreement laying out the responsibilities of both parties within the household.

Conclusion
Children who have left home but need to return due to financial and other challenges are called the boomerang generation. Parents who anticipated becoming empty nesters, downsizing, beginning new adventures, or retiring have had to delay their plans as children who had taken steps to establish independence find themselves in a precarious position. Assistance from parents and a return to home base may offer them the solid footing that they require to establish permanent independence at a later time. This phenomenon underscores the fact that each generation must negotiate its coming-of-age experiences in relation to the realities of the day.

Adele Weiner
Metropolitan College of New York
Kim Lorber
Ramapo College of New Jersey

See Also: Baby Boom Generation; Emerging Adulthood; Empty Nest Syndrome; Midlife Crisis; Multigenerational Households.

Further Readings
Cohn, D'Vera. "Multi-Generational Living During Hard Times." Report of the Pew Research Institute, 2011. http://www.pewsocialtrends.org/2011/10/03/the-economics-of-multi-generational-living-during-hard-times (Accessed September 2013).
Ludwig, Robi. "How Long Is Too Long for Boomerang Kids to Live With Their Parents?" Huffington Post. http://www.huffingtonpost.com/robi-ludwig/how-long-is-too-long_b_3748365.html (Accessed August 2013).
Parker, Kim. "The Boomerang Generation: Feeling OK About Living With Mom and Dad." Pew Research Institute (2012). http://www.pewsocialtrends.org/files/2012/03/PewSocialTrends-2012-BoomerangGeneration.pdf (Accessed July 2013).

Bowen, Murray
Murray Bowen made significant contributions to the fields of psychiatry and marriage and family therapy with the development of what came to be known as systems theory, or simply Bowen theory. Rooted in scientific study, systems theory examines anxiety in families caused by too much closeness or too much distance in relationships. This anxiety is caused both by current dilemmas and external stress and by unresolved generational issues that have been passed down.

Roots in Science
Bowen was trained as a medical doctor and became interested in psychiatry during his five years as a military physician in the early 1940s. He turned down a fellowship in surgery at the Mayo Clinic to accept a position at the Menninger Clinic in Topeka, Kansas, in 1946. There, he met Karl Menninger, who had focused his career on revising Freud's theories to apply them to an American society that had changed a great deal from the Victorian notions embodied in Freud's work. Bowen focused his work at the Menninger Clinic on studying the family relationships of schizophrenic children, particularly the unique relationships between these children and their mothers. In 1954, he moved this research to the National Institute of Mental Health (NIMH) in Maryland. There, he began to depart from the prevailing psychological thinking that mental illness was manifested in, and controlled by, the patient alone. He began to demonstrate that many of the symptoms that the patient experienced were also manifested in the family, and he found that these symptoms could be seen, to varying degrees, in more "normal" and less disturbed families.

In 1959, Bowen was offered a professorship at Georgetown University, and he taught there until his death in 1990. During his tenure, he developed his theory into a systematic approach for treating families that displayed a wide range of symptoms, and he founded a center devoted to his systems theory that continues to provide practice and in-service training for practitioners worldwide. Bowen drew heavily on the works of evolutionary scientists such as Charles Darwin to inform his strategies and theories. He believed that one day it would be possible to construct a comprehensive human theory based on scientific facts alone.

Key Concepts in Systems Theory
Differentiation, the idea of being autonomous while still maintaining family relationships, is at the heart of systems theory. Bowen believed that much anxiety is caused in the family when individual members become emotionally fused due to poor interpersonal boundaries. In unhealthy families, when one person tries to become autonomous, other members turn against him or her. In a differentiated family, individuals are able to contain their anxiety, allowing emotional issues to be dealt with.

Bowen believed that the behaviors of individuals in families were deeply influenced by birth order, meaning their age position in the family; this has become one of the most widely known and most controversial of his theories. He also believed in the concept of generational impact; that is, much of the stress that a family experiences is passed down from previous generations. For example, if a family's elders were new immigrants who constantly worried about money for survival, that worry will find its way into later generations, causing stress over money.

Bowen popularized the concept of triangles, which is based on the idea that misery loves company. This type of communication takes place when distressed or anxious family members seek others within the family to take their side, ostracizing other members in the process. Bowen developed the concept of emotional cutoff to explain how each person reacts differently to stress. Some will storm off and ignore people; others will attack another person. Both are examples of emotional cutoff, in which rational thinking and sensible approaches go out the window in an individual's attempt to avoid conflict and pain. Bowen believed that this was a temporary strategy that never allows a person to get to the heart of the underlying problem.

Further Contributions
Bowen and his followers made significant, lasting contributions to the field of family therapy. Many of today's standard practices were developed as part of systems theory.
In addition to the above-mentioned concepts, Bowen pioneered the following: • The use of genograms, or family history drawings, to illustrate themes and conflicts in families. • Therapy sessions that included individual family members or groups of family members, rather than the whole family.
• Having therapists model the concept of differentiation by questioning—without emotional engagement or emotional cutoff—the behavior, thoughts, and feelings of clients.
• Having the therapist form a new, healthy triangle with the family, one in which the therapist reacts without emotional involvement with individual members, thus allowing them to focus more on their behavior.
• Using the technique of coaching to demonstrate that healing can be achieved not by making dramatic changes in a person’s psychology but by changing how a person acts within a social network of siblings, parents, children, and extended family relationships.

Malcolm Smith
University of New Hampshire

See Also: Bowlby, John; Divorce and Religion; Family Therapy; Midlife Crisis; Psychoanalytic Theories; Systems Theory.

Further Readings
Bowen Center for the Study of the Family. http://www.thebowencenter.org (Accessed August 2013).
Bowen, Murray. Family Therapy in Clinical Practice. New York: J. Aronson, 1978.
Brown, J. “Bowen Family Systems Theory and Practice: Illustration and Critique.” Australian and New Zealand Journal of Family Therapy, v.20/2 (1999).
Kerr, M. One Family’s Story: A Primer on Bowen Theory. Washington, DC: Bowen Center for the Study of the Family, 2003.

Bowlby, John

John Bowlby was a British psychologist whose work with children led to the development of attachment theory. By studying children’s early relationships and interactions with primary caregivers, Bowlby discovered much about the effects of parenting on child development. His attachment theory left a lasting impact on the fields of family studies, sociology, education, childcare, and parenting. Although largely discredited at first, Bowlby’s attachment
theory has since become the basis for much of the field of human psychological development. A psychoanalyst and director of the Tavistock Clinic in London, Bowlby became interested in the emotional effects suffered by children who are separated from their families. In 1951, he was commissioned by the World Health Organization to report on homeless children across Europe. He produced a controversial report indicating that in order for a child to develop a healthy psyche, he or she must develop an intimate, caring, and uninterrupted relationship with his or her mother or permanent caregiver. The idea that each child must develop a secure base became the framework for attachment theory. Bowlby developed his theory over time; influenced by evolutionary biology, cognitive psychology, and theories of human development, he came to understand that interaction patterns with caregivers that occur very early in a person’s life have a lasting impact on later behaviors and emotional patterns. In 1957, Bowlby formally presented his ideas in a paper titled “The Importance of a Child’s Ties to His Mother.” The paper was greeted with critical outrage in the psychoanalytic community, which was still beholden to Sigmund Freud’s psychoanalytic theories. In 1959, Bowlby identified a three-step process that a child goes through when separated from a caregiver: protest, denial, and detachment. This framework spawned a new focus on separation and loss research.

Detachment/Denial
Bowlby’s life had been deeply influenced by parents who followed strict British traditions in childrearing, in which strong emotional relationships between children and parents were considered signs of weakness and were not encouraged. Instead, strict discipline, emotional control, and relegating child care to a hired nanny were the accepted practices among upper-middle-class families. It has been widely reported that as an infant, Bowlby saw his mother only once a day, at tea time.
At the age of 7, he was sent away to a boarding school, which he despised. He later reported that his separation from his nanny and his boarding school experience affected his life and may have been the driving force behind his later discoveries. Following boarding school, he spent a brief stint in the military and then attended Trinity College, where
he received a medical degree in 1928, focusing on developmental psychology. After graduation, he spent time volunteering in a school for maladjusted children, where he became fascinated with child psychiatry and psychology. After his initial work on attachment theory, Bowlby studied what he called “separation behavior,” or the stress that an infant or child feels when he or she is cut off from his or her primary caregiver, both emotionally and physically. This work had a significant impact on the study of grief and loss, as well as the study of child behavior. However, Bowlby’s theories and research were not widely accepted because they contradicted much of the psychoanalytic thinking of the time. For example, Anna Freud, the daughter of Sigmund Freud, claimed that an infant’s ego was not developed enough to allow for more than short and meaningless bouts of grief. Others believed that the greatest trauma that an infant suffered was separation from the mother’s breast, a physical rather than emotional separation. In spite of this criticism, Bowlby began perhaps his greatest contribution to the field of child development in 1969, with the publication of the first of what would become a trilogy of books. The first volume was Attachment, followed in 1973 by Separation: Anxiety and Anger; the final volume, Loss: Sadness and Depression, appeared in 1980. It was not until the 1980s that Bowlby’s theory and substantial writings took hold among the scientific community. A renewed interest spawned a field of research that has extended scientists’ understanding of early childhood relationships and the effects that those relationships have on a person’s development. Bowlby’s theory, although still not without critics, has been described as the dominant lens through which psychologists and family scientists view early childhood social development today. Bowlby’s work has also had a lasting impact on grief and loss theory.
Bowlby once told his son that “we suffer the same feelings of loss when a loved one dies as a child feels who’s lost his mother.” Malcolm Smith University of New Hampshire See Also: Bowen, Murray; Family Therapy; Systems Theory.

Further Readings
Bowlby, John. A Secure Base: Parent-Child Attachment and Healthy Human Development. London: Routledge, 1988.
Bowlby, R. Fifty Years of Attachment Theory. London: Karnac, 2004.
Bretherton, I. “The Origins of Attachment Theory: John Bowlby and Mary Ainsworth.” Developmental Psychology, v.28 (1992).
Waters, E., J. Crowell, and H. Waters. “Attachment Theory and Research at Stonybrook.” http://www.psychology.sunysb.edu/attachment (Accessed August 2013).

Boy Scouts

Founded by the retired British military leader Robert Baden-Powell in 1907, the Boy Scouts was subsequently incorporated in the United States in 1910 and experienced rapid growth in the decade that followed, quickly outpacing the Boys’ Brigades and the YMCA as membership organizations for boys. Still active in the 21st century, its current mission “is to prepare young people to make ethical and moral choices over their lifetimes by instilling in them the values of the Scout Oath and Scout Law,” with a vision to “prepare every eligible youth in America to become a responsible, participating citizen and leader who is guided by the Scout Oath and Scout Law.” The Boy Scouts seek to accomplish these developmental goals by providing camaraderie, training in citizenship, outdoor activities, and other informal educational opportunities. Of the over 100,000 scouting units nationwide, approximately 70 percent are chartered to faith-based organizations, 22 percent to civic organizations, and the remainder to educational organizations such as parent-teacher associations and private schools. According to the Boy Scouts’ 2012 annual report, nearly 2.7 million boys are active scouts. The largest percentage of the faith-based Boy Scout groups belong to the Church of Jesus Christ of Latter-day Saints (430,557 youth membership), followed by United Methodist churches (363,876), Roman Catholic churches (273,648), Presbyterian churches (125,523), Lutheran churches (116,417), and Baptist churches (108,353). Embedded in the

Robert Stephenson Smyth Baden-Powell, also known as Lord Baden-Powell, was a lieutenant-general in the British Army, writer, founder of the Scout Movement and first chief scout of the Boy Scouts Association.

Scout Oath is duty to God and country, which explains the organization’s close association with faith-based organizations.

From Boys to Men
Building character was at the heart of the formation of many youth movements at the end of the 19th and the beginning of the 20th centuries. For the Boy Scouts, this included the perceived need of primarily middle-class, Protestant men to socialize teenage boys into their value system in the midst of a rapidly changing and more urbanized society. Critics charged that the uniform, badges, rituals, and ceremonies were akin to a nationalistic promotion of the military. The British founder, Robert Baden-Powell, was a military man who discovered that his manual for training soldiers, Aids to Scouting, was being used by leaders in youth movements to train boys. William Smith, founder of the Boys’ Brigade, encouraged Baden-Powell to write something specifically for youth; thus, the first precursor
to the Boy Scout Handbook, Scouting for Boys, appeared in 1908. While it is true that Baden-Powell wanted to produce patriotic young men prepared to defend their country, other aspects of scouting moved in different directions. He did not believe that military-type drills fit the needs of adolescent boys and instead incorporated the woodcraft ideas of Ernest Thompson Seton into the Boy Scouts. Seton, having settled in America, considered woodcraft to be first and foremost a recreation in the service of turning boys, increasingly separated from the farm and frontier, into men. It was not intended to be vocational training or preparation for the military. Incorporating his understanding of American Indians, Seton considered the wilderness an ideal setting for such youth recreation and founded the Woodcraft League to carry out his vision. A member of the founding board of directors and the first chief scout of the Boy Scouts of America, Seton wrote the first handbook for the Boy Scouts of America, in which Baden-Powell’s ideas and his own Woodcraft League ideas were thoroughly merged.

Key Developments and Lived Experience
Key developments in the subsequent growth of the Boy Scouts of America include the first issue of Boys’ Life in 1911, a magazine devoted to fiction and stories of interest to boys that reinforce themes and values learned during troop meetings and wilderness excursions; the introduction of the Order of the Arrow, laden with Indian lore and initiation rites, in 1915; a federal charter granted by Congress in 1916; the introduction of the Cub Scout program for younger boys in 1930; the addition of the Webelos program for older Cub Scouts in 1954; and the introduction of the Exploring (1959) and Venturing (1998) programs, the latter open to boys and girls, designed to address the perennial problem of holding the interest of older adolescents who are making the transition to adulthood.

The experience of being a Boy Scout consists of weekly troop meetings that are broken down into smaller group activities, combined with periodic weekend adventures and summer camping. Opportunities for progress toward higher standing within the ranks and earning badges based on the acquisition and demonstration of skills are offered during these times. Within this context, friendships are formed, leaders are made,
teamwork is encouraged, character is developed, and fun is experienced. As in any organization, peer interaction in a competitive environment can also prove dysfunctional, exclusive, and cruel. It is the job of the leaders to keep scouts on a healthy trajectory.

Controversy and Future Challenges
Lawsuits, primarily beginning in the 1990s, challenged the exclusion of girls, homosexuals, and atheists from the Boy Scouts. The national organization defended itself and its right as a private organization to carry out its mission as it saw fit. The public outcry of key constituents (including some in the religious and fundraising communities) prompted self-examination over exclusion based on sexual orientation. A resolution to remove this exclusion was approved and announced in May 2013, effective January 1, 2014. The policy excluding homosexuals from adult leadership was not under consideration and is still in force. Key churches have pledged continued support, the Church of Jesus Christ of Latter-day Saints, United Methodists, and Roman Catholics among them. Others are supporting alternative organizations, including Southern Baptists and the Assemblies of God. The Boy Scouts was founded for the purpose of developing boys into men in a changing world. Identifying what is core and what is peripheral to its mission in an increasingly pluralistic world, and deciding how to respond to criticisms, will continue to be a challenge to the organization in the coming years.

Douglas Milford
University of Illinois at Chicago

See Also: Camp Fire Girls; Girl Scouts; Soccer Moms; YMCA; YWCA.

Further Readings
Bailey, Victor. “Scouting for Empire.” History Today, v.32/7 (1982).
Blum, Debra E. “Donors Await Boy Scouts Decision on Gay Ban.” Chronicle of Philanthropy, v.25/12 (2013).
Boy Scouts of America. “Boy Scouts of America Statement” (May 23, 2013). http://www.scouting.org/sitecore/content/MembershipStandards/Resolution/results.aspx (Accessed March 2014).

Boy Scouts of America. “2012 Annual Report.” http://www.scouting.org/filestore/AnnualReport/2012/324-168_2012AnnualReport.pdf (Accessed March 2014).
Dart, John. “Key Churches Support Scouts’ Policy on Gays.” Christian Century (June 26, 2013).
Eagar, W. McGillycuddy. Making Men: The History of Boys’ Clubs and Related Movements in Great Britain. London: University of London Press, 1953.
Jeal, Tim. The Boy-Man: The Life of Lord Baden-Powell. New York: William Morrow, 1990.
Macleod, David. Building Character in the American Boy: The Boy Scouts, YMCA, and Their Forerunners, 1870–1920. Madison: University of Wisconsin Press, 1983.
Salzman, Allen. “The Boy Scouts Under Siege.” American Scholar, v.61/4 (1992).
Seton, Ernest Thompson. Boy Scouts of America: A Handbook of Woodcraft, Scouting, and Life-Craft: With Which Is Incorporated by Arrangement General Sir Robert Baden-Powell’s Scouting for Boys. New York: Doubleday, Page & Company, 1910.

Brazelton, T. Berry

Thomas Berry Brazelton is a pediatrician and author, best known in professional circles for his pioneering work on the behavioral assessment of neonates and the study of mother-child interaction. Brazelton’s approach is characterized by close attention to newborn behavior as an evolved system of communication and an indicator of innate personality. Brazelton advocates early intervention for at-risk infants and has advised American political leaders on what the government can do to promote healthy families. In addition to his scholarly work, he has also written numerous books that are popular with general audiences.

Academic Career
Brazelton was born in Waco, Texas, in 1918, and educated at Princeton and Columbia University. He combined his training in pediatrics and child psychiatry in a private practice with clinical research, largely at Harvard Medical School. Brazelton’s earliest research concerned the development of reciprocal communication between mother and child in the development of feeding habits. Publishing
with Kenneth Kaye and others, Brazelton explored the push-pull nature of mother–infant interaction in situations such as establishing feeding routines, sleep schedules, and other cycles of communication and adaptation. Unlike practitioners who considered the newborn either a “blank slate” without social abilities or a creature prematurely produced (in relation to other primates) from the womb, Brazelton believed that infant behaviors, activity states, and emotional expressions are forms of communication that the child uses to participate in its social environment. Brazelton is best known in professional circles for his 1973 development, with a team of colleagues, of a basic examination of newborn abilities, alternatively known as the Neonatal Behavioral Assessment Scale (NBAS) or the Brazelton Neonatal Assessment Scale (BNAS). Unlike the Apgar test, which rates physiologic function at birth, the NBAS gauges autonomic, motor, state, and social-interactive responses at any point during the first eight weeks of life. Although it is not without its detractors, the NBAS serves as a standard for research and a tool for early intervention specialists. Doctors and other clinicians can become certified in Brazelton’s Newborn Behavioral Observations (NBO) system, which includes the NBAS and other programs of evaluation, through the Brazelton Institute at Boston Children’s Hospital. The NBO system epitomizes Brazelton’s belief that infants operate as social human beings from birth: they control their responses to their environment, communicate, and seek to shape their environment through communication. In eliciting 28 behaviors and 18 reflexes, trained practitioners identify an infant’s strengths, vulnerabilities, and unique temperament, and can use these to make recommendations to parents for the individual infant’s care.
Public and Political Career
Brazelton became a household name in the 1970s and 1980s with his books, syndicated newspaper column “Families Today,” cable television series What Every Baby Knows, and guest appearances on daytime television. What Every Baby Knows ran from 1983 to 1995 on Lifetime and earned him a Daytime Emmy Award for Outstanding Service Show Host in 1994. In his book Mothers and Infants (1969), Brazelton helped readers determine whether their
babies demonstrated “quiet,” “active,” or “average” temperaments. This distinction formed the basis for various calming, feeding, and sleep-training techniques, and sent the message that, though unique, the infant was in no way abnormal in his or her temperament. In his popular Touchpoints series, Brazelton described those moments in a child’s development when regressions in learned skills take place, usually to a caregiver’s frustration and dismay. Rather than seeing these touchpoints as negative, Brazelton urged parents and caregivers to recognize that the child is about to experience growth and that touchpoints are moments of preparation for what is to come. More generally, Brazelton broached specific child-rearing issues such as discipline, sleep, feeding, and sibling rivalry in a series of books, cowritten with Joshua D. Sparrow, called The Brazelton Way. As with the NBAS, Brazelton’s systems all worked to reveal to the parent the infant’s earliest capabilities and preferences, and in so doing, inspire the parents’ confidence. Whereas earlier child-rearing experts believed themselves to be the only legitimate sources of child-rearing knowledge, Brazelton echoed Benjamin Spock in encouraging the parent to trust his or her instincts. Brazelton has consistently argued that local, state, and national governments should invest more resources to help struggling families, especially those in which both parents work. Brazelton was president of the Society for Research in Child Development from 1987 to 1989, president of the National Center for Clinical Infant Programs from 1988 to 1991, and he cofounded the outreach group Parent Action. He lobbied with the Alliance for Better Child Care and was appointed by Congress to the National Commission on Children in 1989.
In 1993, he testified before Congress in support of the Family and Medical Leave Act, which grants eligible employees, including new mothers, up to 12 weeks of unpaid leave; he also advocated for Public Law 99-457, which extends the Individuals with Disabilities Education Act to include young children. In February 2013, T. Berry Brazelton was awarded the Presidential Citizens Medal, the nation’s second-highest civilian honor.

Cornelia C. Lambert
University of Oklahoma

See Also: Child-Rearing Experts; Child-Rearing Practices; Nature Versus Nurture; Parent Education.

Further Readings
Brazelton, T. Berry. Mothers and Infants: Individual Differences in Development. New York: Delacorte, 1969.
Brazelton, T. Berry and Barbara Koslowski. “The Origins of Reciprocity: The Early Mother-Infant Interaction.” In The Effect of the Infant on Its Caregiver, Michael Lewis and Leonard A. Rosenblum, eds. New York: Wiley, 1974.
Brazelton, T. Berry, and J. K. Nugent. The Neonatal Behavioral Assessment Scale. Cambridge, MA: Mac Keith Press, 1995.
Lester, Barry M. and Joshua D. Sparrow. Nurturing Children and Families: Building on the Legacy of T. Berry Brazelton. Oxford: Blackwell-Wiley, 2010.

Breadwinner-Homemaker Families

The family pattern known as the breadwinner-homemaker system, which is characterized by men as the sole family wage earners and women as full-time homemakers, emerged in the mid-19th century as the United States became more industrialized. Before that time, most goods and services consumed by a family were produced in the home, on the farm, or in a nearby workshop. Under these conditions, the idea of a single wage earner in a family was meaningless because every member of the family contributed to the production of goods and services. As the U.S. economy expanded through the 19th and into the mid-20th century, the economic nature of family life and the relationship between spouses changed. By the 21st century, however, profound economic changes once again revolutionized marriage and family life. Married men were much less likely to be the only wage earners in their families. The breadwinner-homemaker system had all but disappeared.

Industrialization, Economics, and Marriage
Before the American Revolution, the colonies’ economy was based on agriculture. Men were
primarily responsible for work in the barn and the fields, while women were primarily responsible for family gardens, cooking, cleaning, and caring for children. Women were also the primary manufacturers of goods consumed in daily life. They sewed and mended clothing, made soap and candles, and cooked all food from scratch. A small number of women worked outside the home as innkeepers, shopkeepers, craftspeople, printers, teachers, and landholders. Some worked as nurses and midwives and produced medicines, salves, and ointments. In the southern colonies, enslaved African American men and women performed all labor on farms and in houses that belonged to a plantation’s owners. The United States slowly began to industrialize after the American Revolution. The first factory, which produced textiles, was built in New England in 1790. Young unmarried women were drawn into these early factories to produce many of the things that they had previously made in their homes because of their familiarity with the equipment and techniques. Factories depended on these young female workers because men were still needed on family farms. The U.S. economy began to expand quickly in the early to mid-1800s, resulting in a labor shortage. Great waves of immigrants came to the United States to pursue employment in factories. Native-born white women were pushed out of the labor force as immigrant men and women became preferred workers. Between 1860 and 1920, Americans’ views about women and motherhood profoundly changed in ways that justified and encouraged women’s departure from the labor force. A new cultural ideal emerged, stressing women’s moral duty and responsibility to remain in the home and care for their families. This ideal, which is referred to as the “cult of true womanhood” or the “cult of domesticity,” praised and rewarded women for taking care of their children, homes, and husbands.
In return for their domestic efforts, their husbands were to provide financially for them by working in business and industry. An implicit feature of the cult of true womanhood was that the home and children served as a barometer of the husband’s economic success. In addition, the breadwinner-homemaker system put wives in a secondary, dependent, and subordinate role in relation to their husbands. Women had no economic resources and were dependent on their husbands’ willingness to share their earnings. Husbands who
provided for their wife and children gained power in the relationship. This cultural ideal, though held by women and men in all social classes, was attainable only by the middle and upper-middle classes. Immigrant women, poor women, and African American women remained in the labor force. By the beginning of the 20th century, more than 5 million women and girls over age 10 were in the labor force. Most of them worked in domestic labor and personal service industries. A substantial proportion of women worked on farms, and a small group worked as teachers in elementary and secondary schools. Women were also employed in the trade and transportation industries as sales clerks, telegraph and telephone operators, stenographers, secretaries, accountants, and bookkeepers. World War I accelerated women’s entrance into new fields of employment because so many male workers were serving in the armed forces. After the war, however, many women left the labor force because many state and local governments prohibited wives from taking jobs that could be filled by returning veterans. When the Great Depression hit in 1929, the gains that women had made in the labor force during World War I were lost as the few jobs available were given to men. It was not until the United States entered World War II in 1941 that women reentered the workforce in significant numbers and in dramatically new ways. First, a greater number of women became employed than ever before. Between 1940 and 1946, 5.5 million women (nearly two-fifths of the female population) entered the labor force. Second, married women—even those with young children—entered male-dominated occupations. They became welders and shipbuilders, giving rise to the popular image of Rosie the Riveter, who became the symbol of employed women during the war. They worked as switch operators, precision tool makers, crane operators, lumberjacks, drill press operators, and stevedores.
Finally, African American women found new employment opportunities beyond domestic work, which had been their typical source of employment in earlier decades. For all women, however, performing these traditionally male jobs did not increase their rates of pay. After World War II ended in 1945, women were pressured to return to their so-called traditional roles as housewives and
mothers as returning men went back to work and resumed their duties as the family breadwinner. The breadwinner-homemaker system began to break down as women entered or re-entered the labor force, but the cultural ideology supporting the system continued. For instance, in the 1950s, Americans in every social class believed in the ideal of women working in the home and men earning the family’s living, even though 25 to 30 percent of all married women were employed. Many of those working women, however, did so out of economic necessity, and when they had young children, they were often plagued by feelings of guilt for going against the prevailing ideology that they should be caring for them full time. By the end of the first decade of the 21st century, 59 percent of women aged 18 and older were in the labor force, while nearly half (47 percent) of the total labor force was composed of women. Sixty percent of mothers of infants and 73 percent of mothers whose children are aged 18 or younger are in the labor force. In families where both husband and wife are employed full time, women contribute almost half (47 percent) of their families’ income, on average. The breadwinner-homemaker system is no longer a viable economic model for most American families. Women’s labor is essential to the U.S. economy and individual families and households.

Constance L. Shehan
University of Florida

See Also: Cult of Domesticity; Family Consumption; Family Farms; Family Values; Homemaker; Marital Division of Labor.

Further Readings
Coontz, Stephanie. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books, 1992.
Jones, Jacqueline. Labor of Love, Labor of Sorrow: Black Women, Work, and the Family From Slavery to the Present, 2nd ed. New York: Basic Books, 2010.
Margolis, Maxine. Mothers and Such: Views of American Women and Why They Changed. Berkeley: University of California Press, 1984.
Margolis, Maxine. True to Her Nature: Changing Advice to American Women.
Prospect Heights, IL: Waveland Press, 2000.

Breadwinners

Breadwinners, under the breadwinner model of the family, are individuals who are the sole financial providers for their families. Traditionally, breadwinners have primarily been fathers. Although some may assume that the breadwinner/father has existed throughout history, this is not the case. Throughout the 1800s, the majority of families lived in rural homesteads where both mothers and fathers worked to create food, shelter, and other necessities for the family. As manufacturing jobs in the cities became more available, men moved to the cities for work, to be followed by their families. This initiated a move toward the breadwinner model; however, the breadwinner/father and stay-at-home mother model was most common after World War II. Not all families were able to afford this model, and many working-class and poor families had both working mothers and fathers (dual-earner families). Beginning in the 1970s, many industries became obsolete or moved overseas. This changed the breadwinner/father model because most families could not afford to have only one earner in the household. In 2005, the U.S. Census Bureau found that 5.5 million parents were stay-at-home parents; the vast majority (all but 100,000) were women. The number of stay-at-home fathers is increasing, though slowly, which means that mothers are the sole breadwinners in those households. This has caused some conservative pundits (e.g., Erick Erickson and Lou Dobbs) to express concern and outrage over these shifting family roles, based on the assumption that breadwinning mothers are unnatural and dangerous for children. Just as men’s roles in their families have changed, stereotypes and expectations regarding men in families have also changed. For example, most young adults believe that in order to be a good father, one must be a successful provider and a successful parent; this was not the case during the time that the breadwinner/father model was most common.
There is also evidence, however, that people do not believe that nonmarried fathers (divorced, never married) can be successful breadwinners, perhaps because of the assumption that these fathers are deadbeat dads.

Expectations About Fathers
Researchers have reported that fathers are generally portrayed as either overly positive (i.e., good
provider and involved) or extremely negative (i.e., lazy and unavailable). This has been referred to as the “good-dad, bad-dad” dichotomy. Although fatherhood is more complex than this dichotomous categorization suggests, positive fathering has been and continues to be focused around fathers’ abilities to financially provide for their families. Prior to the 1970s, men were not judged on how well they parented, but on how well they financially provided for their families. Fathers, at least middle-class fathers, who were not the sole financial providers of their families were considered inadequate as fathers and men. Although comparatively few fathers serve as sole financial providers for their families today, there is evidence that the cultural image of fathers still includes the breadwinning expectation. Beginning in the 1970s, a new type of father image emerged that replaced the sole breadwinner/provider ideal. This new image incorporated active parenting, and many termed this the “New Father.” The New Father was assumed to take on more responsibilities at home, in addition to the financial support that he provided to his family. In spite of media and social scientists’ predictions about, and attention to, New Fathers and expectations for men to more fully engage in fathering and childcare, the most consistent and popular expectation that young adults associate with fathers remains that of the breadwinner/provider. Over 70 percent of participants in one study rated married fathers highly on breadwinning/providing attributes; the only parenting item selected that frequently was “protector.” Also, this study reported that even negatively stereotyped fathers were associated with breadwinning characteristics. For example, fathers who were stereotyped in negative ways (e.g., fathers assumed to be uninvolved, uncaring, or irresponsible) were associated with breadwinning-related characteristics such as being busy, being stressed, and providing.
This suggests that even fathers who are negatively viewed as parents are still perceived to be involved or somewhat-skilled breadwinners/providers. Thus, while fathers may be gaining in New Father characteristics, the traditional expectation that fathers are breadwinners remains. Research has found that only fathers who are good breadwinners/providers are associated with good parenting characteristics. This suggests that prospective New Fathers must be more than just good parents. They must also be good breadwinners/providers.


Breadwinner Fathers' Influence on Family

Although breadwinner fathers are often able to financially provide for their families, there is evidence that the more hours fathers work, the weaker the bond between father and child. Fathers who spend more time at home, and who play a bigger role in parenting, are better able to communicate and maintain relationships with their children. Some studies have compared breadwinner fathers to breadwinner mothers. Breadwinner fathers spend far less time interacting with their families than breadwinner mothers. Mothers set aside special time when they are available to interact with their families; this does not seem to happen for breadwinning fathers. Breadwinning mothers also spend more time in household activities than breadwinning fathers. These trends suggest that breadwinning-mother households have a much more balanced system of parental roles than do breadwinning-father households. Some researchers find that many individuals within breadwinner households feel satisfaction with their roles, likely because the roles are fairly clear (i.e., the father works and manages the money; the mother raises the children).

Jessica Troilo
West Virginia University

See Also: Child Support; Child Support Enforcement; Cultural Stereotypes in Media; Deadbeat Dads; New Fatherhood.

Further Readings

Coughlin, Patrick and Jay C. Wade. "Masculinity Ideology, Income Disparity, and Romantic Relationship Quality Among Men With Higher Earning Female Partners." Sex Roles, v.67 (2012).

Lewis, Jane. "The Decline of the Male Breadwinner Model: Implications for Work and Care." Social Politics, v.8 (2001).

Meisenbach, Rebecca J. "The Female Breadwinner: Phenomenological Experience and Gendered Identity in Work/Family Spaces." Sex Roles, v.62 (2010).

Troilo, Jessica and Marilyn Coleman. "College Student Perceptions of the Content of Father Stereotypes." Journal of Marriage and Family, v.72 (2008).

Wimbley, Catherine. "Deadbeat Dads, Welfare Moms, and Uncle Sam: How the Child Support Recovery Act Punishes Single-Mother Families." Stanford Law Review, v.53 (2000).


Breastfeeding

Before the mid-19th century, women had few alternatives to breastfeeding. If a mother was unable to nurse, wet nurses were the next-best option. The glorification of femininity and motherhood in the 19th century accorded breastfeeding a status so sacred that by the time safe milk substitutes were produced in the 1850s, only families in desperate situations used them. However, the development of professional, science-based medicine in the latter part of the 1800s gave medical practitioners the authority to gainsay tradition and Victorian mores; by the Progressive Era, bottle feeding had gained widespread cultural support. It was not until around the time of feminism's second wave in the 1960s that American women began to reclaim breastfeeding as a political act and a way to connect with their newborns. After record numbers of women joined the workforce in the 1970s and 1980s, the question of whether a mother should breastfeed her infant became politically charged. Recent trends in intensive and attachment motherhood reveal that patterns of extended breastfeeding in modern society are not much different from those of the past: prescriptive advice about breastfeeding says just as much about class and racial relations in America as it does about current medical and cultural thinking about what is best for the baby.

Eighteenth and Nineteenth Centuries

Before the development of viable milk substitutes, infant survival depended upon the ability of a woman, usually the mother, to provide suitable nourishment from her breast. In colonial America, maternal breastfeeding was de rigueur, and the practice was supported both by Puritan clergy, who saw in the breast God's divine plan for infant nourishment, and by medical theorists, who noted the increased survival rate of babies fed on mothers' milk. During the American Revolution and early republic, motherhood and its attendant duties were also associated with civic virtue.
Families sought substitute nurses in cases where mothers died in childbirth, were incapacitated, or were unable to breastfeed. If friends or family members were not available, families hired wet nurses, typically women from economically marginal populations who took in babies after they had given birth. Ideally, a wet nurse would produce enough

milk to support her child (if living) and any she took in; in the worst-case scenario, none of the children received ample nourishment. American families were less likely than their western European contemporaries to employ wet nurses purely for convenience; nevertheless, most towns had active markets for wet nurses and their services. In the South, wet nursing crossed racial lines, with African American women frequently serving as wet nurses for white children.

Patterns changed in the 19th century with the cultural emphasis on bourgeois domesticity and its special reverence for motherhood. Self-sufficient, breastfeeding mothers came to represent not only Republican virtue, but also piety, submission, and all of the other Victorian-era virtues associated with the "angel in the house." As leaders of the home, mothers were responsible for the health and character of their children, and because it was generally believed that babies absorbed temperament and milk from their nurses, maternal breastfeeding was the epitome of virtuous motherhood. Sending an infant to a wet nurse was believed to be potentially hazardous. If a wet nurse was necessary, a hired nurse became part of a proper home's domestic service, leaving her own children, including infants, at home. From the employer's point of view, taking in a wet nurse ensured that the child in question received enough milk and was not exposed to the pernicious influences associated with the lower classes. It often meant, however, as Janet Golden writes in A Social History of Wet Nursing in America, "trading the life of a poor baby for that of a rich one" because wet nurses had to abandon their children in pursuit of steady employment. Thus, while breastfeeding was practiced by people of all races and classes in 19th-century America, the benefits were mostly accorded to the children whose racial and economic status afforded them consistent quantities of breast milk.
Twentieth Century

At the dawn of the new century, families routinely looked to medical science when faced with questions about infant care. The answers provided by Progressive Era medical professionals changed the course of infant feeding for nearly a century. Infant food began to be manufactured in 1856, but families only used these products in dire situations because, in addition to contravening ideological preferences,



artificial foods were initially regarded with suspicion and were frequently expensive. Both the development of pasteurization (in the 1890s) and the development of pediatrics as a medical specialty convinced mothers that scientists knew best. Clinical practices combined with the growing artificial food industry to render breastfeeding unfashionable and outdated. Physician-assisted births, which became common by the 1930s, generated practices that effectively prevented the early establishment of breastfeeding. Mothers remained in the hospital for a week or more, with their babies kept—and bottle fed—in the nursery. Many believed that breastfeeding permanently disfigured the mother; as late as 1957, renowned pediatrician Benjamin Spock warned his readers about the potential loss of physical beauty associated with nursing. Spock's Baby and Child Care, first published in 1946, formed the backbone of mid-century infant rearing practices. Advocating timed feedings, Spock counseled mothers to switch permanently to bottle feeding if anything—lack of sufficient milk, sore nipples, difficulty expressing milk, or extreme fatigue—complicated nursing for up to four days. Statistical data show that both the initiation and the duration of breastfeeding waned over the course of the 1950s and 1960s. By 1970, only 28 percent of new mothers initiated breastfeeding, and only 8 percent were still nursing when the baby reached 3 months of age.

Second Wave Feminism and Beyond

Sociological trends in breastfeeding changed direction once again as a result of feminism's second wave. While feminists of the suffrage era emphasized the similarities between men and women, and therefore their equality, feminists of the 1960s and 1970s took the opposite tack, emphasizing the unique talents of women. Childbearing and breastfeeding took center stage in political debates about the treatment of women by the medical establishment, the role of women in the workplace, and the best way to raise a child.
Some feminists sought to reclaim the female body and its processes, including childbirth and breastfeeding, from the largely male medical establishment. Women encouraged one another to get to know their bodies and to trust their desires and intuitions. This embodied epistemology emphasized nursing as a solely female, intimate, and yet political activity that only experienced mothers could


understand. Support groups and mother-to-mother educational groups formed to provide information about breastfeeding that many doctors would not. Ironically, the best-known lactation support community, La Leche League, was actually founded before the feminist movement, though the two groups came to share many goals. The league's book, The Womanly Art of Breastfeeding (1958), though not without its critics, became a standard for self-guided mothers, especially those whose mothers and mothers-in-law had not nursed their children. Meanwhile, the medical community responded with the development of certification in lactation consultancy. Many authors note the paradox that breastfeeding came back into fashion just as women began to make steady gains in workplace equality. By the end of the 20th century, increased numbers of women were choosing to initiate breastfeeding with their newborns, even if their careers complicated that choice in the ensuing months.

[Figure: Louis XIV with his nurse Longuet de la Giraudière. Before the development of baby formulas in the 20th century, a wet nurse was the only alternative to a mother breastfeeding her baby.]

The Family and Medical

Leave Act (FMLA) of 1993 allows for 12 weeks of unpaid, job-protected leave of absence for a new mother; however, not all workplaces must comply with the law, nor are all employees eligible. If a new mother establishes breastfeeding during the weeks following her baby's birth, she is soon faced with decisions about continuing the practice as she returns to work. While federal law dictates that a woman be allowed break time to express breast milk for up to one year after the birth of her child, it does not ensure that all workplaces have appropriately clean and private facilities in which to pump, or access to refrigeration for milk storage. Literature from the USDA's Women, Infants, and Children (WIC) program promotes breastfeeding as "free" food for baby, a tempting draw for the low-income mothers for whom WIC services exist. However, Linda Blum, author of At the Breast, and other experts note that breastfeeding is only free to those who can sacrifice the income from maternal employment or can afford the extra nutrition, pumps, and bottles necessary to support a comprehensive plan of nursing, pumping, and milk storage. Nevertheless, both critical and autobiographical treatments of motherhood at the beginning of the 21st century show that mothers—especially working mothers—feel an extraordinary amount of pressure to breastfeed.

Following the increasing cultural interest in human milk for human babies, the medical establishment turned its attention to discerning breastfeeding's potential benefits in the 1970s. Since then, research has shown that breast milk contributes to healthier babies. Both preterm and full-term babies fed with breast milk for up to six months have fewer ear infections, stronger lungs, healthier flora in their gastrointestinal systems, and stronger immune systems, to name a few proven benefits. Women who choose to breastfeed navigate a terrain fraught with mixed messages and counterintuitive consequences.
While many doctors, medical practitioners, and the U.S. government counsel that "breast is best," new mothers leave the hospital with formula samples provided by large international manufacturers. Commitment to breastfeeding limits not only the mother's ability to rejoin her career but also her ability to enjoy public spaces such as parks, restaurants, and shopping malls. Despite federal laws that protect a woman's right to breastfeed in public (specific protection varies by

state), cultural attitudes about the exposure of the female breast make even the most modest public display of nursing a social gamble.

One of the most socially divisive trends to affect motherhood since the late 20th century is that of intensive mothering. As defined by Sharon Hays, author of The Cultural Contradictions of Motherhood, intensive mothering is practiced by those who believe that an infant needs uninterrupted attention and protection in the first years of life. This style of child rearing has much in common with attachment parenting (as described by William and Martha Sears) and natural parenting, which combines child rearing with anticapitalist and (pseudo)scientific claims about how early humans might have raised their children. Each of these philosophies centers upon the child's free access to the mother's body for on-demand feeding and comfort. Intensive mothering is divisive because, as Chris Bobel describes in The Paradox of Natural Mothering, extended mother-child contact is only feasible in families where a single breadwinner (father) can support a stay-at-home spouse (mother). Extended breastfeeding depends on the mother's ability to be available to the child every few hours during the first few months of life, and almost as often for months after that. For this reason, the mothers who can provide extensive on-demand breast milk are likely to be middle- and upper-class whites, among other privileged groups. Taking into account the social trends associated with breastfeeding's recent surge in popularity sheds light on at least one conclusion: A mother's choice to breastfeed concerns a lot more than just her desire to do what is best for her baby.

Cornelia C. Lambert
University of Oklahoma

See Also: Attachment Parenting; Gender Roles; Maternity Leaves; Mothers in the Workforce; Myth of Motherhood; Wet Nursing.

Further Readings

Blum, Linda M. At the Breast: Ideologies of Breastfeeding and Motherhood in the Contemporary United States. Boston: Beacon Press, 1999.

Bobel, Chris. The Paradox of Natural Mothering. Philadelphia: Temple University Press, 2002.

Golden, Janet. A Social History of Wet Nursing in America: From Breast to Bottle. New York: Cambridge University Press, 1996.

Hays, Sharon. The Cultural Contradictions of Motherhood. New Haven: Yale University Press, 1996.

Thulier, D. "Breastfeeding in America: A History of Influencing Factors." Journal of Human Lactation: Official Journal of International Lactation Consultant Association, v.25/1 (2009).

Bronfenbrenner, Urie

Urie Bronfenbrenner is considered a leading scholar in child development, parenting, and human ecology. His ecological systems theory, later renamed bioecological systems theory, was instrumental in understanding the impact that families have on children's development. Born in Moscow, Russia, in 1917, Bronfenbrenner moved to the United States when he was 6 years old; his father worked at the New York State Institution for the Mentally Retarded as a clinical pathologist and research director. Bronfenbrenner received his bachelor's degree with a double major in psychology and music from Cornell University in 1938. He received a master's degree in developmental psychology from Harvard University, and a doctoral degree from the University of Michigan in 1942. Upon receiving his Ph.D., he entered the U.S. Army as a psychologist in the Air Corps, and later transferred to the U.S. Army Medical Corps. Bronfenbrenner joined the faculty of Cornell University in 1948, and remained there throughout his career. He married and had six children. At the time of his death in 2005, he had 13 grandchildren and one great-granddaughter.

(Bio)Ecological Systems Theory

Bronfenbrenner's impact on the social sciences stems from his writings on the ecological systems theory, which views development in a multifaceted contextual manner. Presenting a nested structure of systems, he proposed that each had simultaneous influences on one's development. His framework originally provided for four systems (the microsystem, mesosystem, exosystem, and macrosystem); he later added a fifth, the chronosystem, to gain a


complete picture of the developmental experience of individuals. Bronfenbrenner's work on bioecological theory is said to have stimulated collaborations across social and behavioral science disciplines that previously did not exist. Because of this influence, Bronfenbrenner is considered a pioneer in a number of disciplines.

Bronfenbrenner's professional focus was not exclusively on developing his theory, however. He was also highly involved in applying the theory to the development of significant policies and programs in the United States. He served as a consultant to U.S. presidents on domestic policy and educational matters and was influential in the creation of the Head Start program. In 1970, Bronfenbrenner served as chair of the White House Conference on Children. He additionally served as an advisory board member for the National Council for Families and Television and the National Resource Center for Children in Poverty. These areas of service stand as examples of his ability to translate theory to practice, a skill for which he received much recognition. In addition to receiving several honorary doctorate degrees, he was nominated for the National Medal of Science in 1989, and received the G. Stanley Hall Award from the American Psychological Association (APA) in 1985. In 1996, the APA renamed its award for Lifetime Contribution to Developmental Psychology in the Service of Science and Society the Bronfenbrenner Award, providing further evidence of his influence in the social sciences. Over the course of his lifetime, Bronfenbrenner published more than 300 papers and 14 books. His 1979 book The Ecology of Human Development was considered groundbreaking and established his prominence in the field. One of his final publications, Making Human Beings Human (2004), collects a series of papers written as the bioecological theory evolved and serves as an important contribution to the field.
Implications for Families

Bronfenbrenner's theory has been used as a framework for intervention programming and policy decisions about children and their families. During his long career as a theorist and researcher, Bronfenbrenner emphasized the practical, applied, and public policy implications of his work. One of his legacies is that practitioners, policymakers, and family scientists take into consideration the


complex and interconnected social systems that affect human development. Using Bronfenbrenner's theoretical lens, family advocates frequently work across multiple levels of social systems. Researchers have also been influenced by Bronfenbrenner's theory because they realized that they must account for systemic factors at multiple levels to gain more complete assessments of a given developmental phenomenon. Over the last three decades of the 20th century and continuing to the present, studies based on Bronfenbrenner's bioecological systems theory have become common, with researchers investigating such phenomena as daycare effects on children, the effects of growing up in impoverished neighborhoods and communities, and how health care is delivered to children and families. Bronfenbrenner's influential ideas helped move scientists and practitioners interested in child development out of laboratories and offices and into homes, schools, neighborhoods, and communities.

Tara Newman
Stephen F. Austin State University

See Also: Day Care; Ecological Theory; Head Start.

Further Readings

Bronfenbrenner, Urie. Ecological Systems Theory. London: Jessica Kingsley Publishers, 1992.

Bronfenbrenner, Urie, ed. Making Human Beings Human: Bioecological Perspectives on Human Development. Thousand Oaks, CA: Sage, 2004.

Bronfenbrenner, Urie and Stephen Ceci. "Nature–Nurture Reconceptualized in Developmental Perspective: A Bioecological Model." Psychological Review, v.101/4 (1994).

Brown v. Board of Education

The U.S. Supreme Court's landmark decision Brown v. Board of Education (1954) marked a turning point for the civil rights movement. In Brown, the Supreme Court unanimously reversed the ruling of Plessy v. Ferguson (1896), which allowed separate but equal public educational facilities. Brown

held that racial segregation in public schools was unconstitutional because it violated the Equal Protection Clause of the Fourteenth Amendment to the U.S. Constitution. The case did not, however, succeed in completely desegregating public schools, and the practice informally continued beyond the late 1960s. The Brown decision of May 17, 1954, is venerated as one of the most prominent rulings for equality in U.S. history. The outcome demonstrated the resilience and persistence of the African American community and pushed the United States toward achieving freedom and equality for every race and ethnicity. Brown marked the beginning of the modern civil rights movement.

In the early 1950s, African Americans in the United States continued to face political and legal hardships, evidenced by the inequality that pervaded the public school system. At the time, all school districts in the South were required to have both white and black schools to serve their communities (in other states, educational separation was either illegal or not practiced). These schools supposedly adhered to the doctrine of "separate but equal," but in practice, black schools were frequently inferior to white schools in terms of the resources they had to draw on and the advantages they could provide to students. This helped perpetuate racism. From its ratification in 1868 until 1954, the Equal Protection Clause had not been interpreted to make racial segregation unconstitutional, and segregation was common in many public places. With regard to public schools, the Supreme Court interpreted the clause according to the ruling in Plessy v. Ferguson, which allowed schools to be "separate but equal." Plessy ultimately set a precedent compelling public schools to provide equal opportunities for blacks and whites while authorizing separate facilities for each race.
But Plessy did little to uphold the ideals of the Declaration of Independence and instead deprived blacks of due process of law. Plessy's ruling remained the standard doctrine concerning racial segregation in the United States until the Supreme Court repudiated it in 1954 with the Brown decision. In 1951, Linda Brown, a third grader, was attending a black elementary school in Topeka, Kansas. Her father, Oliver Brown, made several unsuccessful attempts to transfer Linda to a white



elementary school that was closer to home. After the white school's principal denied Linda enrollment, Oliver Brown sought help from the National Association for the Advancement of Colored People (NAACP). Linda's rejection caught the attention of other black families in similar situations, and many came forward, joining the fight for equality. Similar cases entered the courts in Virginia, Delaware, and South Carolina. When Brown decided to file suit, 13 different families representing 20 children from Topeka joined in his fight against the Board of Education. The NAACP decided to make Brown the lead plaintiff because of the timing of his daughter's segregation. The case was filed as a class action suit, with the NAACP sponsoring the plaintiffs and seeking an injunction to end the racial segregation of black students.

The case was heard in the U.S. District Court for the District of Kansas in June 1951. After each party's arguments were heard, the request for an injunction was denied, and the court ruled in favor of the Board of Education. The court relied on the ruling of Plessy, and agreed with the defendant that segregation in public schools was not directly harming black children. The defense argued that as long as the education was equal in the separate classrooms, black children would still have the opportunity to achieve the same goals as white children. The District Court's ruling continued to spur many controversial interpretations of the Equal Protection Clause. Brown appealed to the Supreme Court of the United States, and in 1952, the case was granted a writ of certiorari, allowing it to be heard before the Supreme Court. Joined by other plaintiffs from Kansas, Brown and the NAACP brought their case before the Supreme Court on December 9, 1952, for the first time.
Chief counsel Thurgood Marshall, who would later be appointed the first black justice of the Supreme Court, represented the plaintiffs in their fight to eliminate segregation in public schools. The case took approximately two years to decide. On May 17, 1954, Chief Justice Earl Warren delivered the court's unanimous decision in Brown's favor. The court stated that the public schools' racial segregation of children was a direct violation of the Equal Protection Clause. The court's opinion explained that the prior ruling of Plessy,


and the archaic interpretation of an amendment formulated in the 1860s, did not fully represent modern equality in the United States for blacks. Chief Justice Warren spoke of present-day equality and the importance of a child's public education in the 20th century. He stated that education was a major component of a citizen's life and that the lack of a decent education could hinder a person's socialization and ability to succeed. Ultimately, the court declared that public education was a guaranteed right that must be uniformly given to both blacks and whites.

The decision sparked controversy across many segregated states. Resistance movements were prevalent in Kansas, Virginia, Arkansas, and Kentucky. In 1955, following the decision in Brown, a case that came to be known as "Brown II" ordered states to comply with the ruling in Brown with "all deliberate speed." However, other public facilities would not become entirely desegregated until the Civil Rights Act of 1964, and the effect of Brown on school integration fluctuated. From the 1960s to the 1980s, the number of black students attending white schools significantly rose. By the early 1990s, integration levels quickly dropped when federal court sanctions were lifted, allowing schools to return to racial segregation, and when whites fled from cities to suburbs, creating underfunded segregated neighborhoods across the country. The notion of "separate and unequal" continues to be seen today in black and Latino communities. On a larger scale, Brown aided the movement toward desegregation by placing pressure on the political and judicial systems, and its legacy captured the attention of the entire nation. The revival of the Equal Protection Clause extended beyond the New Deal's ideals of democracy and beyond race: Brown's impact on society aided other movements, including those for women's rights, gay rights, disability rights, and religious and other minority rights.
Brown’s effect on American society arguably has demonstrated that the American judicial and political systems, along with the values embedded in the U.S. Constitution, do bolster the advancement of personal rights and freedom over time. Patrick Koetzle Georgetown University Law Center


See Also: African American Families; Civil Rights Act of 1964; Civil Rights Movement.

Further Readings

Klarman, Michael J. Brown v. Board of Education and the Civil Rights Movement. New York: Oxford University Press, 2007.

Martin, Waldo E. Brown v. Board of Education: A Brief History With Documents. New York: Bedford/St. Martin's, 1998.

Patterson, James T. Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy. New York: Oxford University Press, 2002.

Budgeting

How families make decisions regarding their use of money and resources is reflected in the household budget. A budget is a plan for how to make and spend money to remain solvent while providing for the necessities of life. It is a financial tool that accounts for how bills will be paid and money will be saved over a period of time. Typical household budgets for American families are based on income received in exchange for work. Most families in the United States gain income from employment; other sources of income include agriculture, business ownership, investments, retirement pensions, savings, and public assistance. The amount of income needed to budget successfully depends on the size of one's family, an understanding of essential needs, and numerous other factors related to socioeconomic standards of living and income expectations. Ideally, household budgets are designed to meet daily living needs while mitigating the risk of catastrophic loss. This means that a well-designed budget plans for both expenses and savings. Some families and individuals are very good at budgeting, but just as many people have difficulty. Typically, most families budget for housing, transportation, food, clothing, energy, and savings. How much they budget varies across households and over time.

Historical Trends in Household Budgets

The Bureau of Labor Statistics began tracking American household budgets and expenditures in

the late 1890s. Originally similar to the census survey, the Bureau of Labor Statistics tracked and published its findings at 10-year intervals until technology improved and allowed for more frequent data collection, analysis, and publishing. The most recent information about household budgets comes from the Bureau of Labor Statistics for 2011.

Since 1900, household incomes and expenditures have risen exponentially. In 1901, the average household in the United States had an annual income of $750, with household expenses of $769. Typical families budgeted 23 percent of their income on housing, 42.5 percent on food, and 14 percent on clothing. Few families owned their homes; 80 percent of all families rented. As incomes increased through the 1950s, discretionary income rose. Families allocated 27.2 percent of their income to housing, 29.7 percent to food, 11.5 percent to clothing, and 31.6 percent to other items. Though housing remained a fairly stable percentage of the budget, the increase in "other items" reflects the growing changes to the American lifestyle that required households to own automobiles and homes to have indoor plumbing, electricity, and telephones (none of which were standard in 1900). For most American families, incomes increased dramatically, from roughly $1,518 per year in the 1920s to $4,237 per year in the mid-1950s. Modest gains in household incomes continued from the 1950s through the 1970s. This period also saw increases in home ownership; less than 20 percent of families owned their homes in 1930, but by 1970, this figure had jumped to 62.9 percent. As income slowly rose, household budgeted expenses also increased because of inflation and new expenses that emerged each year. Whereas family expenses had historically centered on housing, food, and clothing, newer expenses involved transportation, communication, and utilities.
By the 1980s, incomes had jumped 70 percent over 1960 levels to a median of $21,237, with average yearly expenditures at $21,975. In addition to housing costs, credit debt began emerging as a household budget item. This reflected the growing trend of families buying goods and services on credit and paying for them over time, sometimes at steep interest rates. By 2000, household incomes had doubled six times since 1900, to an average of $50,302 a year.



These vast increases are attributed to improvements in salaries, changes in the labor force, advances in education and technology, and a greater prevalence of two-income households. As income continued to increase, household budget expenditures also increased, to $40,748. What appears to be less spending and more saving in the household budget may actually reflect the shifting of expenses into long-term debt and greater public awareness of the need for retirement savings. During the first decade of the 21st century, income plateaued and at times dropped slightly. Household expenses remained relatively stable, but many families experienced increased expenses for healthcare and energy. Housing expenses accounted for about 32.8 percent of household budgets, and 68.9 percent of Americans owned their homes. Though there appears to be an increase in the percentage of the household budget used for housing, the actual expense of purchasing or renting a home is relatively stable, with much of the increase attributed to the hidden expenses of fuels, utilities, and household operation supplies such as lawnmowers, vacuum cleaners, furniture, and large appliances. Household Budgets There are many budget models that individuals and families use to manage household expenses. One model suggests that household budgets be organized around housing expenses. As a general guide, no more than 25 to 33 percent of household income should be allocated to housing. Thirty percent of income should be allocated to household expenses, including food, utilities, clothing, gifts, and repair and maintenance of the home. Discretionary expenses should be kept within 10 percent of the budget, and an additional 10 percent of income should be devoted to savings. All recurring monthly debt, including mortgage and car payments, should be kept at or below 40 percent of monthly income. 
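As a rough illustration, the guideline above can be expressed as a simple set of checks. This is a sketch, not a prescriptive tool; the function name and the choice of the upper 33 percent housing bound are illustrative assumptions, and all figures are monthly dollars.

```python
# Sketch of the household budget guideline described above (illustrative):
# housing <= 33%, household expenses <= 30%, discretionary <= 10%,
# savings >= 10%, and recurring debt <= 40% of gross monthly income.
def check_budget(income, housing, household, discretionary, savings, debt):
    """Return a dict of True/False guideline checks for a monthly budget."""
    share = lambda amount: amount / income  # fraction of monthly income
    return {
        "housing_ok": share(housing) <= 0.33,
        "household_ok": share(household) <= 0.30,
        "discretionary_ok": share(discretionary) <= 0.10,
        "savings_ok": share(savings) >= 0.10,
        "debt_ok": share(debt) <= 0.40,
    }

# A hypothetical household earning $3,000 a month with $900 housing,
# $850 household expenses, $250 discretionary, $300 savings, and
# $1,150 recurring debt stays within every guideline.
print(check_budget(3000, 900, 850, 250, 300, 1150))
```

Raising the housing figure above one-third of income (say, to $1,200 on the same $3,000) would flip the housing check to False while leaving the others unchanged.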
Debt-to-Income Ratio The debt-to-income (DTI) ratio is the percentage of gross income committed to debt payments; it is a figure used to gauge the financial well-being of a household. The higher the DTI, the less economically stable the household budget. A DTI of 36 to 40 percent
is considered a workable budget. However, those with DTI ratios over 40 percent may have problems meeting household needs with available resources. It is not uncommon for households with high DTI ratios to operate beyond their means by using credit cards to support or sustain a higher standard of living than they can reasonably afford. The DTI ratio is calculated by taking the minimum due payments on recurring monthly debt and dividing them by monthly gross income. For example: mortgage, $700; car payment, $350; personal loan, $125 per month; minimum credit card payments, $75—this adds up to debt of $1,250 per month. If a person's monthly gross income is $3,000, the DTI is calculated by dividing $1,250 by $3,000 to obtain a rate of approximately 41.7 percent. This example illustrates a higher than desired DTI ratio, but one that could be supported for a limited time. Often when families carry too much credit debt, they sacrifice saving money or reduce other expenditures, which can diminish the overall well-being of family members. Sometimes, in an attempt to reduce expenses, a family will cut out discretionary activities that help members relax and re-energize. Though acceptable for short periods, an unusually extended period of austerity can affect physical, mental, and emotional health. Deborah Catherine Bailey Central Michigan University See Also: Credit Cards; Dual-Income Couples/Dual-Earner Families; Earned Income Tax Credit; Standard of Living. Further Readings Himmelweit, Susan, Christina Santos, Almudena Sevilla, and Catherine Sofer. “Sharing of Resources Within the Family and the Economics of Household Decision Making.” Journal of Marriage and Family, v.75 (2013). U.S. Bureau of Labor Statistics. 100 Years of U.S. Consumer Spending: Data for the Nation, New York City, and Boston. http://www.bls.gov/opub/uscs (Accessed September 2013). U.S. Department of Commerce. Historical Census of Housing and Home Ownership Rates. 
http://www.census.gov/historic/ownrate.html (Accessed September 2013).
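The DTI arithmetic described in this entry reduces to a one-line calculation: the sum of minimum monthly debt payments divided by gross monthly income, expressed as a percentage. A minimal sketch (the function name is illustrative, not from the entry):

```python
# Debt-to-income (DTI) ratio: minimum monthly debt payments divided by
# gross monthly income, expressed as a percentage.
def dti_ratio(min_monthly_payments, gross_monthly_income):
    return sum(min_monthly_payments) / gross_monthly_income * 100

# The entry's example: mortgage $700, car payment $350, personal loan
# $125, and credit card minimums $75, against $3,000 gross monthly income.
ratio = dti_ratio([700, 350, 125, 75], 3000)
print(f"{ratio:.1f}%")  # -> 41.7%
```

The result, roughly 41.7 percent, falls just above the 40 percent threshold the entry describes as workable.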
Bulimia For over a century, families have been implicated in both the development and maintenance of eating illnesses. Bulimia nervosa (BN), an eating illness identified by clinicians in the 1970s, is a serious disorder with both psychological and physical consequences that affects males and females of all ages. Recent studies support many factors, inside and outside the family, that correlate with the development of BN. Diagnostic features of BN include (1) recurrent episodes of binge eating (e.g., eating within a two-hour period more food than most individuals would eat under similar circumstances, and a loss of control over eating), and (2) recurring compensatory behaviors to prevent weight gain (e.g., vomiting, use of diuretics). The behaviors must occur on average at least once per week for three months, and not within the context of an episode of anorexia nervosa (a related disorder that typically does not include binge eating), and an individual's self-perception must be unduly influenced by weight and shape. The one-year prevalence of BN diagnoses is 1 to 1.5 percent for young adults. BN affects 10 times as many women as men, and the peak age of onset is adolescence to young adulthood. BN can endure for several years, with periods of remission. Over time, symptoms tend to abate naturally, though treatment is associated with a better prognosis. Mortality rates for BN are difficult to determine, in part because of difficulties in diagnosing BN as distinct from other disorders. Most evidence supports that mortality as a result of complications related to BN is likely to be only slightly or not at all elevated. Bulimia and the Family Many precipitating factors have been identified for developing eating illnesses in adolescence or adulthood. 
Those factors include things like genes involved in weight regulation, neuropsychological profiles, gestational difficulties, early feeding or gastrointestinal problems, comorbid mood- and anxiety-related syndromes, and early trauma, like sexual abuse. Many of those factors, however, predict other eating illnesses in addition to BN. For example, young women with BN report higher levels of conflict in the family and a lack of family cohesion, though mixed findings and a lack of comparisons to other psychiatric groups puts into question whether those factors are general to psychological difficulties or specific to
BN. It is more challenging to isolate specific factors that predict specific bulimic symptoms. Researchers now theorize that general individual factors interact with more specific contextual factors to result in pathology that emerges as eating disorders. Those personal factors can include things like a negative self-evaluation, being overanxious, and weight or shape concerns. Several factors that are codetermined by genetics and the environment, such as childhood obesity and early physical maturation, are related to the risk of BN. Family factors that are specific to BN include having a family with dysfunctional rules and attitudes about food and eating, for example, encouraging a child to “clean her plate” rather than encouraging a strategy of moderation. In addition, individuals with problematic eating are more likely than others to report a parent who is critical of their weight and shape, to have less contact with their parents, and to exhibit more hostility toward their parents. Being teased or criticized for weight can be particularly detrimental to how a child views himself or herself, precipitating feelings of shame and low self-esteem that can lead to problem eating. There is some evidence that parents who experience obesity, or desire to lose weight themselves, are more likely to encourage children to diet. Parental disorders like maternal depression and paternal substance use are also related to the development of BN. Studies of the development of eating pathology within families are unable to account for all of the precipitating factors, often explaining significantly less than half of symptom variability. Family Prevention of BN Some authors have proposed that in order to prevent the onset of an eating disorder, families can adopt healthy attitudes about body image and eating and engage in more family meals, which are correlated with greater family cohesion and better problem solving. 
Parents might also consider discussing changes related to puberty with girls, becoming educated as a family about nutritional needs, discussing media pressures for thinness, and helping youth attend to cues of hunger and satiety to determine when to start and stop eating. Family Role in Treatment Many treatments exist for BN, though studies of efficacious treatments involving the family are small in number. Evidence shows that family therapies work
for some, but not all, young people with BN. Family members tend to be more involved in adolescent and young-adult treatments. Evidence supports that both family-based therapy (FBT) for BN and cognitive-behavioral therapy (CBT), including family-facilitated or self-guided CBT, work for some portion of young adults and may be more efficacious at long-term follow-up than individual therapies. Family-based treatments can promote emotional communication and improve emotional literacy within families. For younger patients, family involvement may reduce attrition as all family members learn to better negotiate differences and learn about specific attitudes (e.g., rigid rules for eating) that relate to problem eating. For adults with BN, CBT is the gold standard of treatment. Evidence supports that individuals with comorbid psychiatric diagnoses (e.g., depression) or parents who have pathology (e.g., a father with substance dependence, or a mother with depression) are likely to have worse long-term outcomes associated with BN. As with most disorders, there is no one-size-fits-all treatment, so consideration of individual and family resources and available treatment options (e.g., providers, research studies, and treatment teams) is critical. Preliminary studies suggest that caregivers of individuals with BN experience more difficulty and distress than those who care for individuals with other psychiatric conditions. Family members of a person with BN should educate themselves on the illness, seek help for reducing stresses associated with caring for an ill family member, and support the struggling family member while not blaming themselves for the disease. Families can consult the Academy of Eating Disorders Web site to find additional informational materials and references to qualified professionals. Further research on the context within which BN develops, and the contextual and individual factors that interact, will hopefully provide additional avenues for treatment. 
Shannon Casey California School of Professional Psychology Alliant International University Danielle Colborn Stanford University See Also: Anorexia; Family Counseling; Parenting; Mental Disorders.
Further Readings Academy of Eating Disorders. http://www.aedweb.org (Accessed November 2013). Couturier, J., M. Kimber, and P. Szatmari. “Efficacy of Family-Based Treatment for Adolescents With Eating Disorders: A Systematic Review and Meta-Analysis.” International Journal of Eating Disorders, v.46/1 (2013). Konstantellou, A., M. Campbell, and I. Eisler. “The Family Context: Cause, Effect or Resource.” In A Collaborative Approach to Eating Disorders, J. Alexander and J. Treasure, eds. New York: Routledge/Taylor & Francis, 2012. Le Grange, D. and J. Lock. Treating Bulimia in Adolescents: A Family-Based Approach. New York: Guilford Press, 2009. Treasure, J., G. Smith, and A. Crane. Skills-Based Learning for Caring for a Loved One With an Eating Disorder: The New Maudsley Method. New York: Routledge/Taylor & Francis, 2007.

Bullying The behaviors that are considered bullying today have existed for centuries, and examples of bullies in popular culture have been prevalent, but academic research on this topic was relatively rare prior to the pioneering work of Dan Olweus in the 1970s. Research on bullying continued in the 1980s and 1990s, before high-profile school shootings and teen suicides in the late 1990s made it the subject of increased popular and scholarly attention. Since that time, experts in a wide variety of disciplines have written thousands of books and academic journal articles on the topic. It is estimated that 30 percent of young people, totaling some 5.7 million youth, experience bullying each year, either as bullies, victims, or bully-victims. A national report on indicators of school crime and safety found that 19 percent of victims said that they had been made fun of, 15 percent said that they were the subject of rumors, and 9 percent said that they were pushed, shoved, tripped, or spit on. Definitions and measures vary across research studies, but bullying is typically defined as repeated exposure to intentionally negative actions, such as verbal, physical, or emotional abuse, by one or
more individuals, in which there is an imbalance of power. While researchers have typically focused on bullying in schools, recent research has also focused on bullying in other settings, such as on the Internet (cyberbullying) and bullying among family members or coworkers. Interactions with parents and siblings can be defined as bullying, even if they are not commonly perceived that way. A family’s socioeconomic background can also influence the likelihood of victimization, with adolescents from less affluent families reporting a higher prevalence of bullying. The majority of research on bullying has been conducted by psychologists, though researchers in a wide variety of disciplines have made contributions to the understanding of its causes and the responses of victims, schools, and lawmakers. Although some dismiss bullying as a normal part of growing up, researchers note that its consequences reveal the importance of viewing it as a social problem. For example, the National Education Association has estimated that 160,000 students skip school each weekday to avoid bullies, and the United States Secret Service found that 71 percent of school shooters between 1974 and 2000 had been the target of a bully. Victims of bullying can also suffer from social isolation, academic issues, problems with physical health, and mental health problems such as depression, anxiety, and suicidal ideation. Problems are not limited to victims, however, because those who bully can also experience issues with school adjustment, mental health, and integration. Negative effects for those who were bullies, victims, and both have been found to last into adulthood. Bullying also affects the families of victims, although these effects are not as heavily researched as those on bullies and victims. Possible negative effects include tension in, and withdrawal from, family relationships. 
Parents of victims may also adopt more controlling behaviors in an attempt to limit their children’s exposure to bullying. Types of Bullying Movies and television shows depict bullies beating up victims and taking their lunch money, but bullying extends far beyond these stereotypical depictions. Bullying is categorized as a particular form of aggression that can be either direct or indirect. Direct bullying includes physical (e.g., pushing, shoving, or kicking) and verbal (e.g., teasing or name calling) attacks that take place face-to-face, whereas indirect
bullying includes actions such as exclusion, spreading rumors, and name calling behind one’s back, either in person or online. Behaviors such as exclusion and spreading rumors have also been called relational aggression because they use interpersonal relationships as a means to harm the victim. Each of these forms of bullying can be used in various circumstances, and researchers have focused on gay bashing, racial harassment, “slut shaming,” and sexual harassment, in addition to general bullying. Each of these forms can occur in person or online, where perpetrators are provided a level of protection by a lack of adult oversight and the ability to remain anonymous. Gay bashing is one of the most prevalent forms of bullying and includes attacks against people who identify as gay, as well as male heterosexuals who have a perceived lack of masculine qualities. A number of school shooters, including Eric Harris and Dylan Klebold of Columbine High School in Colorado, were victims of gay bashing. Similarly, slut shaming occurs when female victims are called names for deviating from traditional gender expectations, which can include engaging in sexual behaviors, dressing in sexually provocative ways, using birth control, or even being raped or sexually assaulted. Sociologist Jessie Klein notes that students in many schools may not use the term bullying to describe all of these forms of harassment, instead preferring labels such as “drama” that help situate them as a normal part of adolescent behavior. Other forms of aggression that conform to the definition of bullying have also been normalized and are less likely to be defined as problems. A number of interactions between adults and youth, for example, involve repeated attacks and an imbalance of power. These include some interactions with parents, coaches, and even teachers. Hazing is another example of a behavior that fits the definition of bullying but is typically not considered bullying by participants. 
Similarly, some interactions between siblings can be considered bullying. Although these interactions are not typically defined as bullying, they are often related to bullying in school. For example, 57 percent of school bullies and 77 percent of bully victims also bullied their siblings. Bullying Roles While the general public may think about bullying in terms of bullies and victims, researchers recognize
that there is a continuum of bullying roles, including bullies, bully-victims, victims, and bystanders. Researchers have studied the effects of age, sex, and race on the distribution of these roles. Typically, bullying increases during elementary school, peaks during early adolescence, and declines during high school. In addition to older bullies who attack younger victims, there is a considerable amount of bullying between same-age peers. It is also estimated that 25 percent of violence in schools is committed by girls, though there are mixed results in terms of the types of bullying that girls are most likely to use. In some studies, relational aggression has been shown to be more prevalent among girls than boys, with boys typically engaging in more direct physical and verbal abuse. In other studies, boys were more directly aggressive, and girls were more prosocial, but there was no difference in relational aggression. There are also mixed results regarding bullying and race, with some studies finding that African American students are more likely to be seen as aggressive by their peers and other studies finding that race had no effect on the frequency of bullying or being bullied. Psychologists Dorothy Espelage and Susan Swearer suggest that the division of bullying across racial or ethnic groups may be less important than the ways that the racial dynamics of a classroom, school, or community affect the content of the bullying. Beyond age, sex, and race, psychologists have identified anger and a positive attitude toward aggression as factors that are associated with bullying. Anxiety has been associated with bullies, victims, and bully-victims, and victims of bullying have been found to experience depression. Past research suggested that aggression is the result of difficulty solving social problems but more recent research suggests that some bullies have a keen understanding of social situations and use this to their advantage. 
bullying. Researchers have found that 85 percent of bullying incidents are observed by others, with an average of four peers viewing each. Some of these bystanders may take on roles as sidekicks or reinforcers, who actively support the bullying behavior through assistance, laughter, or other positive feedback; others may adopt outsider roles and passively observe the behavior. Bystanders actively attempt to help victims in only 12 percent of cases. Bullying and Social Status The roles that bystanders take on are even more important in light of research examining the connection between social status and bullying. Social scientists describe social status as inexpansible, which means that there is a limited amount of it. Not all students can have high status, so one student’s status must decrease for another’s to increase. Bullying is one way that students attempt to decrease the status of others and increase their status in the eyes of their peers. Supporting this idea, the peak of bullying coincides with transitions between elementary school, middle school, and high school, when students are often brought together with those from other schools and must renegotiate their social status positions. Regarding the struggle for status in schools, Robert Faris and Diane Felmlee found that aggression increases as one moves up the social status hierarchy. More popular students bully others more than less popular students bully others. Because bullying is used in an attempt to increase one’s status, those at the top of the status hierarchy are an exception and engage in comparatively little bullying. Other research explores the effects of status hierarchies on insults, finding that insults are often directed at others with equal or lower status, but are rarely directed at those with higher status. Adding to the challenge of negotiating the social status hierarchy in middle school is the relative lack of extracurricular activities compared to high school. 
The activities with the highest profiles are typically those that reinforce stereotypical gender norms, especially for males. Stereotypical norms of masculinity for male adolescents have been linked to the reinforcement of heterosexism and homophobia among students. These norms of masculinity have also been found to contribute to school shootings, as low-status victims of bullying and gay bashing turn to violence in order to assert
their masculinity. This association between violence and power is reinforced by the media, peers, and family members. Anti-Bullying Policies In the wake of numerous school shootings and teen suicides, many schools have adopted anti-bullying policies, sometimes under state mandate. In contrast to European approaches, which tend to focus on providing counselors for students and addressing the needs of the entire school community, policies in the United States tend to focus on punishment, security, and control. Zero-tolerance policies are one example of this tendency. These policies automatically and severely punish students who violate school rules. While these policies treat bullying seriously, they have been criticized for leading to unnecessarily harsh punishments, such as an 8-year-old student who was suspended for pointing a chicken finger at another student and saying, “bang.” The implementation of anti-bullying policies does not necessarily make students feel safe; two-thirds of students report that their schools respond poorly to bullying. These statistics are supported by the fact that negative behaviors in schools are often ignored or dismissed by adults. Even when students report high levels of violence in a school, school personnel do not think that there is a big problem with violence. This was evident in Jessie Klein's study of school shootings, in which she concluded that adults in the lives of shooters often ignored warning signs and referred to bullying as normal behavior. Because the ways that schools deal with bullying are often ineffective, students are reluctant to report incidents to adults or intervene on behalf of others. Those who do so often risk greater abuse. One reason that anti-bullying interventions in American schools are ineffective is that they often fail to address the underlying causes of bullying. Programs intended to strengthen students'
resiliency in the face of abuse, for example, imply that students should expect to be abused, without attempting to curb this abuse. Researchers have advocated prevention approaches closer to those used in Europe, targeting bullies, victims, schools, families, and communities. In contrast to zero-tolerance approaches, programs focused on the school culture, sometimes called “whole school” approaches, aim to create positive school and classroom climates in order to create a sense of community among students. By changing the norms of interaction between students, programs that improve the school culture are also thought to be more successful at reducing cyberbullying, which provides additional challenges to punitive policies because it typically takes place outside of school and involves anonymity. Brent Harger Albright College See Also: Adolescence; Childhood in America; Delinquency; Education, Elementary; Education, Middle School; School Shootings/Mass Shootings. Further Readings Espelage, Dorothy L. and Susan M. Swearer. “Research on School Bullying and Victimization: What Have We Learned and Where Do We Go From Here?” School Psychology Review, v.32/3 (2003). Faris, Robert and Diane Felmlee. “Status Struggles: Network Centrality and Gender Segregation in Same- and Cross-Gender Aggression.” American Sociological Review, v.76/1 (2011). Klein, Jessie. The Bully Society: School Shootings and the Crisis of Bullying in America's Schools. New York: New York University Press, 2012. Milner, Murray. Freaks, Geeks, and Cool Kids: American Teenagers, Schools, and the Culture of Consumption. New York: Routledge, 2004. Olweus, Dan. Bullying at School: What We Know and What We Can Do. Oxford: Blackwell Publishing, 1993.

C Camp Fire Girls The Camp Fire Girls, now simply Camp Fire, is a national youth organization established in 1910. It originally trained girls ages 12 to 20 for modern womanhood by blending homemaking and service with new opportunities in careers, outdoor activities, and civic life. The Camp Fire Girls added Blue Birds, a program for younger girls, in 1913. Its founders, a group of progressive reformers and educators influenced by psychologist G. Stanley Hall's theories on child and adolescent development, designed the organization to bridge the perceived gap between family life, where gender socialization was historically centered, and public institutions such as schools, where youth education was increasingly taking place. The Camp Fire program adapted to a variety of cultural and political shifts throughout the 20th century. The Camp Fire Girls was the most popular girls' organization in the United States until 1930, when the Girl Scouts surpassed it. During the 1950s, a time when youth organizations experienced their greatest popularity and institutional support from schools and civic leaders, the Camp Fire Girls boasted a membership of nearly half a million girls. Along with other early 20th-century youth organizations such as the Boy Scouts and Girl Scouts, the Camp Fire Girls aimed to socialize youth for gender-specific citizenship. Organizers sought a feminine alternative to the Boy Scouts, one that they believed
the Girl Scouts, with their masculine scouting experiences, failed to provide. The Camp Fire Girls adopted American Indian imagery to harness what founders Luther and Charlotte Gulick believed were timeless feminine roles. Gulick selected the Camp Fire name to invoke women's ancient responsibility of tending the family and community hearth. Through the selection of American Indian–style names and by decorating Indian–style ceremonial gowns, girls could tap what he called a female “race history.” Over time, the Camp Fire Girls decreased their appropriation of Indian imagery, but it remains a distinctive feature of the organization. The Camp Fire Girls revered motherhood and domesticity, simultaneously portraying these roles as scientific and quantifiable and romanticizing them through sentimental descriptions. Camp Fire activities included earning honor beads, awards for mastering modest tasks in categories ranging from handicrafts to business and outdoor skills. Girls used these honor beads to decorate vests and ceremonial gowns in symbolic pictures representing their values. Although the introduction of camping and the exploration of careers subtly pushed at the boundaries of accepted female decorum for the organization, outdoor activities were considered as much a preparation for healthy motherhood as a pathway to exploration or athleticism. In the organization's early years, Camp Fire Girls appreciated the outdoor adventures, even though
A group of Camp Fire girls gather near the home of early organizers Luther Gulick and his wife Charlotte Vedder Gulick on Sebago Lake, South Casco, Maine, between circa 1915 and circa 1920. Today, the group is called Camp Fire and is inclusive, open to youth of any race, creed, religion, or gender.

camping excursions took up at most a few weeks of a Camp Fire girl's time each year. Camps offered a safe space where girls could try new things and form lasting friendships. At the same time, Camp Fire Girls earned most of their honors in activities related to homemaking. A girl might be rewarded for learning to bake three different kinds of cakes, rather than for setting up a shelter and sleeping outdoors. The Camp Fire Girls' founders envisioned clubs of adolescent girls led by young women who had just finished college. While this was a difficult model to sustain, even in its earliest years, by the late 1950s the overwhelming majority of leaders were mothers with daughters in the organization. This shift reflected the decade's emphasis on the mother's constant
availability to her children, even as it reflected women’s connection to their communities through civic involvement outside the home. In 1910, the organization adopted a policy of inclusion, accepting members regardless of race, ethnicity, class, religion, and disability. However, the Camp Fire Girls membership was predominantly white and middle class, and most girls participated in homogenous groups tied to their schools, churches, or synagogues. Organization officials did not directly challenge segregation, but on the heels of World War II, the Camp Fire Girls promoted antiprejudice instruction, published multicultural images, and urged tolerance. Girls’ citizenship responsibilities following World War II were broadened to include an international perspective. Girls cultivated friendships with those from other cultures through pen pal programs, provided international aid through food drives, and learned about the United Nations and world affairs. Camp Fire’s leadership kept the gender-specific goals of the organization in mind and reminded girls that service to the world extended their service to their homes and communities. The mid-20th century presented other challenges, but until the second wave feminist movement, the Camp Fire Girls emphasized homemaking as the center of girls’ worlds. As World War II drew to a close, Americans worried that a shortage of men would force girls to find careers instead of husbands. As those fears gave way to the realities of a marriage and baby boom, the Camp Fire Girls taught young women that homemaking would remain the first priority for the vast majority of them but that they could add civic responsibilities and feminine careers such as nursing to this primary identity. In the 1970s, youth organizations faced dwindling interest among their traditional audiences. Camp Fire Girls stayed relevant to American youth by increasing efforts to reach out to minority communities and by becoming a coeducational organization. 
In doing so, the leaders adopted the mainstream liberal feminist goal of promoting equal partnerships between boys and girls. In 2012, Camp Fire served a nearly equal number of boys and girls, providing 30 million program hours to over 300,000 youth and 70,000 families in 31 states.
Jennifer Helgren
University of the Pacific

See Also: Boy Scouts; Gender Roles; Girl Scouts; Hall, G. Stanley.

Further Readings
Buckler, Helen, et al. Wo-He-Lo: The Story of Camp Fire Girls, 1910–1960. New York: Holt, Rinehart and Winston, 1961.
Helgren, Jennifer. "'Homemaker Can Include the World': Internationalism and the Post–World War II Camp Fire Girls." In Girlhood: A Global History, Jennifer Helgren and Colleen Vasconcellos, eds. New Brunswick, NJ: Rutgers University Press, 2010.
Miller, Susan A. Growing Girls: The Natural Origins of Girls' Organizations in America. New Brunswick, NJ: Rutgers University Press, 2007.

Caregiver Burden
Caregiver burden, also known as caregiver strain or caregiver stress, is a high level of stress that may be experienced by people who are the primary caregivers for another person (usually a family member) with an illness or disability. Caregiving is a multidimensional construct that includes physical, emotional, social, and financial issues. Burden typically refers to the management of tasks; stress refers to the strain felt by the caregiver. Measurement of burden is difficult and based on individual perceptions.
The term caregiver burden was first used around 1985 in the work of Robert Maiden, and was also often used by Steven Zarit as research into caring for the elderly increased. In the 1970s, gerontological researchers were already examining caregiver stress, and the area became a focus of research that tried to address the needs of the caregiver. Though the term is relatively new, the burden of care has always existed. It was not always defined as a burden, however, but as part of the expectation that families would care for their own members. Family has long provided the primary care for the aging population.
The physical challenges of caregiving can be very demanding. Caregivers may develop physical health problems due to the strain. Some research has indicated that those providing long-term care have a lower life expectancy and suffer a reduction in their quality of life because of advancing health problems. In addition to their roles as caregivers, these


individuals must also maintain their regular duties with regard to work, housework, and child-rearing.
The emotional challenges of caregiving include the change in relationship between the caregiver and the care receiver. In a relationship where a child is the caregiver for an aging parent, for example, the traditional roles may be reversed. The parent may now have to rely on the child, and the child may find herself in the unfamiliar, and perhaps not entirely welcome, position of making decisions for the parent and managing everything from business matters to hygiene for the parent. As a patient suffers increasing cognitive issues or continues to decline, the caregiver's relationship with other family members may change, particularly if the caregiver perceives that others do not value his or her contributions, or that others are not willing to assist as needed.
Social issues related to caregiver burden can include the stress of combining a job, housework, and regular family obligations on top of caregiver duties. Dealing with other family members' expectations related to caregiving is also part of the social issue. Individual cultural backgrounds are significant in dealing with social issues. In cultures where extended caregiving is an expectation of family life, the caregiver may experience less caregiver stress.
Caregivers may also suffer financial burdens because of the costs of providing for the patient's personal needs, living expenses, medications, and medical care. This may be compounded by the caregiver's loss of income from reduced employment in order to fulfill the caregiver role. Financial burdens may also include the cost of hiring professional caregivers in order to allow the primary caregiver to do other things.
The effects of the demands on caregivers can greatly vary depending on multiple factors.
Those who care for individuals with cognitive issues face higher rates of burden and are more likely to need assistance for their depression and anxiety. Those who care for patients with incontinence also report more burden because of the increased need for physical care.
Caregiver burden is viewed as the extent to which caregivers perceive their emotional burdens as a result of caring for the patient. This includes how they view their emotional health, physical health, social life, and financial status. The assessment of burden has become a challenge because of cultural, ethical, religious, and other personal values that


may influence the perception and meanings of burden, as well as its consequences. Defining caregiver burden requires determining which components of the concept to measure, deciding whether to use objective or subjective measures, and defining the unit of measure for primary caregivers, primary or secondary caregivers, and family. Multiple instruments have been developed to evaluate the level of caregiver burden, but researchers have not agreed on a unified approach because there is no clear consensus on what burden means, given that it is based on the caregiver's perceptions. However, S. Zarit and colleagues noted several predictors of caregiver burden, including the caregiver's response to caregiving and to the patient's symptoms, the social support available to the caregiver, the quality of the relationship between patient and caregiver before the onset of disease, and the severity of the patient's symptoms.
The Montgomery Burden Scale is a two-dimensional measure of subjective burden that uses a social context approach. The variables in this examination of caregiver burden include objective descriptors, sentiment variables, and external variables such as the system's culture and community resources. Vitaliano developed the screen for caregiver burden (SCB), in which burden is defined as the biopsychosocial response to stressors. It is an appraised distress in response to the caregiver experience, and the degree of distress induced by challenging events. Burden can be measured as exposure to stressors plus vulnerability, divided by psychological resources plus social resources.
Kosberg and Cairl's cost of care index was developed to analyze burden. Their belief is that there are many variables related to caregiver burden that can be classified into six categories: caregiver characteristics, caregiver formal support, caregiver informal support, caregiver function, consequences of care, and patient functioning.
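Vitaliano's verbal definition can be written symbolically; the following is only an illustrative sketch (the notation is not taken from the published scale):

```latex
% Burden rises with stressor exposure and vulnerability,
% and falls as psychological and social resources grow.
\[
  \text{Burden} \;=\;
  \frac{\text{Exposure to stressors} + \text{Vulnerability}}
       {\text{Psychological resources} + \text{Social resources}}
\]
```

The ratio form captures the model's intuition: two caregivers facing identical stressors can experience very different levels of burden if their coping resources differ.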
The caregiver burden inventory (CBI), developed by Novak and Guest, is a multidimensional instrument that measures five factors of caregiving: time-dependence burden, developmental burden, physical burden, social burden, and emotional burden.
Caregivers have often been described as the "hidden patient" because of the problems they can suffer as the result of high levels of stress. To reduce

the level of burden, caregivers can take advantage of several resources. Social support services can help reduce the feeling of burden, as can becoming educated about the patient's diagnosis. Caregiver skills and the management of behavioral issues can directly affect the level of burden. Implementing coping strategies, utilizing time management techniques, and relying on stress relief techniques have been found successful. Caregivers who use problem-focused strategies, such as confronting issues and seeking information, tend to have less burnout. Support groups are also a popular resource for easing the isolation and loneliness that sometimes plague caregivers, but many caregivers have trouble finding time for them. Caregivers suffering a high level of burden that goes untreated are at increased risk of committing elder abuse.
Janice Kay Purk
Mansfield University

See Also: Caring for the Elderly; Elder Abuse; National Center on Elder Abuse; Nursing Homes.

Further Readings
Chou, Kuei-Ru, Hsin Chu, Chu-Li Tseng, and Ru-Band Lu. "The Measurement of Caregiver Burden." Journal of Medical Science, v.23/2 (2003).
Savundranayagam, Marie Y., Rhonda J. V. Montgomery, and Karl Kosloski. "A Dimensional Analysis of Caregiver Burden Among Spouses and Adult Children." Gerontologist, v.51/3 (2011).
Zarit, Steven, Karen Reever, and Julie Bach-Peterson. "Relatives of the Impaired Elderly: Correlates of Feelings of Burden." Gerontologist, v.20 (1980).

Caring for the Elderly
In the United States, about 12 million people need some assistance with daily living. Only about 9 percent of adults between 65 and 69 years old need assistance, but 43 percent of those older than 85 need help with daily tasks. These activities of daily living (ADLs) can include eating, bathing, dressing, getting to and from the bathroom, getting in and out of bed, and walking. For others, care needs may be minor, and include help with shopping,



financial management, transportation, light housekeeping, taking medication, and running errands; these tasks are called instrumental activities of daily living (IADLs). Children are most likely to provide care for their aging parents, followed by the patient's spouse. These family members and friends are considered informal caregivers. An informal caregiver is an unpaid individual involved in assisting others with ADLs and IADLs. As of the 2010s, 29 percent of the U.S. population—65.7 million people—care for someone who is ill, disabled, or aged. Formal caregivers are paid workers who provide care in one's home or in a daycare, residential, or other facility.

History
Families have always been the mainstay of caring for the elderly in the United States, performing the role of informal caregiving in the past as they do today; one difference is that the average duration of caregiving was shorter in the past. The earliest institutional forms of care were county homes, poorhouses, poor farms, or almshouses. These were supported through public taxes. By the mid-1800s, more specialized "homes for the aged" arose. These homes were established by religious or ethnic organizations for members of their subgroups, and from them traditional nursing homes developed. These early nursing homes were board and care facilities, often located in private homes.
As the social environment changed in the early 20th century, care for older adults also changed: society devalued the elderly in the transition from agriculture to industry, the extended family contracted, and there was a transition to a wage economy. Other formal caregiving also began in the early 1800s in the form of visiting nurse programs. Health care insurance providers also offered nursing care at home in the early 1900s to support older adults. With the introduction of social insurance, there was a growth in types of formal caregivers and services.
Informal Caregiving
Spouses provide a significant part of caregiving for older adults and are usually involved in caregiving for a longer period of time than others. The average age of spousal caregivers is 75 years old, and both sexes provide equal amounts of care. Women are more likely to handle the daily tasks of care and hire assistance with outdoor household tasks, whereas men are more likely to hire help for assistance with


personal care and indoor household tasks. Men have been found to take on caregiving roles with a more positive approach to caring during the first year of care. Women have been found to look at their later years as a time for personal opportunity and growth, and may at first resent returning to a caregiving role. The most stressful caregiving for a spouse occurs when the one being cared for has significant personality changes, such as with Alzheimer's disease, Parkinson's disease, or a cerebrovascular accident (CVA, or stroke). Research on spousal caregiving has also found positive aspects related to the process, including increased closeness in the relationship.
In situations where the elderly are divorced or have lost spouses, the burden of care often falls to children and other relatives. Daughters tend to be the primary caregivers, followed by daughters-in-law. There are cultural differences in who provides care, but overall, children carry the largest burden of care. While caregivers are found at most ages, the average age of caregivers is between 50 and 64 years. Daughters, sisters, and daughters-in-law consistently provide more hours of care than their male counterparts. Sons devote less time to caregiving and rely more on help from their spouses; however, many men feel obligated to care for their parents.
Additionally, many women in the workforce have less time available for caregiving than in previous generations. Women who work and are also caregivers for older adults experience significant stress and work-versus-family conflicts. Working caregivers tend to employ professional caregivers and handle their additional responsibilities in several ways. Multiple studies have examined working caregivers and found that many take time off without pay. Those with lower-paying jobs and less education were more likely to be primary caregivers. Caregiving for the elderly has been linked to higher job absenteeism. This absenteeism costs the U.S.
economy an estimated $25.2 billion in productivity each year. Two-thirds of caregivers are no longer working, and about one-third are employed full time in addition to their caregiving duties. Those who are not working are usually older themselves; this may allow them more time for caregiving, but it also places many of them in the older adult age group. This has created an older adult population caring for even older elderly parents, which raises concerns for the caregivers' own physical well-being. Those caring for people


over the age of 65 are, on average, 63 years old, and about one-third report being in fair to poor health. Those caring for the oldest population provide more hours in the role.
Caregiver burden is also part of the caring process. Burden refers to the management of tasks; stress refers to the strain felt by the caregiver. The degree of stress felt by caregivers partly depends on the coping skills that they have developed to deal with other life events. Contrary to what one might expect, women who are not employed outside the home report the highest level of stress. This may be related to their increased age and dwindling physical abilities, or it may be related to the increased demands of caring for someone full time. One of the most disturbing concerns related to stress is an increased level of elder abuse among caregivers.
The majority of caregivers (approximately 72 percent) live within 20 minutes of the care receiver's home. Those who provide caregiving assistance are more likely than the general population to rate their level of physical strain as high. They are also more likely to have lower incomes and to report that their physical health has deteriorated.
Research on caregiving now focuses on the entire family system. Caregiving affects not only the emotional well-being of the caregiver but also other family relationships. The relationship between the caregiver and the elderly parent can take on many forms. Older adult children, especially daughters, are more likely to feel stress when the care receiver is demanding, critical, and unappreciative of their efforts. Children are particularly distressed when they feel that no matter what they do, it will never be enough. Relationship tension may arise between the primary caregiver and the siblings and spouses who serve as supportive caregivers. In such cases, caregivers' relationships with their siblings can become adversarial.
Although sibling conflict can be a normal process, it can be detrimental to the caregiver's mental health. Sibling support, on the other hand, can reduce the caregiver's perceived sense of burden and lead to a higher quality of caregiving.
The caregiving process may also have a negative effect on the caregiver's marital relationship. The role of caregiving may increase stress between husbands and wives, who now have less time for each other. Women may be too worn out performing caregiving duties to spend quality time with their spouse, or they may worry about whether they are meeting

the demands of their marital relationships. The positive side for those in a marriage is that caregivers may be effective and thus gain the esteem of others, which benefits the marital relationship. Spousal support can also provide a release of tension through the spouse's interaction with the care receiver, as well as with the caregiver.
Grandchildren may also become caregivers when their parents are not available. In such cases, they may face changes in their relationship with their grandparent. The grandparent's focus shifts to being a care receiver, which leaves less time for the traditional grandparent–grandchild relationship. Studies have found that this arrangement can create resentment for both the grandchildren and the care receiver.

Formal Caregiving for the Elderly
Many frail elderly patients who might otherwise need to leave their homes are able to remain in them with access to community home health services. The most common types of community-based services include personal care such as bathing, feeding, dressing, grooming, housekeeping, and cleaning. Home health services can also help with preparation of meals, laundry, shopping, transportation to medical appointments, and paying bills, as well as case management services. There has been an increasing demand for an expanding array of options for home care and community-based services as the U.S. population ages. Limited support for these services is provided through private insurance, as well as Medicare and Medicaid. Much of this care must be privately paid for, increasing the burden on the older adult and family members.
Support for caring for the elderly can be included in private long-term care insurance policies. Those with enough funds may have been able to afford long-term care insurance, which has risen considerably in cost since its beginning in the mid-1980s.
Most long-term care insurance policies cover custodial care, which includes meals and help with daily activities such as bathing and dressing. Employers are increasingly including long-term care insurance as an option.
Caring for an older adult may also require institutional care, such as assisted living or nursing home placement, as a long-term care option or as a last resort. Although nursing homes provide adequate and in some cases exceptional care for the older adult, quality care continues to be a problem




in many facilities. One-third of the nation's nursing homes operated at a substandard level, according to a report by the U.S. Government Accountability Office. Problems include high staff turnover in long-term care settings. About 91 percent of nursing home aides are women. They are disproportionately members of minority groups, and their average yearly income is just slightly higher than minimum wage. Other issues within nursing home care include residents' adjustment to nursing home life; the daily life of a nursing home often includes repetitive daily routines that lack stimulation. Issues also arise over patient abuse; many elderly are subjected to verbal or physical abuse. This abuse typically does not stem from intentional behavior but rather results from staff who are overworked and underpaid.
Families of those in institutions are still involved in caregiving and face caregiver stress. Some added stress may be caused by factors unrelated to the nursing home itself, such as travel to and from the nursing home, giving up other activities to visit, or additional costs not covered by the basic nursing home fees. Residents of nursing homes who have regular visits from family are actually found to receive higher levels of care, because the staff knows that family members will be visiting.
The long-term care system in the United States consists of family care, home care, and nursing home care. At each level, there exists the possibility that unmet needs will result in family struggles. These continued struggles are complicated by a shortage of supportive services.
Janice Kay Purk
Mansfield University

See Also: Assisted Living; Caregiver Burden; Elder Abuse; Medicaid; Medicare; Nursing Homes.

Further Readings
Edwards, Douglas. "Caring for Today's Elderly—And Preparing for Tomorrow's." Behavioral Healthcare, v.26/2 (2006).
Family Caregiver Alliance: National Center on Caregiving. http://www.caregiver.org/caregiver/jsp/home.jsp (Accessed December 3, 2013).
Fort Cowles, Lois Anne. "Social Work in Nursing Homes." In Social Work in the Health Field: A Care Perspective. London: Haworth Press, 2003.


Grindel-Waggoner, Mary. "Home Care: A History of Caring, a Future of Challenges." Nursing, v.8/2 (1999).
Singleton, Judy. "Women Caring for Elderly Family Members: Shaping Non-Traditional Work and Family Initiatives." Journal of Comparative Family Studies, v.31/3 (2000).

Catholicism
The Catholic Church was founded on the teachings and works of Jesus Christ, who Catholics believe is the Son of God. Peter, one of the disciples of Jesus, was given the responsibility to share Jesus's teachings and to lead believers: "You are Peter, and on this rock I will build my church" (Matthew 16:18). Early Christians were also Jews (although the two religions became incompatible over time), and Catholicism has its roots in Judaism. Through the work of missionaries and the expansion of political empires, the Catholic Church grew throughout history. Today, Catholicism's presence, through churches, hospitals, or schools, can be found in most countries in the world, and the church has over 1 billion members worldwide.
The United States currently has approximately 70 million Catholics, making Catholicism the largest Christian denomination in the country. The Catholic Church encompasses a number of rites and churches, often tied to particular ethnic communities (e.g., the Ukrainian Greek Catholic Church), and members vary in their adherence to its teachings and practices. Thus, Catholics are a large and diverse group, both worldwide and in the United States. Catholicism is interesting to history and family scholars because it has influenced U.S. culture, society, history, and family life. The focus of this entry is on the intersection of Catholicism and the American family. As with most religions, the Catholic Church is very much involved in family life because its teachings, practices, and institution are relational in nature.

The Holy Family
Catholics believe that the Holy Family, composed of Jesus, his human mother Mary, and her husband Joseph, can be the blueprint for family life. In the words of Pope Leo XIII, "[God] established the


Holy Family in order that all Christians in whatever walk of life or situation might have a reason and an incentive to practice every virtue, provided they fix their gaze on the Holy Family." Jesus, Mary, and Joseph, in their individual lives and in their relationships with each other, provide Catholics an example of living the church's teachings, both privately (in the home) and publicly (in the community). Catholics are encouraged to model their lives and families on this example.

Marriage, Intermarriage, and Divorce
Catholics believe that marriage and child-rearing are a vocation, a path toward holiness and living out Jesus's teachings. Thus, entering into marriage is entering into a lifelong holy relationship with one's spouse and God. It is typically witnessed by extended family and the community, who pledge to support the relationship. Spouses vow to uphold

certain responsibilities, including loving and serving each other and God, and raising children in the Catholic faith. Because of the holy nature of marriage, the church as an institution requires all couples to undergo some type of counseling or education prior to their union.
The marriage rate for Catholics is slightly higher than for the U.S. population in general. Intermarriage rates are approximately 22 percent, which is higher than for Mormons and Hindus but lower than for Jews and Protestants. This may be because of church teachings and its obligations to the community and home. Catholics are also somewhat less likely to divorce than the general population, perhaps because of the strong support of marriage and the stigma of divorce found in church teachings and the Catholic community. Because the marriage relationship is bound by and to God, it cannot be easily dissolved. The Catholic Church does not recognize

The Holy Family by Francois le Fond, part of a collection of religious paintings in Palestine. Joseph is known for the faith that he showed by marrying Mary after she miraculously conceived Jesus. Jesus, their firstborn son, learned carpentry from his earthly father Joseph, as was the norm during biblical times. The Holy Family of Jesus, Mary, and Joseph is the model for family life in the Catholic faith.



legal divorce. Instead, partners who wish to end their marriage in the eyes of the church must request an annulment, which invalidates the marriage. To be granted an annulment, partners must demonstrate insufficient judgment, psychological incapacitation, or unwillingness to fully enter into the marriage promises at the time of the marriage.
The Catholic Church frequently uses marriage metaphors to describe the nature of the relationship between God and the church. God is described as the groom and the church as a bride. Both partners are obligated to love each other, to be faithful to each other, and to maintain communication and communion with each other through prayer, service to others, and the Eucharist. This metaphor is also used for those who are ordained as priests and nuns; they are "married" to Christ, and they promise to uphold responsibilities similar to a spouse's, including obedience and fidelity.

Fertility, Contraception, and Abortion
Historically, Catholic families in the United States had high rates of fertility. This rate has decreased in recent decades such that there is no longer a significant difference between Catholics and the general population in the number of children living at home. Because the church teaches that sex is holy and serves dual purposes—first, pleasure and intimacy with one's spouse; and second, to conceive children if God wills it—contraceptive use is prohibited. Many Catholic individuals, however, do not conform to this teaching.
Because Catholics believe that life begins at the moment of conception, the Catholic Church has a commitment to protect the lives of unborn children. The church has been active in organizing social protests and legal campaigns to end abortion practices.

The Catholic Community and the Family
Catholicism has been characterized as both a public and private religion because its teachings and rituals obligate Catholics to practice in both the family and social spheres.
This is in contrast to other religions, where main practices and rituals only take place in the home or at a place of worship. In the home, Catholic families are encouraged to pray individually and as a family and to complete home rituals (e.g., lighting an Advent wreath or fasting on Fridays). The family is also incorporated into the Catholic and larger community through attending Mass on


Sundays and on certain holy days (days that mark important events in Catholic history); serving the church through their time, talent, or treasure; serving those in need (e.g., many churches organize opportunities to serve others in the larger community, such as food banks or homeless shelters); and completing sacraments.
Sacraments are ceremonies that mark important events in the life of a Christian (e.g., baptism, first communion, confirmation, matrimony, holy orders, reconciliation, or anointing of the sick) and are moments of communion with God. All sacraments involve a celebration of an individual's new relationship with God, a ritual in which individuals directly experience God, and a showing of community support. In baptism, for example, when sins are erased and individuals are formally initiated into the church, individuals (or their parents) choose a set of godparents from the Catholic community, whose role is to support the family, provide a positive example of Catholic life, and help guide the individuals on their spiritual journeys. This provides the family a built-in connection to the Catholic community.
American Catholics as a whole are fairly involved in their communities. Most Catholics report attending Mass once a month or more (with almost half reporting attending once a week or more) and praying daily. Most report volunteering or having their child attend religious education classes. Despite this, Catholicism suffers the greatest net loss of members of any Christian denomination. About two-thirds of former Catholics who are now unaffiliated report leaving the church because they do not believe the teachings of the religion. These former Catholics were also less likely than those who remained Catholic to attend weekly Mass. Despite the loss, members are added through conversion and, primarily, through immigration, with Latino immigrants composing an increasingly large percentage of U.S. Catholics.
Catholicism in Society
The first Catholic missionaries settled in the area that would become the United States as early as the 16th century. The goal of the missionaries was to spread Catholicism; this took the form of teaching language, providing safety from enemies, and introducing new animals, plants, and agricultural practices. These had social and political implications


for Native Americans and the missionaries’ home countries, and early missions have been criticized for using conversion to disguise imperialism. Despite this criticism, the Catholic Church, with its teachings about social justice, has taken an active role in providing services to individuals and families at the community level throughout American history, particularly in the areas of education, health care, and social services. Catholicism was among the first religions in the United States to organize Sunday schools (used both for the religious and general education of children) and to provide education for both boys and girls. Catholic seminaries and convents served as hospitals and almshouses for the sick and poor. Over time, modern Catholic hospitals were developed alongside secular hospitals, and were expected to serve the larger community. Currently, there are over 600 hospitals in the Catholic hospital system, which makes it the largest nonprofit hospital system in the United States. It serves approximately 11 percent of U.S. patients. The Catholic educational system consists of over 200 colleges, universities, and seminaries, educating approximately 1 million students. There are currently over 7,000 parochial schools, which serve almost 2 million students annually. Finally, Catholic Charities, one of the world’s largest volunteer organizations, provides a myriad of services to individuals and families, such as adoption, therapy, food security, and immigration transition. Ashlie Lester University of Missouri See Also: Christianity; Christmas; Church of Jesus Christ of Latter-day Saints; Godparents; Islam; Judaism and Orthodox Judaism; Religious Holidays; Religiously Affiliated Schools; Saints Days. Further Readings Pew Forum on Religion and Public Life. U.S. Religious Landscape Survey. Washington, DC: Pew Research Center, 2008. “St. Joseph in Scripture.” Oblates of St. Joseph. http:// www.osjoseph.org/stjoseph/scripture (Accessed June 2013). U.S. 
Conference of Catholic Bishops (USCCB). Catechism of the Catholic Church. 2nd ed. Washington, DC: USCCB Communications, 2006.

CD-ROMs

The idea for the CD-ROM, a compact disc containing read-only memory, first surfaced in 1969 through the work of Dutch physicists Klaas Compaan and Piet Kramer. By 1972, a prototype was available. In the mid-1970s, the CD-ROM was first introduced in the United States, but it was not until 1982 that CD-ROMs began to be produced on a large scale. Sony of America introduced the first compact disc recording, Billy Joel’s album 52nd Street, that same year. Within two years, the technology for using CD-ROMs to store and retrieve data was introduced, although for many years floppy drives remained the standard feature on home computers. The first CD-Rs, which allowed users to record directly onto compact discs, were released in 1990, but it was not until about 1994 that they became common in the home computer market. The term edutainment was coined to describe the vast potential of the CD-ROM for combining education and entertainment across a wide range of activities. Just as Sony had transformed the music industry with compact discs, Virgin Games transformed video gaming with the introduction of The 7th Guest, the first CD-ROM interactive game, released in 1993. By the turn of the century, almost all computers sold in the United States contained CD-ROM drives, and many had CD-R drives with recording capabilities. Subsequently, the newer technology of DVDs, which offered movies, videos, and entire seasons of popular television shows, began replacing CD-ROM drives on most computers. Blu-ray, which offers high-definition quality and enhanced storage capacity, proved to have even greater potential than the DVD, and together these new technologies have made CD-ROMs virtually obsolete.

Background
In contrast to the floppy disc, which held only 1.44 megabytes of data, CD-ROMs could hold from 650 to 700 megabytes of data and had multimedia capabilities. CD-ROMs offered reliable storage that was not as vulnerable to damage as a floppy disc.
Because it could hold thousands of pages of text, a single CD-ROM could contain encyclopedias, dictionaries, cookbooks, years of weekly magazine articles, or the entire works of William Shakespeare. CD-ROMs cost little to produce and



were easily affordable for home use. The inclusion of CD-ROM drives on home computers offered unlimited creative opportunities for families. Desktop publishing allowed families to personalize and print greeting cards and calendars, design clothing, and gain access to a wide range of crafting skills. Once CD-RWs became available, families were able to create photo albums that could inexpensively be copied and distributed among family members. Families wholeheartedly embraced CD-ROMs, which offered hours of music and gaming enjoyment.

Reaching Families
By the 1990s, computer manufacturers were routinely including CD-ROM drives on both desktop and notebook computers. In 1993, approximately 8.8 million computers with CD-ROM drives were sold in the United States. Most of those were sold to businesses and home users. Schools rarely had more than one CD-ROM drive for the entire school, and poorer schools had none at all. In 1994, the sale of computers with CD-ROM drives skyrocketed to 57.1 million; and for the first time, U.S. families purchased more computers than televisions. That same year, the sale of software sold on CD-ROMs climbed to 22.2 million units, for total sales of $648 million. The following year, Microsoft introduced Windows 95, a user-friendly operating system that had wide appeal to American families. By the 2000s, families had also discovered the attraction of disc-based gaming systems, such as Xbox and PlayStation, which provided countless hours of entertainment, particularly to young males, who became the target demographic for video game developers. Computer CD-ROM drives were also able to play music compact discs with enhanced quality. At the same time, the DVD-ROM began to replace the standard CD-ROM on newer computers. Because the new drives were backward compatible, they could still play CD-ROMs. By the end of the 1990s, more than 200 million CD-ROMs and DVD-ROMs had been manufactured.
Learning Tools
Hailed as state-of-the-art learning tools with almost unlimited potential, CD-ROMs were embraced by a variety of disciplines. Once


computers with CD-ROM drives were available in schools, educators used CD-ROMs to teach young children how to read and to build the vocabulary of young readers. CD-ROM storybooks offered sound effects, pronunciation guides, and graphic animations, and they kept young readers involved on multiple levels. CD-ROMs proved to be well suited for many subjects, including math and science, particularly because they allowed students to proceed at their own pace. The medical professions developed CD-ROMs to help students learn how to diagnose disease, prevent disease, and assist patients in handling both diseases and chronic conditions. Psychologists used CD-ROMs to help individuals with behavioral problems learn coping skills.

Relegated to the Past
From the beginning, many computer manufacturers and programmers considered the CD-ROM a transitional technology, and they were right. The DVD-ROM replaced the CD-ROM in the late 1990s. The DVD-ROM has since been replaced by Blu-ray drives on many computers. The advent of Apple’s iTunes, the rise of Internet-based access systems, and the popularity of cloud computing have made CD-ROM drives on computers unnecessary in the second decade of the 21st century. Even though CD-ROMs are now considered outdated, CD-ROM technology was essential in expanding the understanding of technology in American homes and in making new technologies more accessible to family members of all ages during a brief time in the 1990s.

Elizabeth Rholetter Purdy
Independent Scholar

See Also: Books, Children’s; Genealogy and Family Trees; Music in the Family; Personal Computers in the Home; Video Games.

Further Readings
CD-ROM: Revolution Maker. Merton Grove, IL: COINT Reports, 1986.
Feldman, Anthony. CD-ROM. London: Blueprint, 1987.
Gillis, Anna Marie. “Delivering the New Goods—CD-ROMs.” BioScience, v. 45/9 (1995).
Herther, Nancy. “CD-ROM at 25.” Online, v. 34/6 (2010).


Sherman, Chris, ed. The CD-ROM Handbook. New York: McGraw-Hill, 1994.

Cell Phones

Cell phones, also known as mobile phones, were first demonstrated in 1973 by Martin Cooper of Motorola. In 1977, AT&T presented them to 2,000 Chicago customers, but it was 20 years before they were broadly marketed to the average consumer. The earliest version of cell phones provided only voice calls over a wireless system. Short messaging service (SMS) in the United States was first provided in 1996 by Omnipoint Communications, and later developed into multimedia messaging service (MMS). In 2007, Apple released the iPhone, the first wildly successful commercial smartphone (i.e., a phone that combines the talk and text features of a cell phone with Internet capability, an MP3 player for music, a camera, and other features) that operated over a wireless application protocol (WAP). The rise of the smartphone altered the digital landscape so that millions of people have the power to talk, text, take video, and search the Internet almost anywhere. Such services allow users to be in constant and instant communication with their family, friends, and colleagues regardless of their physical location. In 2013, according to a Pew Research Center study, 91 percent of adults in the United States owned a cell phone and 55 percent of adults owned a smartphone. The latest generation of cell phones supports complex multitouch input, gesture-based interaction, advanced soft keyboards, enhanced connectivity, and a great number of dedicated special-purpose applications. The cell phone’s convenience, along with its dramatically reduced manufacturing cost, has led to the rapidly increasing penetration rate of cell phones into the consumer market. More people now have cell phones than landline telephones or televisions. By 2012, there were more than 6 billion mobile subscribers across the globe, and over 300 million in the United States alone. The pervasiveness of this communication tool has created fundamental changes in people’s daily lives.

Because cell phones do not require a high degree of technical proficiency to use, they enable users to develop unique communication patterns that were not possible in previous eras. Compared to landline telephones, cell phones offer more mobility, flexibility, and freedom for users. Children can now call their parents to let them know where they are, and workers no longer need to use company phones for personal calls. Texting offers an unobtrusive alternative to phone calls, allowing people to check in with each other in spare moments and carry on conversations at a sporadic pace. The trend of Web-based cell phone usage in the United States is clear. Younger people and people living in households making less than $30,000 annually are increasing their mobile Web use at a particularly fast rate. Given that African American and Hispanic populations have traditionally been on the wrong side of the digital divide, their increasing rates of cell phone use promise to narrow this divide. A Pew study reported that in 2008, 19 percent of cell phone–using Americans accessed the Internet via phones for news, weather, sports, or other information. By 2013, that percentage had risen to 55 percent.

Cell Phones and American Family Life
Cell phones have the potential to enhance family ties because of their distinguishing characteristics. Cell phones are relatively inexpensive and portable, and have good battery life. This makes them generally more convenient than landline telephones and laptops. More importantly, a cell phone’s most basic feature is instant two-way voice communication between any locations with cellular service. Additionally, short messaging service (SMS), a means to send and receive up to 160-character text messages over a cell phone, offers a platform where messages can arrive automatically. SMS is the most frequently used communicative feature of cell phones; on average, U.S. users send 357 text messages, compared to 204 calls, per month.
A growing body of research examines the role of cell phones in family relationships. Explored areas include how the cell phone can help maintain the social presence of family members and strengthen the family unit, and how the cell phone is used by parents to monitor and supervise their children’s behavior. Another line of research identifies issues related to the usage of cell phones among family members,




such as over-reliance on cell phones, privacy issues, security concerns, bullying by text, emotional domestic violence, and blackmail on cell phones. The popular press features stories claiming that cell phones are making Americans more isolated and ruining their face-to-face relationships, as more people become addicted to obsessively checking their messages or searching the Web in the presence of others. In contrast, researchers report that this generation of American families is as tightly knit as the last generation because of the widespread use of cell phones. Instead of cell phones being primarily a tool to extend the workday, research indicates that individuals use their cell phones to maintain connections with family and friends, preventing work calls from invading their personal time and developing deeper contacts with loved ones. Family members use voice calls, text messages, and other forms of communication supported by cell phones to compensate for the stress of modern life or to deal with long separations. The ability to connect instantly, easily, and often is particularly useful for families with school-age children. By staying in touch throughout the day, families can enhance their solidarity and synchronize their schedules using cell phones.

Martin Cooper, the inventor of the cell phone, with the DynaTAC prototype from 1973. This photo was taken at the Taipei International Convention Center in 2007.
Cell Phones and Parent–Child Communications
In 2013, about 78 percent of American teens owned a cell phone, usually purchased by a parent; about half of these were smartphones with access to the Internet. One in four teens prefers using a cell phone to other devices, such as laptops, to surf the Internet. More than 50 percent of teen cell phone owners send more than 50 text messages a day. Text messaging has surpassed other communication activities, such as e-mailing, among American teens. However, voice calls remain the preferred way to stay in touch with friends and family members. Most American parents think it is reasonable for children in any age group to have their own cell phones to contact their parents. A survey conducted in 2009 showed that 70 percent of American teenagers talked on the phone with their parents at least once a day, primarily to ask permission for something, but also to obtain advice or support, as well as to share information and news with their parents. The frequent interaction between teens and their parents may improve their relationship, as well as the teens’ images of their parents. On the other hand, calls by parents to monitor their teenage children’s activities, supervise their homework, or tell them unpleasant news might lead to tension. Cell phones also allow for more spontaneous and frequent communication between partners or spouses, especially those in long-distance relationships. Cell phone technology fulfills the need to be connected by creating a facsimile of face-to-face communication through the voice and video features of smartphones. Cell phone use, especially when focused on building an intimate relationship, has the advantage of encouraging self-disclosure by allowing partners to share images, videos, and short messages.

Problems With Cell Phones
The term nomophobia, meaning fear of being out of mobile phone contact, was first introduced by British researchers in 2008, and is a portmanteau for


“no-mobile-phone phobia.” Americans between the ages of 8 and 18 spend on average 7 hours and 38 minutes a day on handheld devices and digital technology. Research conducted in 2012 shows that three out of five cell phone owners in the United States do not go more than one hour without checking their devices. Over-reliance on cell phones and other handheld electronic devices can impact interactive activities inside the home. A cell phone ringing at the dining table might interrupt a family meal, or teenage children may use cell phones to avoid face-to-face communication with their parents. When people enjoy the convenience and immediacy provided by cell phone technology, they may accept as inevitable that their personal space and time are consumed or occupied by such communications. As cell phones became smarter and more user-friendly, security issues emerged. The most discussed security issue related to cell phone use is privacy. Cell phone usage can be easily monitored, and exchanged information can be overheard by a third party, or messages can be hacked, forwarded, or read by those other than for whom they are intended. For young users, another safety issue relates to bullying, as youth increasingly use electronic means to harass others and disseminate false information far beyond the school grounds. A growing number of parents express concern about the risks of text bullying, as more and more U.S. teenage children report that they have been picked on or harassed via text messaging. In 2013, approximately one in five teens reported being victimized by text bullying. Some parents want to keep their young children away from cell phones to prevent such incidents. Meanwhile, a group of smartphone apps has arisen to assist parents in addressing text bullying on their child’s cell phone. The most popular anti-bullying apps include Bully Stop, which protects children from bullying texts, calls, and picture messages.
BullyBlock provides instant reporting features that allow the user to e-mail or text evidence of abusive behavior to parents, teachers, and law enforcement. Cell phones might help victims of domestic violence avoid potential attacks from their intimate partners by offering an additional way to contact police or friends. However, stalkers also use cell phones to follow domestic abuse victims, because many cell phone carriers offer location information to authorized users as a paid service. If stalkers have a bit of cellular expertise, they can track another user

without a carrier’s cooperation as long as the stalker has the target’s phone number. If a domestic violence survivor has left his or her partner, the abuser can continue to make harassing phone calls and send threatening text messages because most cell phone carriers do not provide service to block specified incoming telephone numbers.

Yuanxin Wang
Temple University

See Also: Adolescence; Bullying; Digital Divide; Domestic Violence; Internet; Primary Documents 2009; Telephones; Texting.

Further Readings
Brenner, Joanna. “Pew Internet: Mobile.” Pew Internet and American Life Project. http://pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx (Accessed December 2013).
Hjorth, Larissa, Jane Burgess, and Ingrid Richardson, eds. Studying Mobile Media: Cultural Technologies, Mobile Communication, and the iPhone. New York: Routledge, 2012.
Ling, Rich. The Mobile Connection: The Cell Phone’s Impact on Society. San Francisco: Elsevier, 2004.
Ling, Rich and Jonathan Donner. Mobile Phones and Mobile Communication. Malden, MA: Polity Press, 2009.

Center for Missing and Exploited Children

The National Center for Missing and Exploited Children (NCMEC) is devoted to finding missing children, especially those abducted by strangers or family members. The NCMEC is a public-private partnership that has many ties to federal, state, and local law enforcement. The center has raised awareness throughout the United States regarding child abduction, and has helped reunite many children with their parents.

History
In 1981, 6-year-old Adam Walsh disappeared from a shopping mall near his home in Hollywood, Florida,




and was subsequently murdered. At that time, there was no local or national system in place to help find missing children. Adam’s parents, John and Revé Walsh, founded the Adam Walsh Outreach Center for Missing Children to provide a national database for information on missing children, to aid law enforcement agencies in such cases, and to help victimized children and their families. In 1982, Congress passed the Missing Children’s Act. It established a section for missing and exploited children in the FBI crime database so that state, local, and federal law enforcement would have instantaneous access to information on missing children all over the country. In 1984, Congress enacted the Missing Children’s Assistance Act, and chartered the National Center for Missing and Exploited Children, which merged with the Adam Walsh Outreach Center in 1990.

Operations
The NCMEC is based in Alexandria, Virginia. It is a unique public-private partnership, funded partly by Congress and partly through the private sector. The NCMEC is authorized by federal law to perform 19 specific tasks, including operating a resource center that houses information on missing and exploited children, working with law enforcement to reduce the number of sexually exploited children, disseminating preventive literature, operating an identification program for victims of sexual exploitation, operating a 24-hour toll-free hotline (1-800-THE-LOST), acting on abductions as early in the process as possible to seek a quick return of the child, and operating a Web site (www.cybertipline.com) that allows a safe and efficient way to report suspicious Internet activity involving sexually exploited children. The NCMEC has developed many ways to notify the public of missing children, including pictures on milk cartons, posters, mailers, and most recently through the mass media. Current methods include AMBER Alerts, Project ALERT, and Team Adam.
An AMBER Alert (America’s Missing: Broadcast Emergency Response) is a national child abduction alert system that notifies law enforcement agencies and the general public as soon as a child is determined to be missing. Notifications are disseminated in multiple ways via a partnership between law enforcement and the media. Project ALERT is a group of retired law enforcement professionals who aid in missing child investigations.


Much like Project ALERT, Team Adam is a program within the NCMEC staffed by retired law enforcement professionals who are usually first responders in missing child cases. They provide technical assistance and make sure that new alerts reach the national network.

Statistics
The NCMEC states that approximately 800,000 children are reported missing annually; the majority of these are runaways. About 25 percent of these cases involve abductions by a family member, and 7 percent involve abductions by a non–family member. Less than 1 percent of these cases involve a stranger or acquaintance who abducts the child with the intention of keeping the child permanently, demanding ransom, or committing murder. In such cases, however, the NCMEC states that the first three hours are the most crucial; 76 percent of abducted children who are murdered are killed within three hours of their abduction. The NCMEC reports it has found over 188,000 missing children since its founding. Since AMBER Alerts began in 1996, 642 children have been found as a direct result of its notifications. The Cyber Tipline has received 1.7 million reports since its launch in 1998. The Child Victim Identification Program has analyzed over 80 million child pornographic images in hopes of identifying and rescuing the children.

International Child Abduction
In 1988, the United States joined the Hague Convention on the Civil Aspects of International Child Abduction, a treaty aimed at parents or other family members in child custody disputes who unilaterally relocate a child from one country to another. From 1995 to 2008, the NCMEC served as the central authority in dealing with the U.S. government’s responsibilities in “incoming” Hague cases; that is, for children unilaterally brought to the United States from other countries.
The NCMEC’s international division received applications for the return of children held in foreign countries, and worked with law enforcement to locate children whose location was unknown or unconfirmed. It assisted and counseled the applicants, negotiated with the abductors for a voluntary return, and recruited a network of qualified lawyers who could represent the applicants in the places where the children had been taken. Once the cases went to court, the NCMEC worked with


the applicants, lawyers, and counterparts in other countries, and sent information to judges on how to decide the cases in accordance with the treaty and federal law. For “outgoing” cases, that is, abductions from the United States to other countries, the NCMEC provides the same support, triage, and networking that it provides to all other parents of missing children. The NCMEC remains, along with local law enforcement agencies, courts, and the U.S. Department of State, one of the first calls that any parent should immediately make when a child has disappeared.

John Crouch
Independent Scholar
Tiffany Ashton
American University

See Also: AMBER Alert; Child Abuse; Child Safety; Internet Pornography, Child; Runaways.

Further Readings
Hague Convention of 25 October 1980 on the Civil Aspects of International Child Abduction. http://www.hcch.net/index_en.php?act=text.display&tid=21 (Accessed March 2014).
National Center for Missing and Exploited Children. http://www.missingkids.com (Accessed March 2014).
Office of Juvenile Justice and Delinquency Prevention. http://www.ojjdp.gov/pubs/childismissing/ch1.html (Accessed March 2014).

Central and South American Immigrant Families

Latinos (the term Latino is used here to refer to both men and women who have Spanish cultural heritage) have a long history of presence in the United States. Indeed, Latinos composed a large segment of the population of North America long before the arrival of European immigrants. The majority of Latinos in the United States consists of immigrants from Mexico. For example, according to the Pew Research Center’s Hispanic Trends Project, in 1860, Mexicans made up 81 percent of the

Hispanics living in the United States. However, the Migration Policy Institute reported that by World War II, Latino Americans represented less than 10 percent of U.S. immigrants. Since the 1940s, there has been a resurgence of immigration from Latino countries to the United States. Indeed, by 2012, Latinos represented 17 percent of the total U.S. population. Although Mexicans are still the largest Latino immigrant group in the United States, there is currently much more diversity, including immigrant Latino groups from Central and South America. This entry focuses on the immigration patterns and experiences of U.S. Latinos from Central and South America. It briefly reviews the history of immigrants from Central and South America, highlights some aspects of families and youth from these countries that serve as protective and risk factors, and presents recommendations to address gaps in understanding.

Immigration Trends of Central and South Americans
Table 1 shows the largest immigrant populations in the United States from Central and South America. It is interesting to note that, in some cases, the numbers reported by the International Organization for Migration (IOM) differ sharply from U.S. Census estimates. For example, the difference between these two sources for Guatemalans is more than 395,000 people. Even more striking is the case of Brazilians, for whom the IOM reported 1,240,000 immigrants in 2008, while the U.S. Census estimated 321,544 people by 2012. This difference might be partly attributed to the thousands of undocumented immigrants in the United States, who are not reflected in official records. Nonetheless, because of these discrepancies, interpretations should be made with caution. As can be seen in Table 1, the two largest immigrant populations are both from Central America: Salvadorans and Guatemalans. Low education levels characterize both Salvadoran and Guatemalan immigrant groups.
For example, the percentages of people without a high school diploma in these populations are 29.15 and 38 percent, respectively. At the same time, only 4 and 1 percent, respectively, of the immigrants from these countries have a bachelor’s degree. The relatively low socioeconomic status characteristic of these immigrant groups may partly account for why they are the largest groups from Central and South

Central and South American Immigrant Families



173

Table 1 Immigrants from Central and South America in the United States Country

Estimated population

Year

Main destination is the United States

Percent Distribution of immigrants

Education level

Salvador Source: Pew Hispanic Center, 2013

2,000,000 1,992,754a

2011

Yes

California (35%); Texas (14%); Virginia (7%)

Incomplete high school (29.15%); High school diploma (13.25%); some college (8.85%); Bachelor’s degree or more (4%)

Guatemala Source: International Organization for Migration (IOM), 2013

1,637,119 1,241,560a

2010

Yes

California (29.07%); New York (10.38%); Texas (8.30%); Florida (6.35%)

Elementary School (38%); High school diploma (11%); Bachelor’s degree or more (1%)

Brazil Source: IOM, 2010

1,240,000 321,544a

2008

Yes

Florida (22%); Massachusetts (17%); California (11%); New York (10%)

11 to 16 years of study

Colombia Source: IOM, 2013

972,000 1,039,923a

2010

Yes

Florida (33.95%); New York (15.43%); New Jersey (10.18%); Texas (5.24%)

High school diploma (86.29%); some college (16.97%); Bachelor’s degree or more (20.26%)

Ecuador Source: Pew Hispanic Center, 2013

645,000 662,633a

2011

No

New York (40.31%); New Jersey (19.22%); Florida (10.70%); California (7.28%)

Incomplete high school (17.82%); high school diploma (17.82%); some college (15.50%); bachelor’s degree or more (12.24%)

Peru Source: IOM, 2012

531,358 594,418a

2010

Yes

Florida (19.2%); California (16.8%); New Jersey (14.7%); New York (13.2%)

Elementary school (31.4%); High school diploma (28.0%); Bachelor’s degree or more (28.7%)

Nicaragua Source: IOM, 2013

348,202 405,601a

2010

No

No data

No data

Argentina Source: Pew Hispanic Center, 2013

242,000 248,823a

2011

No

Florida (24%); California (23%); New York (11.15%)

High school diploma (15.70%); some college (16.53%); Bachelor’s degree or more (27.27%)

Chile Source: IOM, 2011

113,394 140,045a

2010

No

Florida (19.7%); California (19.4%)

Elementary school (7%); High school diploma (36%); Bachelor’s degree or more (57%)

Bolivia Source: IOM, 2011

99,210 103,296a

2010

No

No data

No data

Paraguay Source: IOM, 2011

20,023 20,461a

2010

No

No data

No data

Uruguay Source: IOM, 2011

13,278 60,178a

2000

No

New Jersey (24.6%); Florida (24%); New York (11.7%); California (6.2%)

High school diploma (45%); Incomplete High school (29.7%); Bachelor’s degree or more (8.2%)

Sources : a U.S. Census, 2012 estimated b International Organization for Migration (IOM)


Figure 1 A model of Latino family and youth adjustment. Source: Adapted from G. Carlo and M. de Guzman, “Theories and Research on Prosocial Competencies Among U.S. Latinos,” in Handbook of U.S. Latino Psychology: Developmental and Community-Based Perspectives, F. A. Villarruel, et al., eds. Thousand Oaks, CA: Sage, 2009.

America to settle in the United States. Despite the recent economic crisis, economic opportunities in the United States remain attractive relative to the conditions in these countries. However, the state of the U.S. economy has impacted migratory movements. After the Great Recession of 2008, the U.S. imposed more stringent migratory restrictions. As a result, immigration by some groups was reduced. Indeed, Karla Borja showed that remittances from the United States to Central American countries declined 10 percent in 2009. In contrast, from 2000 to 2008, remittances to Central America experienced an average annual growth of 21 percent. However, immigration patterns do not simply result from economic circumstances. Immigrants from Central and South America have also been motivated to relocate to the United States for several other reasons. Civil wars and military conflict caused political and economic instability in some countries in Central America, compelling

thousands of people to emigrate. For example, it has been reported that Salvadorans left their country in the early 1970s because of civil war, labor market conditions, and poverty. Analyzing data from the Guatemala–Mexico frontier migration survey, Guillermo Paredes noted that in Guatemala after the 1960s and 1970s, economic globalization, large landholdings, and political violence forced many Guatemalans to immigrate to the United States. The combined elements of economic hardship, civil unrest and violence, and relative proximity to the United States led to the mass exodus of families from these countries. Many of those who fled sought and were granted political asylum in the United States. In stark contrast, the immigration of Latinos from Brazil and Colombia (the third- and fourth-largest immigrant populations to the United States, respectively) shows a much different pattern. For example, the average education level of immigrants from Colombia is relatively high, and



Central and South American Immigrant Families

full-fledged civil war has not occurred in Colombia or Brazil in recent years. Moreover, 50 percent of the countries in Table 1 do not report the United States as the main destination of their emigrants. These include Nicaragua, whose emigrants' main destination is Costa Rica, and Ecuador, whose main emigration destination is Spain. These nuances highlight the complexity of migration, in that the primary destination country can be the result of a number of considerations, including economic factors, civil wars, geographical proximity, language barriers, and sociopolitical conditions.

Eight of the 12 countries for which information is available report that one of the most frequent destinations within the United States is California. In fact, California is the main destination for Salvadorans and Guatemalans (the two largest immigrant populations from Central and South America). This tendency coincides with the fact that in California, 35 percent of the state's population identify themselves as Latino. Indeed, by 2014, it is estimated that the Latino population will be the largest ethnic group in California (39 percent, compared with 38.8 percent white non-Latinos). Immigrants from Colombia, Peru, and Argentina mostly settle in Florida, whereas Ecuadorians and Uruguayans primarily immigrate to the East Coast (mostly New York and New Jersey). Interestingly, neither the Northeast, the Southwest, nor the Midwest is a common destination for Central and South Americans. For example, New Mexico, which is 47 percent Hispanic (and 39.8 percent white non-Hispanic), is not a common destination for Central and South Americans. Rather, these three regions are primarily populated by immigrants of Mexican, Puerto Rican, and Dominican descent.

Although new immigrants generally tend to move to specific cluster communities within the United States, there is a high rate of mobility once people have migrated.
Mobility is often driven by economic and educational opportunities. For example, in 1990, the 100 largest counties by Hispanic population contained 83 percent of all Hispanics. By 2000, the same counties contained 79 percent of all Hispanics, and this percentage dropped to 71 percent by 2013, according to the Pew Research Center. Thus, migratory trends continue across the United States after initial immigration.


With regard to education, the two largest immigrant groups tend to have low average levels of education. This could pose challenges to their labor market integration and put them at higher risk of social and economic difficulties. In contrast, among other South American groups (e.g., Colombians, Argentinians, and Chileans), the average education level of immigrants is relatively high. Among college-educated immigrant groups, Venezuelans are the most educated, whereas Guatemalans and Salvadorans have relatively low levels of college education.

Understanding U.S. Latino Family and Youth Adjustment
Immigrant families must deal with several challenges that could threaten their adjustment to a new society. However, many Latino families and youth have access to personal and social resources and assets that can enhance well-being and health. Furthermore, challenges and crises can help immigrant families and youth develop resilience and talents to deal effectively with such situations. Figure 1 presents a broad conceptual model of U.S. Latino family and youth adjustment as proposed by Gustavo Carlo, Marcela Raffaelli, Maria de Guzman, and their colleagues. The model incorporates evidence-based aspects that influence health and well-being outcomes.

As can be seen in Figure 1, the receiving community's characteristics shape the first stages of the adjustment process. Some of these characteristics will be incorporated into immigrants' identities; therefore, their conceptions of in-group and out-group will likely change. As discussed by Seth Schwartz and collaborators, if immigrants feel comfortable and welcome, their definition of the in-group may expand to include people from the receiving community. This new conception can change the way that they interact with their new neighbors, as well as the way that they adapt and contribute to the new culture.
The background characteristics of the family also play an important role in this process. For Latino immigrants, family composition is a fundamental resource to facilitate their adjustment. Within families, cooperative relationships are established in order to help other family members (e.g., to meet their communication needs). English proficiency among Latinos has been investigated as one of the



components of the adjustment process. Latino parents who do not speak English need their children to serve as "language brokers." In a study by Charles Martinez and colleagues, these parents were characterized by less positive involvement with their adolescents and less monitoring. Furthermore, fathers (but not mothers) reported higher levels of depression compared to Latino parents with higher levels of English proficiency. Other factors that play important roles in the adjustment process are the life events and changes that Latino immigrant families experience. For example, Gustavo Carlo and Maria de Guzman reported that many Latino children and adolescents assert that language barriers, family stress related to poverty, and cultural barriers (discrimination and negative stereotyping) are common difficulties.

Because children spend many hours at school, the school context is also a key element during the adjustment process. Variables that have been shown to have a positive impact on academic achievement and satisfaction among Latino youth include friend, parent, and teacher support; parental monitoring of education; and whether the child receives a free lunch. It is especially important for Latino parents to establish positive connections with their children's teachers. Moreover, cultural values such as bien educado (well-educated, cultured) and respeto (respect and consideration for authority and others) can contribute positively to children's academic success.

A key aspect of the model presented in Figure 1 is that the impact of the antecedent elements on Latino family and youth adjustment will be processed by family members and their youth. Thus, how family members and their youth interpret observed and experienced events influences their adjustment. Among the many personal qualities that can affect Latino family adjustment, ethnic identity and cultural values are two key culture-specific constructs.
For example, an individual’s attachment to their Latino ethnicity (ethnic identity) is linked to higher levels of some forms of prosocial behaviors (i.e., actions that benefit others). Familism can also serve as a protective factor for drug use among Latino teens. However, low identification with one’s heritage and low familism have been found to be risk factors for problem behaviors among Latino teens. Finally, youth who strongly endorse the values of

familism are more likely to experience support and family cohesion, which can protect against mental health problems (e.g., depression).

Family members' and youth's perceptions of acculturative stress are another mechanism that predicts the adjustment and well-being of immigrants. Acculturation is defined by Margaret Gibson as "the process of culture change and adaptation that occurs when individuals with different cultures come into contact." According to this definition, different patterns of acculturation are expected when different cultural groups come into contact. Acculturative stress, then, is the resulting physiological, cognitive, and emotional response to the demands and taxing elements of that change. The acculturative process is complex because it involves changes for both the receiving and the immigrant communities. These changes can be interpreted positively (as a challenge) or negatively (as a threat) and result in stress. Depending upon the individual's interpretation and the conditions, the resulting stress can be debilitating or enhancing. For example, chronic exposure to discrimination and prejudice could be interpreted as threatening and subsequently result in maladjustment. Alternatively, relatively mild forms of cultural stress (e.g., difficulties with English-language fluency) could be interpreted as a challenge and result in resilience. It has been reported that, although individuals experiencing high levels of acculturative stress showed fewer selflessly motivated altruistic behaviors, they more frequently expressed other forms of prosocial behavior, such as helping in crises and in emotionally evocative situations.

It should be noted that acculturative stress can also arise as an individual reconciles his or her ethnic identity. For example, John Berry suggests that when an individual rejects his or her own culture of origin and adopts the new culture's values, mores, and habits, assimilation has occurred.
However, this can result in increased conflict and difficulties with family members and members of the culture-of-origin group. Negative outcomes associated with assimilation among Latino immigrants include relatively high rates of illicit drug use, drinking, and smoking; unhealthy dietary patterns; and relatively unhealthy birth and perinatal outcomes. Similarly, stress and difficulties can arise when an individual rejects both the culture of origin and the new receiving culture (marginalization). Sometimes,




marginalized individuals are attracted to deviant groups (such as gangs). In contrast, research suggests that health and adjustment are manifested when individuals retain the values and habits of their culture of origin (separation), or when individuals internalize both the origin and receiving cultures (integration). In the latter situation, individuals learn to navigate both cultures and can therefore benefit from the assets and resources afforded by both.

Conclusions and Recommendations
There is growing recognition of the need to research the rapidly growing Central and South American segment of the U.S. population. However, most existing research on Latinos in the United States still focuses on Mexicans, Cubans, and Puerto Ricans. The extent to which the findings of studies of these Latino subgroups can be applied to Latino populations from Central and South America remains to be seen. Although Latino populations share important cultural characteristics, there are unique aspects of each nationality (as well as within nationalities) that are imperative to consider. Heterogeneity across historical events, political issues, religion, economic status, immigration patterns, physical characteristics, languages (including indigenous dialects), and cultural rituals and traditions (e.g., foods and music) presents challenges to understanding Latino families and youth and their experiences and adjustment to the United States.

Following these observations, one can pose specific recommendations. First, there is a need to use more specific terms to label Latino groups. For example, the adjective "U.S." should be placed in front of specific Latino nationalities to distinguish immigrant groups in the United States.
The traditional term "American" that is typically inserted after identifying the nationality is inappropriate because Latinos who originate from Central and South America (as well as from North America) are all "Americans"; thus, rather than the terms Colombian American or Mexican American, the terms U.S. Colombians and U.S. Mexicans are more accurate. Second, there is a great need to conduct a large-scale, national longitudinal study of Latino immigrant families and youth across the United States. This study should be funded by major research institutes (such as the National Institutes of Health, National Science Foundation, or the Institute of Education Sciences) and sample a wide range of Latino nationalities, beyond U.S. Mexicans, Puerto


Ricans, or U.S. Cubans. Because of the expected continued increase in this population, the importance of such an undertaking cannot be overstated.

Another recommendation concerns the main focus of the literature about the Latino population. It is necessary to shift the primary attention from negative and pathological aspects of Latino immigrants to a more positive, resource-oriented paradigm. For example, much of the literature about immigration to the United States highlights the possibility that immigrants take jobs away from nationals, but the contrary seems to be true. Graeme Hugo and colleagues have shown that immigrants serve a complementary function, filling gaps in areas in which nationals are not eager to work. Another negative aspect commonly discussed is the economic cost that immigrants represent for the receiving country. However, migrants can help fast-growing economies meet their labor market needs. Additionally, immigrants pay taxes for the services that they use and contribute to the destination country's economy. In fact, their per capita net contribution is often greater than that of nonimmigrants because their education and training were paid for by their country of origin.

Finally, several scholars have noted that traditional social science research on Latino groups tends to focus on pathology and deficit models, thereby reinforcing negative stereotypes and leading to an unbalanced understanding of Latino families and youth. Theories and research are needed to examine well-being and health to gain an understanding of the predictors of such outcomes and the protective and resilient aspects of Latino families and youth.

Gustavo Carlo
Luis Diego Conejo
University of Missouri

See Also: Acculturation; Ethnic Enclaves; Immigrant Families; Immigration Policy; Latino Families.

Further Readings
Armenta, B. E., G. P. Knight, G. Carlo, and R. P. Jacobson.
“The Relation Between Ethnic Group Attachment and Prosocial Tendencies: The Mediating Role of Cultural Values.” European Journal of Social Psychology, v.41/1 (2011). Berry, J. W. “Immigration, Acculturation, and Adaptation.” Applied Psychology, v.46/1 (1997).


Child Abuse

Brook, J. S., D. W. Brook, M. De La Rosa, M. Whiteman, and I. D. Montoya. "The Role of Parents in Protecting Colombian Adolescents From Delinquency and Marijuana Use." Archives of Pediatrics and Adolescent Medicine, v.153/5 (1999).
Brown, A. and M. H. Lopez. "Mapping the Nation's Latino Population, by State, County and City." Pew Research Center Hispanic Trends Project (2013). http://www.pewhispanic.org/packages/latinos-by-geography (Accessed April 2014).
Carlo, G. and M. de Guzman. "Theories and Research on Prosocial Competencies Among U.S. Latinos/as." In Handbook of U.S. Latino Psychology: Developmental and Community-Based Perspectives, F. A. Villarruel, et al., eds. Thousand Oaks, CA: Sage, 2009.
Gibson, M. A. "Immigrant Adaptation and Patterns of Acculturation." Human Development, v.44/1 (2001).
Hugo, G. J., C. Aghazarm, and G. Appave. Communicating Effectively About Migration. Geneva: International Organization for Migration, 2011.
Lara, M., C. Gamboa, M. I. Kahramanian, L. S. Morales, and D. E. Hayes Bautista. "Acculturation and Latino Health in the United States: A Review of the Literature and Its Sociopolitical Context." Annual Review of Public Health, v.26 (2005).
Lopez, M. H., A. Gonzalez-Barrera, and D. Cuddington. "Diverse Origins: The Nation's 14 Largest Hispanic-Origin Groups." Pew Research Center Hispanic Trends Project (2013). http://www.pewhispanic.org/2013/06/19/diverse-origins-the-nations-14-largest-hispanic-origin-groups (Accessed April 2014).
Raffaelli, M., G. Carlo, M. A. Carranza, and G. E. González-Kruger. "Understanding Latino Children and Adolescents in the Mainstream: Placing Culture at the Center of Developmental Models." New Directions for Child and Adolescent Development, v.109 (2005).
Schwartz, S. J., M. J. Montgomery, and E. Briones. "The Role of Identity in Acculturation Among Immigrant People: Theoretical Propositions, Empirical Questions, and Applied Recommendations." Human Development, v.49/1 (2006).

Child Abuse
Child abuse occurs when a parent or other caregiver engages in acts or omissions that result in harm, potential harm, or the threat of harm to a

child. Although child abuse has occurred throughout history, only in recent decades has it drawn the attention of policymakers, the media, and the general public. All jurisdictions have laws and regulations defining child abuse and the processes by which a child may be removed from his or her family to assure a safer environment. Child abuse can take place within a child’s family, but it may also occur within his or her school or other organizations. Child abuse may consist of neglect, physical abuse, emotional or psychological abuse, or sexual abuse. Historically, child abuse was seldom discussed, and rarely were steps taken to protect a child if it was discovered that he or she was being abused. There were several reasons for this apparent lack of concern. In many jurisdictions, children had few rights with regard to protection from violence inflicted against them by their parents. U.S. courts traditionally looked the other way when parents abused their children. Although parents still have a wide degree of latitude regarding disciplining their children, state legislatures and other agencies have created protections for children who live in abusive homes. Corporal punishment, by which a parent or other caregiver uses physical force to inflict pain upon a child, is still permissible in all 50 states and the District of Columbia. However, corporal punishment in the home runs counter to the international trend, in which more than 30 nations have outlawed parents’ ability to spank or otherwise use force with their children. Corporal punishment that occurs in a school setting, however, has met with much less support in the United States. Permissible in all jurisdictions except New Jersey as recently as 1970, corporal punishment in the schools remained legal in only 18 states in 2013. 
Changing perceptions of how children should be treated, as well as growing awareness of the permanent harm that child abuse can cause, have increased the protections afforded children in all settings. Of the roughly 74 million children in the United States who are 17 or younger, approximately 700,000 are estimated to have experienced some form of child abuse. The four main types of child abuse involve neglect or physical, sexual, or emotional mistreatment. Nationally, neglect accounts for 78 percent of all reported



child abuse cases, while physical abuse accounts for 18 percent, sexual abuse 9 percent, and emotional or psychological abuse 8 percent. These figures represent only cases of child abuse in which an alleged occurrence was reported to authorities, investigated, and found to have occurred. It is believed that numerous cases of child abuse are never reported. Child abuse cases involving neglect are often the result of family poverty, which has its roots in other societal problems.

Neglect
Child neglect occurs when a parent or other adult responsible for a child fails to provide the necessary clothing, food, medical care, shelter, or supervision to the extent that the child's health, safety, and well-being are at risk. Neglect is also considered to involve the lack of attention, love, and nurturing that a child needs to thrive. Neglect may cause children to experience physical and psychosocial developmental delays, and may also damage neuropsychological functions, including the child's attention, executive function, language, memory, processing speed, and social skills. Because these impairments may be long-lasting or permanent, it is imperative that neglect be reported and corrected as soon as possible.

Neglect is often reported by teachers or other adult authority figures who observe certain signs in a child. Signs that might indicate neglect include frequent school absences, asking for or taking money or food from others, consistently being unclean or unkempt, lacking adequate clothing or footwear for the weather, or unmet dental or medical needs. Suspected neglect should be reported to the state or local agency responsible for investigating such claims, and if proven, will result in actions being taken to protect the child.
These actions may include therapy for the child and caregiver, supervision of the situation by social workers, or removal of the child from the home, either temporarily or permanently. Children who have experienced neglect will often have emotional or behavioral reactions to foster caregivers or others that reflect their belief that these individuals are not a source of safety. Instead, children who have experienced neglect often demonstrate an increase in emotional or hyperactive



behaviors that impair or disrupt the development of secure attachments with foster or adoptive parents. The effects of neglect are often manifested in disorganized attachments and a need for the child to control his or her environment as much as possible. Children who have experienced neglect frequently appear to be glib and self-sufficient, and are sometimes described by others as manipulative and deceitful. These behaviors are often results of the neglect, which forced the child to become self-reliant at an early age. The early lack of attachment also frequently causes children who have experienced neglect to have difficulty maintaining friendships and romantic relationships later in life.

Physical Abuse
Physical abuse occurs when an adult inflicts physical aggression on a child. Most laws and regulations involving physical abuse are triggered when a caregiver takes actions that place the child at risk of injury or death, or when the adult deliberately inflicts serious injuries upon the child. Physical abuse often results in a child being bruised, burned, or scratched, or suffering broken bones or lacerations. While most children suffer occasional injuries because of accidents or other mishaps, children who experience physical abuse frequently suffer repeated injuries or other problems. Suspicions of physical abuse arise when a child suffers repeated injuries or has a variety of bone fractures at different stages of healing. Teachers, nurses, doctors, and others sometimes observe marks on a child's buttocks or torso, which may indicate physical abuse. Similarly, burns on limbs or bruises in the shape of various household implements are also markers of a child who has experienced physical abuse. If allegations of physical abuse are found to be true, children are often removed from the home either temporarily or permanently.
Children who have experienced physical abuse sometimes become extremely upset when they witness another child crying. Using excessive violence, exhibiting fear around parents or caregivers, or expressing a desire not to go home are also manifestations of a child who is physically abused.

Sexual Abuse
Sexual abuse occurs when an adult or adolescent uses a child for sexual stimulation. Sexual abuse involves the participation of the child in



a sexual act that results in physical gratification or financial profit. Acts of sexual abuse vary, and include indecent exposure of one's body, sharing pornography with a child, soliciting sexual behavior from a child, actual intercourse or other physical contact with the child, or using the child to produce pornography. Studies have found that 15 percent to 25 percent of women and 5 percent to 15 percent of men report having been sexually abused as children. Over 90 percent of those who sexually abuse children already know the child: 60 percent are acquaintances such as babysitters, family friends, or neighbors, and 30 percent are family members, such as fathers, mothers, brothers, uncles, or cousins. Only 10 percent of sexual abuse occurs at the hands of a stranger. Over one-third of those who sexually abuse children are themselves minors.

Children who have been sexually abused often express guilt and self-blame, believing that if they had acted differently, the perpetrator would not have violated them. Even after they have been removed from a sexually abusive situation, children may manifest addictions, chronic pain, fear of things associated with the abuse, flashbacks, insomnia, self-esteem issues, self-injury, and suicidal ideation. Children who have experienced sexual abuse may also undergo psychological counseling, both to ameliorate current mental disorders and to prevent future problems.

Emotional Abuse
Emotional or psychological abuse occurs when a child is criticized, ignored, verbally abused, humiliated, or degraded, and can produce psychological or social deficits in the child's development. Children who have experienced emotional abuse respond in a variety of ways, including distancing themselves from the abuser, fighting back by attacking or otherwise insulting the abuser, or internalizing the abusive words, which can seriously impede the victim's self-esteem and self-image.
Those who emotionally abuse children often suffer from personality disorders. They exhibit poor self-control, experience sudden and drastic mood swings, have a high degree of suspicion and jealousy, and are often overly aggressive. Children who have experienced emotional abuse often experience abnormal or disrupted attachment. This makes

it difficult for those children to later form attachments to foster or adoptive parents, make friends, or experience fulfilling romantic relationships. The self-esteem and self-image of children who have experienced emotional abuse are also damaged, and may require therapy to repair. A tendency to blame oneself for the emotional abuse can result in learned helplessness and overly passive behavior.

Treatment and Prevention
Children who have experienced abuse may undergo a variety of treatment options. Typical symptoms of those who have been abused include anxiety, depression, and post-traumatic stress disorder (PTSD). While pharmaceuticals may alleviate some symptoms, most children who have experienced abuse also need therapy to help them recover and adjust. Cognitive behavioral therapy focuses on dealing with the thoughts and feelings associated with the abuse. Flashbacks, nightmares, or other experiences brought on by the abuse can often be dealt with and alleviated by cognitive behavioral

A naval officer helps a 4-year-old child tie a blue ribbon, signifying child abuse prevention, to the antenna of his vehicle at Pearl Harbor, Hawai'i, March 30, 2009. The ribbon served as a symbol for a Child Abuse Prevention Month project.



therapy. The goal of therapy is to permit individuals to become less fearful around specific stimuli that cause a negative response. Over time, those who have been abused will, it is hoped, gain a certain degree of control over their feelings. In cases where a child is not permanently removed from the home, child-parent therapy is often helpful in improving the child-parent relationship after experiences of abuse. Such therapy targets common symptoms of child abuse, such as anxiety, depression, and PTSD. Other treatment options that have proven successful include art therapy, group therapy, and play therapy. Each of these options can be effective with children who have experienced abuse. Art therapy and play therapy help children acclimate to and benefit from the process by engaging in activities that they enjoy. Group therapy is effective in controlling costs and in helping children who have experienced abuse understand that they are not alone.

Those responsible for children's well-being have also had a high degree of success in preventing child abuse through certain actions. Programs in schools can help children learn which behaviors by adults are inappropriate and provide them with tools to report abuse. School programs designed to prevent child abuse often contain role playing and instruction regarding how best to avoid potentially harmful situations. Support group programs for parents can also provide assistance to caregivers who lack the parenting skills to properly discipline or control their children. Visits to the home by social workers, nurses, or other school personnel can augment the support groups and provide parents and caregivers with suggestions about how best to work with their children. Some believe that children born from unwanted pregnancies are at increased risk for child abuse, and that children born into larger families are more likely to be abused than those in smaller ones.
To that end, some recommend increased access to contraceptive services as a means of reducing potential child abuse. Public campaigns have also increased awareness of child abuse, which contributed to a decline in documented cases of child abuse of over 60 percent between 1990 and 2010, according to research by David Finkelhor. April has been designated Child Abuse Prevention Month, during which special activities and programs are undertaken to educate the public about child abuse. Although

Child Advocate


hotlines to report potential abuse are popular, some believe that they encourage false reports. These false reports place a strain on child protection services because each report must be investigated.

Stephen T. Schroth
Jason A. Helfer
Knox College

See Also: Adolescent Pregnancy; Bettelheim, Bruno; Bullying; Child Advocate; Child Safety; Child-Rearing Practices; Family Counseling; Incest; Internet Pornography, Child; National Center on Child Abuse and Neglect.

Further Readings
Finkelhor, David, Lisa Jones, and Anne Shuttuch. "Updated Trends in Child Maltreatment, 2010." University of New Hampshire, Crimes Against Children Research Center. http://www.unh.edu/ccrc/pdf/CV203_Updated%20trends%202010%20FINAL_12-19-11.pdf (Accessed December 2013).
Fontes, L. A. Child Abuse and Culture: Working With Diverse Families. New York: Guilford Press, 2005.
Gil, E. Helping Abused and Traumatized Children: Integrating Directive and Nondirective Approaches. New York: Guilford Press, 2006.
Howe, D. Child Abuse and Neglect: Attachment, Development and Intervention. New York: Palgrave Macmillan, 2005.
Wolfe, D. A. Child Abuse: Implications for Child Development and Psychopathology, 2nd ed. Thousand Oaks, CA: Sage, 1999.

Child Advocate
Child advocates are child and family professionals and individuals who work for organizations that promote child growth and development in formal and informal environments. "Child advocate" became a catchphrase in the 1970s to describe those who promote justice and safety for children, work with individual children or stakeholder groups, or collaborate with policymakers and key players within the legal system. Advocacy may imply a broad range of roles and contexts that often fall within particular domains of child



well-being and safety. Examples of such domains include legal protection; physical and mental health; prevention of abuse, maltreatment, or neglect; education; and labor.

Historical Context
Childhood as a specific period in life, and the study of this period, are recent developments. In Western culture, childhood as a phenomenon has existed for less than 500 years. From the 5th through the 15th centuries, children's roles and responsibilities changed little. In some cultures, children were considered infants until they reached the age of 7 or 8 years, at which point they were initiated into adulthood. In other cultures, children were treated as miniature adults as soon as they could walk and talk. The only right that children held in some cultures was that of primogeniture, the practice by which the first-born male child inherited the family land and assets. This practice was often polarizing and caused distress within families: while the first-born male was granted the inheritance or estate, younger siblings and first-born females were treated as though they had no status or voice in important matters. Overall, however, it was common practice for all children to be thrust into adult responsibilities at young ages so they could contribute to family life and learn to survive. Multiple explanations rationalize the need to prepare children for adulthood at such a young age, the most relevant being the high mortality rate of children throughout the Middle Ages, their need for survival skills, and families' constant economic struggle to provide for their basic needs. There were no advocates for children during this time.

The 16th century ushered in an acknowledgment of the unique nature of children, which evolved through the 17th century. Instead of being considered incomplete or deficient adults, children were given special clothes, toys and games, and even education.
Although there was recognition of childhood as a special period in life, there was also a focus on strictly enforcing religious principles, providing moral guidance, and instilling in children a fear of God and adults. During this time, children were seen as the property of their parent or caregiver and had little to no self-determination. Physical force or even violence was often a reaction to children who attempted to be autonomous or self-governing. At this point, educators took on a rudimentary role as child advocates.

Contemporary Context

Social concern for child well-being in the United States grew through the 19th and 20th centuries as urban industrialization created negative conditions for vulnerable children. Many children were homeless and lived on the streets or resided in orphanages with little protection from adults. In addition, many of these children found low-wage employment in factories in positions that put them at risk of injury or illness. Despite these negative circumstances, it was also during this time that multiple advancements in medicine and public health were made. These developments extended the life expectancy of children and decreased infant mortality. All of these phenomena contributed to a growing progressive movement to protect children, which included the first federal declaration of support by President Theodore Roosevelt in 1909. Shortly afterward, President Taft created the Children’s Bureau in 1912 to investigate the well-being of children and their mothers. Notwithstanding the federal support and the influx of grassroots social groups, little progress was made in meeting the needs of children through formal structures or broad-based service networks. Advocates during this time were primarily educators and vocal activists who made the needs of children known to policymakers and government officials.

During the mid-20th century, the U.S. government took on more responsibility for meeting the needs of children, roles that had originally been ascribed to parents. Policies in the areas of education, physical and mental health, law, and economics were enacted. Additionally, new professional associations, child advocacy centers, and lobbying groups continued to form during the civil rights movements of the 1960s. These groups were able to offer not only a voice for children but also direct services to children and their families.
Despite the measured history of child advocacy, it is evident that society has taken an increasing interest in the rights of children. Past views of children have often been unrealistic or contradictory in determining what exactly children require to develop into healthy adults. Additionally, many of the laws that directly affect children have reflected adult conflicts that are projected onto children, such as issues regarding religion, race, and socioeconomic status. Whatever the case, success in meeting the needs of children has been greatly influenced by the increasing presence of child advocates.

Morgan E. Cooley
Florida State University

See Also: “Best Interests of the Child” Doctrine; Child Labor; Child Safety; Children’s Aid Society; Children’s Bureau; Children’s Defense Fund; Children’s Rights Movement; Foster Care.

Further Readings

Archard, David. Children: Rights and Childhood. London: Routledge, 2004.

McDermott, John, William Bolman, Alfred Arensdorf, and Richard Markoff. “The Concept of Child Advocacy.” American Journal of Psychiatry, v.130 (1973).

Thompkins, James, Timothy Thompkins, and Benjamin Brooks. Child Advocacy: History, Theory, and Practice, 2nd ed. Durham, NC: Carolina Academic Press, 1998.

Child Care

The use of child care has been a common practice among families since colonial America. Child care encompasses a wide variety of arrangements by which children are cared for by someone outside of the immediate family. Child care arrangements vary according to a host of factors, including location, relationship to parents, structure type, and size. For instance, child care can consist of kinship care in the children’s home with only two or three siblings, or nonfamilial care by a dedicated child care provider in a building designed to accommodate numerous classrooms, teachers, and children. The choice of a specific type of child care often depends on family need, practical considerations, and quality of care. The need for and availability of government support also affect child care decisions. Currently, many states provide financial vouchers to low-income families to send their children to center-based care regulated by state policies. Child care in the United States has not always looked like this. At various times in history, methods of child care emerged that reflected the social and cultural contexts of the time.

Historical Overview

In colonial America, the overwhelming majority of families lived on farms, and children were cared for by their mothers as part of their household duties. In the cities, however, children as young as 5 years old may have spent their days apprenticed to a mentor learning a trade. While this type of care was tied to boys learning a trade in order to contribute to the financial well-being of the family, it also acted as a means of child care. Often, apprentices worked with mentors for a decade or more, learning the mentor’s trade and proper moral development. In many cases, mentors served as surrogate fathers to the boys as they mastered the trade and guided them toward becoming positive, contributing members of society.

Other early forms of child care in America developed on plantations and resulted in several models of care provided by people other than the parents. Much of plantation child care was designed so that slave mothers would have fewer child-rearing responsibilities. The “child nurse” model involved using slightly older children to care for the younger children. Unlike babysitting as it is practiced today, the care that the older children provided was substantial and included nearly all of the typical care responsibilities of a parent, such as feeding, changing, disciplining, and soothing. Another model was based on group care and somewhat resembles modern center-based care. Group care in these settings typically involved grouping children together after they were about 3 months old and making one or more adults responsible for their safety and well-being. In some cases, this resembled more a communal style of living than group care. A third model was to use a paid or unpaid nanny or nurse, who was often a slave, for child care.
Often, these women were responsible for nursing infants and tending to the needs of all the children, including the children of plantation owners.

As immigration began to increase more rapidly through the end of the 19th century, government programs were designed to support new immigrant workers. These programs also served to keep children from wandering the streets during the day. Parents were provided with free or inexpensive child care in the form of center-based group care. Indeed, the ubiquitous child care center of today can be traced back to 19th-century policies related to welfare reform that made it possible for parents to work. It was believed that this type of institutionalized care, often at settlement houses, would satisfy the social desire to help new immigrants and other eligible parents participate in the labor force without leaving their young children at home to fend for themselves.

Trends in child care then shifted toward solutions that espoused strict, regimented care in the early 20th century. During this time, behavioral psychologists argued that children would benefit from structured, affection-free care, and parents conformed by sending their children to boarding and military schools, convents, and other extended-term facilities. These child care models were designed to remove children from the parents’ homes for extended periods of time, which could last from several months to several years, and place them in the care of providers who would raise them in a highly controlled environment that was low on affection.

Around the time of World War II, the emphasis in child care returned to the idea of government-sponsored care reminiscent of the settlement house model. With many mothers needing to enter the labor force at that time, child care became a necessity for a significant number of families, and the federal government responded by supporting center-based group care. Federally sponsored center-based child care was further expanded in the 1960s with the creation of Head Start. Head Start was initially designed to provide children from low-income homes with the skills they would need prior to entering elementary school. It quickly became an important and integral part of child care in the United States for low-income families with preschool-age children. In the 1990s, Head Start services were expanded to provide care for infants and toddlers in programs that became known as Early Head Start.
Characteristics of Child Care

The characteristics that define types of child care include location, relationship to caregiver, structure type, and size. Location refers to whether care takes place in the home or out of the home. In-home care offers many advantages for parents, including convenience, time savings, security, and familiarity. Parents who use in-home child care do not need to take their children to another location, which is convenient and saves them substantial time that would otherwise be devoted to transportation. Furthermore, children cared for at home remain in a familiar environment and may derive a sense of security from that fact. While out-of-home care may not have these advantages, it offers parents opportunities to expose their children to new people, places, and learning experiences unavailable at home.

A number of parents use a form of in-home care known as self-care. School-age children with no other means of after-school care return to empty homes and care for themselves until their parents return home from work. While there is technically no care provider in the home and therefore no true child care taking place, this is a very common and inexpensive in-home care method used by parents to address their needs when formal out-of-home after-school care is unavailable or too costly. According to the 2010 U.S. census, almost 5 million children between the ages of 5 and 14 in the United States provide their own in-home self-care.

Relationship to caregiver is another consideration parents make when choosing child care. Generally speaking, caregivers can be categorized according to whether or not they have a familial relationship to the parents. Familial child care can range from older siblings providing care to adult family members, such as grandparents or aunts and uncles, providing care. Whether care is furnished in the children’s home or in a close family member’s home, familial child care draws on a personal relationship, which often allows parents significant input into the specifics of their children’s care. This can range from serving certain types of food, to allowing naps of a certain length, to providing certain activities. Parents who use familial child care may also feel a stronger sense of trust that their children will be safe because of their bond with the provider. Ultimately, parents’ choice to use familial care is greatly influenced by the proximity of family members and the family members’ willingness to provide care.
The last two descriptors of child care are structure type and size. Structure type has to do with whether the child care facility is located in a commercial building, generally labeled center-based care, or in a private residence, generally labeled home-based care. The primary difference between these two structure types is the organization of the setting. Home-based care has the facilities that one would expect in a home. Many home-based care providers augment their homes to accommodate child care but retain their “homey” feel. These facilities tend to include a smaller number of children and only one or two adults who provide the care. On the other hand, center-based care takes place in settings specifically designed for child care. Center-based care often includes expanded facilities with several bathrooms, play areas, nap areas, craft areas, and outdoor play areas. There may also be a host of resources and materials designed to promote learning that may not be available in home-based care. Some center-based care facilities can accommodate hundreds of children, whereas others can accommodate only 10 to 20 children. Small child care operations tend to be able to provide more nuanced care for each child. Large child care operations are able to provide children with opportunities to socialize with many children and to utilize many different materials and educational resources.

Reasons for Choosing Care

A variety of factors affect parents’ decisions to choose a specific type of child care. The most influential tend to be family need, parental desires, practical considerations, and perceived quality of care. Family need may vary from simply finding someone to watch the children for several hours a few times a week while parents engage in other activities, to needing care every day for nine or more hours while they work. Parents may also have the desire to inculcate specific behaviors or skills in their children but lack the means or education to do so. In these cases, parents may choose care options that provide their children with experiences that they believe the children need to learn the desired skills. These behaviors or skills might range from navigating social situations, to learning science or technology, to gaining a foundation in art, music, or sports. Parents also select child care based on practical considerations, including cost, location, hours of operation, and flexibility of care options.
For example, the ability to afford child care may lead parents to use center-based group care instead of an in-home nanny. Similarly, if one type of care is more geographically convenient (i.e., close to home), parents may select that type of care over other options. Another consideration is parents’ perception of quality. When other factors do not dominate child care selection (e.g., parents are not limited in their selection by financial constraints), most parents select the child care provider that they believe offers their children the highest quality of care possible. Compared to the other reasons for selecting child care, quality of care tends to be the most abstruse and often unattainable even when understood.

Quality of Child Care

Quality of child care pertains to the degree to which child care providers optimize the development and learning experiences of the children they serve. As such, high-quality child care provides children with an array of experiences, resources, and activities that will enrich their development and stimulate learning. In contrast, low-quality child care providers are limited in their ability to promote growth and development and may lack the resources and expertise to stimulate learning or scholastic interest. While it is not complicated to understand what quality of child care means from a learning and development perspective, this perspective provides little guidance for understanding and evaluating the indicators of high and low quality prior to enrollment. As a result, it has become common practice for quality of child care to be viewed as a set of characteristics that have been linked with positive development and learning.

The attributes of a child care provider that are used to indicate high or low quality can vary by the entity that is defining quality of care. However, there are several characteristics on which most regulatory bodies agree. These include child–staff ratios, the number of children grouped together (e.g., in a classroom), the training and education of the caregivers, the amount of physical space, the type and condition of equipment and facilities, and staff stability. Current policies regarding child care quality generally stipulate the minimum “values” for each of the specified characteristics in order for a child care provider to obtain a license to operate and serve children.
While more positive developmental and learning outcomes have been associated with characteristics such as smaller child–staff ratios, smaller numbers of children per group, and staff stability, these characteristics are merely linked to higher-quality care and do not themselves denote high-quality care. Experts who evaluated the number of high- and low-quality centers in the United States at the beginning of the 21st century found that very few child care centers could be considered high quality. One of the differences between how experts and regulatory bodies view high-quality child care is the consideration of intangible and subjective factors, such as staff–child interactions and the warmth and responsiveness of caregivers. The characteristics considered by regulatory bodies are objective and quantifiable but do not necessarily identify high-quality child care providers. Parents who rely solely on licensing as a means of identifying high-quality child care may not be getting the care for their children that they believe they are getting.

Independent accreditation organizations, such as the National Association for the Education of Young Children (NAEYC), can also provide parents with information about child care quality. In-home and center-based care providers can elect to have one of these organizations evaluate their programs, and if they meet the minimum standards set forth by the organization, they can obtain accreditation from it. The standards of these independent organizations are generally more stringent than those of state or local regulatory bodies and include evaluations of a host of factors, including the more intangible and abstract factors identified by experts as important for high-quality care. Unfortunately, obtaining independent accreditation can be costly, and many child care providers elect not to become accredited.

Without clear and easy indications of high-quality child care, parents are faced with making decisions about their child care provider with little guidance or support. Confounding these decisions are the facts that high-quality child care tends to be very expensive, fairly uncommon, geographically inconvenient for the families who would benefit most (e.g., children from low-income families), and difficult for children to enroll in because of high demand. Parents also face decisions about the degree to which expert definitions of high quality match their personal definitions of high quality. Parents often seek caregivers who can provide their children with experiences and resources that they cannot provide.
Quality ratings of child care are not specific to a particular interest or type of care; rather, they evaluate the whole center against the set of standards identified as important by the evaluators. Therefore, parents who seek child care that will enhance their children’s social skills will not benefit from these general ratings of quality and will find themselves needing to make decisions about child care based on limited information and surface characteristics, such as cleanliness or whether the facility is licensed.

Additionally, parents tend to emphasize safety, provider warmth, nurturing behaviors, and caregiver–parent relationships as important characteristics of high-quality child care. Aside from safety, which can be quantified in terms of features and measures taken within each facility, the other characteristics parents are most concerned with are intangible, abstract features that are not typically measured by regulatory bodies. Furthermore, the high-quality characteristics that parents emphasize require extended observation for many hours, often on multiple days. Unfortunately, most parents do not have the time to conduct such extended observations and must rely on visits as short as 10 or 15 minutes to make a decision about a child care provider.

Louis Manfra
Amanda Coggeshall
Christina Squires
University of Missouri

See Also: Child-Rearing Practices; Child-Rearing Experts; Child-Rearing Manuals.

Further Readings

Blau, David M. “The Child Care Labor Market.” Journal of Human Resources, v.27/1 (2012).

Pluess, Michael and Jay Belsky. “Differential Susceptibility to Rearing Experience: The Case of Childcare.” Journal of Child Psychology and Psychiatry, v.50/4 (2009).

Vandell, Deborah Lowe, et al. “Do the Effects of Early Child Care Extend to Age 15 Years? Results From the NICHD Study of Early Child Care and Youth Development.” Child Development, v.81/3 (2010).

Child Custody

The methods of determining child custody in the United States have changed greatly over time. In the 21st century, custody decisions are typically outlined in parenting arrangements that describe routines, visitation schedules, financial obligations, and communication arrangements. When such plans cannot be amicably developed between the parents or through some form of mediation, determination of child custody requires legal action in the courts, in which judgments consider the child’s best interests. When parents agree to an arrangement, the court generally defers to the parents’ wishes, unless the arrangement is seen as grossly unaligned with the child’s best interests. There is no uniform standard for determining custody across states. However, considerations of the best interests of the child commonly include adherence to the child’s safety and developmental needs, as well as the maintenance of consistency in the child’s life. Using the legal system to establish, govern, and maintain children’s welfare reflects a societal investment in and emphasis on children’s development. Historically, this viewpoint and the related court-based intervention were not common.

Property Rights

Early in American history, children were viewed as an essential part of the workforce. They provided manual labor or served as apprentices, and little emphasis was placed on what was best for them. Historically, intervention on behalf of children in the labor force did not occur in situations of neglect, abuse, indenture, slavery, or illegitimacy. Typically, children of slaves were considered the property of the slave owner, and legitimate children were considered the property of their fathers (mothers held no legal rights to their children). In the rare event of divorce, the father retained all rights to the children because he “owned” them. This emphasis on paternal property rights was also evident at parental death: when a father died, he could will his children to a guardian, completely bypassing the mother. It was this viewpoint, accompanied by corresponding legal standards, that granted men absolute rights to their children and served as the primary criterion applied to child custody determinations from the colonial period into the early 1800s. This viewpoint was strongly aligned with the way that marital property was treated in this historical period.
Property was not shared between husbands and wives, because all marital assets belonged to husbands. Because children were viewed as property, they also fell under their father’s control, a principle taken directly from English common law, which was adapted from earlier Roman law.

Tender Years Doctrine

English influence on American policy was again felt with the 1839 passage of the Custody of Infants Act in England, which completely shifted the legal standard of paternal preference. The Tender Years Doctrine presumed that children under the age of 7 needed a maternal figure to properly care for them in ways that fathers could not. Furthermore, the Tender Years Doctrine emphasized the importance of maintaining contact between mothers and children who were older than 7 years (in situations where the marriage did not dissolve due to the mother’s adulterous behavior). This major shift in English common law was echoed in American case law in the 1800s, but greater flexibility was seen in the way judges made custody decisions. Although paternal preference remained in many cases, more emphasis was placed on granting mothers custody, especially of daughters (it was widely believed that same-sex parent–child pairings were in the best interest of the child’s development), of children under the age of 7 (during the tender years), and in circumstances where fathers were at fault for the divorce (only fault-based mechanisms for divorce were available at this time). Toward the end of the 19th century, the use of paternal preference waned and was replaced with the ideology of “tender years.” The societal implications of this shift toward adherence to the Tender Years Doctrine remain today, despite legal changes emphasizing a more gender-neutral approach to custody decisions. Generally, mothers are designated as primary caregivers, and following divorce, they usually provide the primary residence for the child.

Family Law Changes in the Late Twentieth Century

The Tender Years Doctrine would not remain the predominant method of deciding custody. Coinciding with the adoption of no-fault divorce laws in the latter half of the 20th century, laws governing custody determinations also changed. The best interests of the child standard became prominent during this time, with many states moving away from custody determinations based primarily on gender.
New considerations, such as the economic, health, and safety needs of children, would be weighed in making judgments about custody. Further, states began to adopt laws with a preference for, or at least options for, joint custody; the majority of states would adopt custody laws explicitly specifying a preference for or presumption of joint custody. However, because custody laws are governed by state-level statutes, progress would be gradual and inconsistent from state to state.



Child Custody Decisions Today

Individual states have a great deal of latitude in the requirements that they impose and in the criteria judges may consider in determining child custody arrangements. Some states impose additional mandatory or discretionary mechanisms when joint custody is requested or when custody is contested between parents. Although most parenting arrangements are amicably settled without legal intervention, many custody situations require state intervention. Common types of statutory mandates or considerations for judicial discretion include forced mediation and the drafting, approval, and implementation of a parenting plan.

The changing standards for child custody determinations largely reflect advances in research-derived knowledge about child development in divorce situations. Because financial contributions and economic stability affect children’s adjustment in both the short and long term, guidelines governing child-support contributions and enforcement exist to ensure compliance. Many states have instituted offices to manage these policies, and compliance rates have increased. To limit unnecessary transitions and establish residential stability, courts have introduced dispute resolution processes (e.g., mediation) to reduce interparental conflict (reducing coparental conflict post-divorce can help promote nonresidential parent involvement), a major issue in post-divorce child adjustment.

Types of Custody

In the United States, child custody addresses both physical and legal custody. Physical custody refers to where the child resides, the nature of the visitation schedule (if applicable), and other aspects of the tangible living situation and routine (e.g., who transports the child to school). Legal custody is more abstract but typically addresses the rights, responsibilities, and obligations that parents have. Physical and legal custody are not synonymous.
In many instances, parents may share joint legal custody (e.g., they share in decisions regarding where the child attends school and the nature of his or her religious upbringing), but only one parent has physical custody (known as sole custody). Research is mixed with regard to what kind of custody arrangement is ideal, with advocates for both more shared and more sole custody arrangements. Research indicates that in situations where sole physical custody is granted, the nonresident parent is generally less involved and, over time, transitions out of the child’s life (this usually represents a negative transition and substantial discontinuity for the child). The variation that exists within these two types of custody arrangements highlights the historical shift in court decisions from fathers’ property rights to children’s best interests. Thus, parental preferences are considered, but the best interests of children are paramount in child custody determinations across states and situations.

Anthony J. Ferraro
Florida State University

See Also: Adoption Laws; “Best Interests of the Child” Doctrine; Child Advocate; Coparenting; Custody and Guardianship; Divorce and Separation; Shared Custody; Social History of American Families: 1790 to 1850; Tender Years Doctrine.

Further Readings

Abramowicz, S. “English Child Custody Law, 1660–1839: The Origins of Judicial Intervention in Paternal Custody.” Columbia Law Review, v.99 (1999).

Einhorn, J. “Child Custody in Historical Perspective: A Study of Changing Social Perceptions of Divorce and Child Custody in Anglo-American Law.” Behavioral Sciences & the Law, v.4 (1986).

Mason, M. From Father’s Property to Children’s Rights: The History of Child Custody in the United States. New York: Columbia University Press, 1994.

Child Health Insurance

Health insurance coverage is critical to the growth and success of children because health affects every aspect of a child’s life. It influences a child’s self-esteem and ability to learn, achieve skill sets, and develop social networks. Moreover, healthy children grow into healthy adults: the healthy attitudes, behaviors, and habits developed in childhood carry over into adulthood. Healthy development requires periodic screenings and an emphasis on preventive care. Although the United States is one of the wealthiest countries in the world, estimates indicate that approximately 8 million American children, ages birth to 18, are uninsured. Of these uninsured children, 39 percent are white, 37 percent are Hispanic, 16 percent are black, and the remaining 7 percent are Asian Pacific Islander, American Indian, or multiracial.

Children are uninsured because their families cannot afford to buy health insurance. Estimates from 2010 indicate that a family policy costs an average of $13,770 annually, and an individual policy costs around $5,049. This expense makes coverage impossible for many low- and middle-income families. Eighty-one percent of uninsured children live in families who earn 300 percent of the federal poverty level or below. Low-income children whose families meet federal poverty guidelines can receive health care coverage through Medicaid. Children whose families exceed the Medicaid poverty guidelines but cannot afford private insurance can obtain health coverage through the Child Health Insurance Program (CHIP). It is predicted that out-of-pocket expenses will decrease as the Affordable Care Act takes effect.

Medicaid

Since the late 1960s, Medicaid, also known as Title XIX, has provided comprehensive and preventive health care to low-income children through a benefits program known as the Early and Periodic Screening, Diagnostic and Treatment (EPSDT) program. Enacted by the Social Security Act Amendments of 1967, the program was originally created to decrease the high rejection rates of military draftees, many of whom suffered from untreated but preventable childhood illnesses. Since then, EPSDT has provided comprehensive health coverage to low-income children. In 1989, EPSDT became a statutory requirement mandating that all states participating in Medicaid provide services to children and youth up to age 21, pregnant women, and children under age 6 with incomes at or below 133 percent of the poverty level.
The program was further expanded by the Omnibus Budget Reconciliation Act of 1990, which included coverage for children ages 6 to 18 with annual incomes less than 100 percent of the federal poverty level. The program now covers children from birth up to age 21. Children receive standard mandatory Medicaid benefits that include (1) inpatient and outpatient hospital services; (2) physician services; (3) laboratory and X-ray services;


and (4) the EPSDT program. The EPSDT program serves approximately 25 million children, or one in four children in the United States, and half of all Medicaid enrollees.

Child Health Insurance Program

Families who exceed Medicaid poverty guidelines can apply for the Child Health Insurance Program. In 1997, the Balanced Budget Act was passed, which created managed care plans through the State Children's Health Insurance Program (SCHIP). State Medicaid programs viewed managed care as a way to reduce costs and expand services. In 2009, the program was reauthorized through the Children's Health Insurance Program Reauthorization and Improvement Act, which also provides funding for outreach, enrollment, retention, and grants to Indian tribes. States that participate receive federal matching funds. States are given the flexibility to select their program design, which can take one of three forms: (1) an expansion of Medicaid, (2) a separate CHIP program, or (3) a combination of the two. Approximately 8 million children receive health coverage through CHIP. It is important to note that CHIP varies by state and type of program, but all states are required to provide the standard Medicaid benefits package.

EPSDT

Both CHIP and Medicaid recipients receive the health services known as EPSDT, the standard Medicaid benefits package. It provides comprehensive and preventive health services to children and young adults under the age of 21. The goal of the program is to detect health problems early and to treat acute and chronic health conditions. Screening services are provided in four areas: (1) dental, (2) vision, (3) hearing, and (4) medical, including mental health. Medical screenings include a comprehensive physical exam, a comprehensive developmental history, immunizations, laboratory tests, and health education as needed. States are given flexibility in setting the frequency and timing of screenings.
After a medical condition has been diagnosed, Medicaid must pay for the necessary medical treatment to correct the problem. Equally important, the EPSDT program provides more comprehensive services for children with disabilities than private insurance plans do, rendering it a highly valuable program for low-income families with special needs.


Underutilization

Given the pivotal role that health care plays in positive health outcomes for children, the importance of health coverage for low-income children cannot be overstated. However, access and utilization have been problematic; the program is not fully used by eligible children. Obstacles to utilization are either program related or beneficiary related. The two main program-related obstacles are low provider participation in Medicaid and barriers in the eligibility process. Low participation among pediatricians, dentists, and mental health providers is caused by low reimbursement rates and excessive paperwork, making access to care difficult for Medicaid beneficiaries. In addition, parents find Medicaid eligibility a complex process; the paperwork is burdensome and complicated, creating barriers to participation for eligible beneficiaries. Beneficiary-related issues are numerous. Once enrolled in Medicaid, beneficiaries face further obstacles, including unreliable transportation, inflexible work hours, language barriers, lack of continuity in primary care services, and geographic maldistribution of network providers. These factors are exacerbated by the fact that children enrolled in Medicaid who receive services through managed care organizations often receive a lower quality of care than their privately insured counterparts in the same managed care organization.

Lorenda A. Naylor
University of Baltimore

See Also: Medicaid; National Center for Children in Poverty; Obesity.

Further Readings

Children's Defense Fund. "Who Are the Uninsured Children, 2010: A Profile of America's Uninsured Children." http://www.childrensdefense.org/child-research-data-publications/data/data-unisured-children-by-state-2010.pdf (Accessed December 2013).

Rosenbaum, Sarah. "The Proxy War—SCHIP and the Government's Role in Health Care Reform." New England Journal of Medicine, v.358/9 (2008).

U.S.
Department of Health and Human Services, Centers for Medicare and Medicaid Services. Medicaid State Manual. http://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/

Paper-Based-Manuals-Items/CMS021927.html (Accessed December 2013). U.S. Department of Health and Human Services, Centers for Medicare and Medicaid Services. Medicaid Early and Periodic Screening and Diagnostic Treatment Benefit Overview. http:// www.cms.hhs.gov/MedicaidEarlyPeriodicScrn (Accessed December 2013).

Child Labor

Social attitudes and practices regarding child labor have varied widely over time. Once considered acceptable and even necessary for the economic well-being of the household, most forms of child labor are now regarded as exploitive and generally harmful to child well-being. The shift away from child labor was the result of a new attitude toward childhood that emerged more than two centuries ago, which identified children as different from adults and in need of protection. Today, labor market participation among children who are age eligible is intended to be a positive socializing force in their lives. That is not to say that all egregious aspects of child labor have disappeared from contemporary society. Outside the North American and Western context, child labor can be characterized as both commonplace and hazardous. Although relatively rare in comparison, child labor also persists in the United States, generating greatest concern when it results in injury or death.

Child Labor for Survival

Throughout much of history, children played a vital economic role in the household as laborers. In agrarian societies, life was short and difficult. High rates of child mortality made it necessary for households to produce numerous offspring, and children who reached the age of 6 or 7 were treated no differently from adults. Thus, from a very early age, children worked alongside adults, performing the many tasks needed to ensure the survival of the household. The Industrial Revolution shifted the economic engine of Western society from agriculture to manufacturing, and in so doing, also transformed the world of work. Technological advances in farming



methods reduced the need for agricultural workers, sending laborers in droves to the manufacturing mills that quickly sprang up in the cities. While it might be anticipated that this would diminish demand for child labor, rates were highest in the early stages of industrialization. Child labor was especially prevalent in industries such as textiles and mining, where children's small hands and nimble fingers could complete tasks much more efficiently than adults could.

The Decline of Child Labor

As industrialization progressed, ushering in substantial increases in wealth and living standards, the health of the population began to improve and child mortality rates began to fall. As their survival into adulthood became more certain, children began to be seen in a new light. According to sociologist Viviana Zelizer, parents began to shift their fertility preferences: instead of producing a great number of children (pursuing quantity), they directed their efforts toward investing in each child's future (achieving greater quality). In this process, children were no longer valued for their economic contribution to the household; instead, their worth was measured in terms of their future potential as

The N. Y. Button Works on West 19th Street in New York searches for young male workers in 1916, the same year the Keating-Owen Act was enacted to attempt to regulate child labor.


adults, making them emotionally priceless. This new view of children as vulnerable and in need of protection and guidance fueled social reforms to remove children from the labor market and launched a new system of compulsory education for all children. Funneling children into the education system made it possible to fully sequester the world of children from the world of adults.

Introduction of Child Labor Laws

After the mid-19th century, child labor rates began to drop as laws were enacted to protect children. The earliest child labor laws prohibited the employment of very young children, whereas subsequent laws gradually imposed greater restrictions on the work hours of older children. These changes occurred in a piecemeal and protracted fashion. For example, several states enacted laws in the 1840s that prohibited children under the age of 12 from working more than 10 hours per day in factory work, but it proved much more difficult to pass legislation restricting children from night work. Indeed, legislation prohibiting night work for children under the age of 16 failed five times in New Jersey before finally passing in 1910. Similar struggles ensued at the federal level, with an initial attempt at regulating child labor in the 1916 Keating-Owen Act ruled unconstitutional by the U.S. Supreme Court. It was not until 1938, when the Fair Labor Standards Act established 16 as the minimum age of employment in sectors other than agriculture, that child labor was regulated at the national level.

Child Labor Today

In the 21st century, employment during adolescence is viewed as a valuable experience that socializes young adults for their future roles as workers. More than 80 percent of U.S. adolescents report paid employment at some point during their high school years. Laws regulate the number of hours that adolescents can work, with limits varying by age.
Rather than the factory jobs of the industrial period, adolescents today are most likely to be engaged in the retail and service sectors, employed as cashiers, store attendants, maintenance workers, and food servers. These jobs tend to be held on a part-time basis so as not to interfere with schooling, and because they are entry-level positions, few pay much more than the minimum wage.


This is not seen as problematic because these earnings are generally not needed to maintain the financial well-being of the household; rather, employment provides an opportunity for adolescents to learn how to manage money and appreciate the value of hard work. Nonetheless, in households that are poor, headed by a single parent, or home to numerous children, the earnings of adolescents may be used to supplement household income. In such situations, adolescents may feel pressure from their families to increase their work hours at the cost of finishing their education, with the sometimes unforeseen consequence that they become trapped in the low-wage labor market long term. Thus, work hours are considered an important predictor of whether employment confers a net benefit during adolescence, with the risk of poor outcomes increasing when work hours exceed 20 per week. Beyond this, there is little empirical research linking work conditions and experiences during adolescence to youths' positive development and a successful transition to adulthood.

Ongoing Concerns

Not all of the oppressive or detrimental aspects of child labor have been relegated to history. Although laws prohibit child labor in North America and other parts of the world, the practice persists. In 2008, the International Labor Organization (ILO) estimated that more than half of the world's 215 million child laborers (defined by the ILO as children who engage in work that exceeds light duties or their age-appropriate abilities) encounter hazardous conditions such as night work; excessive hours of work; exposure to physical, psychological, or sexual abuse; work with heavy machinery; and work in confined spaces. Issues around child labor also exist in the United States, where it is estimated that as many as 300,000 children under the age of 18 work illegally on an annual basis.
Children who work illegally are not only more poorly paid than those who are legally employed, but are also more vulnerable to exploitation and injury. Nearly half of work-related deaths among U.S. children occur in the agricultural industry, with the majority of these deaths occurring on family farms that are exempt from child labor laws. There are ongoing concerns about the need to better measure instances of illegal child labor and find

ways of reducing work-related accidents among child laborers.

Lisa Strohschein
University of Alberta

See Also: Fair Labor Standards Act; Family Farms; Industrial Revolution Families; National Child Labor Committee; Primary Documents 1916 and 1943.

Further Readings

Hindman, Hugh D., ed. The World of Child Labor: An Historical and Regional Survey. Armonk, NY: M. E. Sharpe, 2009.

Rosenberg, Chaim M. Child Labor in America: A History. Jefferson, NC: McFarland, 2013.

Zelizer, Viviana. Pricing the Priceless Child: The Changing Social Value of Children. Princeton, NJ: Princeton University Press, 1985.

Child Safety

According to the Centers for Disease Control and Prevention, more than 9,000 children die annually (about 25 per day) from accidents, and almost 9 million children are treated in hospital emergency rooms each year. Death rates from accidents are highest for males and poor children. Males aged 15 to 19 have the highest rates of ER visits, hospitalization, and death. Children less than 1 year old primarily die from unintended suffocation or accidental strangulation. Drowning is the main cause of injury death among children ages 1 to 4, and traffic accidents (car, bike, motorcycle, and pedestrian) among those 5 to 19. The rate of accidental deaths in the United States is very high, far greater than in Great Britain and other Western countries. The cost of the injuries that children received in accidents exceeded $200 billion in 2000. Child safety entails a variety of actions, each intended to keep children away from harm and physically, cognitively, and emotionally thriving. To assure the safety of their children, parents and other family members install adequate safety equipment, teach appropriate behaviors, and keep children away from dangerous objects and substances. As the knowledge



of risks to children has become better known, child safety has come to be seen as the responsibility of teachers, counselors, and other adults who regularly interact with children, in addition to parents and family members. Better understanding of the risks to children's safety, coupled with evolving technology, has worked to produce an environment that is safer for children than ever before.

Home

Child safety begins in the home, as parents and other caregivers remove risks even before a child is born. Baby cribs, long a source of potential harm to infants, are now built so that little or no room exists between the mattress and the sides of the crib, eliminating the risk that an infant could become trapped and suffocate. Electrical cords around the house should be placed out of the way to prevent strangulation. Electrical outlets should be protected with outlet guards, which protect children from electrocution. Medicines, poisons, cleaning fluids, and other dangerous substances should be locked away so that they are not mistaken for food items. All homes should have smoke detectors with batteries changed regularly, and all parents should have a home evacuation plan in place. Kitchen utensils and knives should be kept out of children's reach, as should stools or ladders that give children access to stovetops. Bathrooms are also dangerous, and parents should never leave infants, toddlers, and young children unsupervised there because of the high risk of drowning, scalding, and other injuries. Almost 40 percent of American households contain some type of firearm, kept either for recreation or for protection. Firearms that are kept for protection are more likely to be unlocked, loaded, and stored within children's reach. Children's risks can be reduced if they are taught to assume that all guns are loaded and are not to be touched. Caregivers are encouraged to keep firearms unloaded, with safeties engaged, and locked out of children's reach.
Ammunition should be kept in a separate location.

Toys can also prove harmful to children. Small children should not be permitted to play with toys that have sharp edges or points, or small pieces that can be placed in their mouths or swallowed. Parents should follow the recommended age guidelines for all toys, and prevent younger children from playing with an older sibling's toys. Marbles, magnets, balls, and toys with parts smaller than 1¾ inches can cause

Child Safety

193

choking if swallowed, and should be kept away from children under 6 years of age. Children should be discouraged from playing on stairways or near windows because of the many accidents that occur in these locations. All stairs should be barricaded with gates that have narrowly spaced slats. Windows should be checked to ensure that children cannot fall out of them, and open windows with screens should be off limits. Alcohol, matches and lighters, plastic bags, and other dangerous objects should be kept in locked containers away from children.

When correctly and consistently used, safety equipment can protect children from many injuries outside the home. Children are best protected from injury when they use safety equipment, but this sometimes also makes them more likely to take risks, so close adult supervision is required to keep such behaviors to a minimum. When bicycling, skating, skateboarding, and riding scooters, children should always wear appropriate safety equipment, including properly fitting helmets that meet current Consumer Product Safety Commission regulations, elbow pads, and knee pads. Younger children are particularly at risk of injury because they are still developing the cognitive and motor skills necessary to perform these activities well. Many deaths are caused by head injuries that could have been prevented with a properly fitting helmet. Collisions with automobiles are also common, so teaching children to be safe near driveways and roads is critical.

Drowning is the leading cause of accidental death among children ages 1 to 4. Many drownings occur in residential swimming pools; however, other sources of open water are also dangerous. A small child can even drown in a five-gallon bucket of water. To reduce the risk of drowning, parents of children of all ages should learn cardiopulmonary resuscitation (CPR), teach children to swim, and avoid being distracted when children are playing in or near water.
Swimming pool owners need to take extra precautions to ensure that their pools and pool areas are safe and meet current safety regulations.

Another popular item in many yards is the trampoline. The American Academy of Pediatrics has discouraged the use of trampolines since 1977. Trampolines are associated with nearly 100,000 injuries annually, more than 3,000 of which require hospitalization. Although some injuries are minor, others can result in permanent paralysis. These injuries can even occur


when children are monitored, trampolines are assembled and anchored correctly, and all safety features are in place. Children's playsets and other equipment also pose risks. Caregivers should read safety guidelines and checklists when purchasing and installing playsets to help keep injuries to a minimum. It is also important to periodically check equipment to ensure that it stays in good condition.

Automobiles

Motor vehicle accidents cause a death every 12 minutes in the United States. The risk of injury to children can be greatly reduced if certain steps are taken. All passengers, regardless of age, should wear seat belts at all times. Children 10 and under should be secured in child safety seats, which are made in four sizes to adequately protect children at each stage of development. For newborns and infants who weigh 30 pounds or less, group 0 seats are rear-facing and should be used for at least the first 12 months. Group 1 seats are intended for children who weigh between 20 and 40 pounds, and can be either front- or rear-facing. Group 2 seats are for children who weigh between 33 and 55 pounds, and are usually installed front-facing. For children between 48 and 76 pounds, a group 3 seat serves as a booster that makes the car's seat belt fit more comfortably and restrains the child properly. Safe driving behaviors always benefit any child in the car, along with the driver and other passengers. This means not traveling over the speed limit, not driving aggressively, and not driving while impaired by drugs or alcohol.

Other Steps

Adults should take a variety of other precautions to ensure that children are as safe as possible. Before a child is born, proper prenatal care and an alcohol-free pregnancy will greatly increase the chances of an uneventful birth. Newborns should be breastfed if possible, which reduces the chances of contracting certain conditions and improves children's immune systems.
Throughout childhood, providing children with nutritious food and opportunities for adequate exercise will both improve the child’s present state of health and reduce the risk of many health problems in the future. Regularly scheduled doctor and dentist

appointments are also vital to maintain health and to detect possible problems. Regular check-ups also help parents determine whether their child is making appropriate physical and cognitive progress, and allow for early intervention if problems are detected. Children should also receive recommended vaccinations and immunizations. Parents should learn their family's health history to better understand the health risks their child may face and to take preventive measures. At all times, proper hand washing and hygiene procedures should be taught and enforced because these greatly reduce the chances of contracting a variety of diseases and other conditions. If children spend time with caregivers other than their parents, parents should make sure those individuals have their contact information so that they can be reached in the event of an emergency. Caregivers should also be alerted to any conditions a child has that may require special attention. Parents who are knowledgeable about their children's friends, school experiences, and other interests and hobbies are more likely to notice when something is amiss. Ensuring that children get adequate sleep will also greatly improve their health.

Stephen T. Schroth
Knox College

See Also: "Best Interests of the Child" Doctrine; Child Abuse; Child Advocate; Child Custody; Child-Rearing Practices; Domestic Violence; Foster Care; Health of American Families; National Center on Child Abuse and Neglect; Parenting.

Further Readings

Bailey, R. and E. Bailey. Safe Kids, Smart Parents: What Parents Need to Know to Keep Their Children Safe. New York: Simon & Schuster, 2013.

Howe, D. Child Abuse and Neglect: Attachment, Development and Intervention. New York: Palgrave Macmillan, 2005.

Marotz, L. R. Health, Safety, and Nutrition for Young Children, 8th ed. Belmont, CA: Wadsworth, 2011.

Sorte, J., I. Daeschel, and C. Amador.
Nutrition, Health, and Safety for Young Children: Promoting Wellness, 2nd ed. Upper Saddle River, NJ: Pearson, 2013.



Child Study Association of America

In the early 19th-century United States, cruelty to children was largely ignored. Local communities and kin networks were responsible for protecting at-risk youth, though national organizations such as the American Society for the Prevention of Cruelty to Animals investigated cases of child abuse as early as 1873. Only in the last quarter of the century did societies specifically designed for the prevention of cruelty to children begin to appear. The first two decades of the 20th century, however, gave birth to modern theories of childhood and adolescence, advanced under the auspices of the professionalizing scientific discipline of psychology. As a result, children were increasingly discussed as a protected class. Academic studies encouraged the growth of so-called child cruelty prevention societies. They also urged education for parents across the nation about the problems of raising and interacting with children. By the 1920s, this national parent education movement had become widespread across the United States. Among the several bodies involved in the parent education movement, the Child Study Association of America (CSAA) was the best known. The CSAA had first been established in 1888 as a nonsectarian organization by five wealthy New York City mothers, who were influenced by pioneering child and adolescent psychologists like G. Stanley Hall and Felix Adler. The organization was originally called the Society for the Study of Child Nature, and was renamed the Federation for Child Study in 1908. By 1912, its mainly Jewish members were educated in Freud's psychoanalysis, especially his thoughts on sex education. The organization formally incorporated as the Child Study Association of America in 1924 under Bird Stein Gans' leadership, and began to educate parents (mainly mothers) and interested professionals on child-rearing, child development, child psychology, modern parenting techniques, and family life.
Such knowledge transfer was made possible through CSAA's nationwide publications, conferences, symposia, and small parent-education groups. It drew mainly on psychology, psychiatry, and sociology, and dealt with topics like schools, sex education, money, leadership,


recreation, health, and working mothers. The CSAA largely worked with well-off mothers, who enrolled in small study groups to discuss their experiences with other parents, seek advice on their relationships with their children, or learn about the latest findings in psychology and psychiatry. Only in the second half of the 1920s did a few chapters open their study groups to parents of both sexes. Mothers, however, continued to be the CSAA’s target audience. Other than the CSAA’s headquarters in New York City, study groups met at different places such as members’ private homes, churches, and even settlement houses and housing developments in New York and other urban centers across the country. The organization possessed an international dimension as well, with affiliated chapters in Canada, Great Britain, China, and Japan. Furthermore, CSAA’s Program Advisory Service helped the association publicize its activities through radio talks, symposia, and conferences, and by helping organize effective parent–child events across the country and abroad. The CSAA also instituted separate children’s art and literature, bibliography, publications, and radio committees, whose staff and volunteer members reviewed books for children, held galleries and exhibits, and circulated pertinent publications. One such publication was Child Study, a parent education periodical regularly published from 1923 to 1960. In addition, the CSAA published books on various topics, including Helping Parents of Handicapped Children—Group Approaches (1959), Children of Poverty—Children of Affluence (1967), You, Your Child and Drugs (1971), What to Tell Your Child About Sex (1974), and Television: How to Use It Wisely With Children (1976). During the middle decades of the 20th century, the CSAA increasingly shifted its attention to professional training and consultation services, and by the 1970s, almost entirely focused on professional training. 
Family agencies, social workers, medical doctors, and nurses all benefited from CSAA programs funded by third parties, such as the Family Service Association of America and the U.S. Children's Bureau. One such CSAA program, started in 1969 in the South Bronx, New York City, employed trained sociologists, social workers, and medical doctors to educate parents about health services. The association was by this time working to become an accredited educational institution. Yet,


that plan failed in the face of financial difficulties. In 1972, the association formally dissolved and turned its assets over to Wel-Met Inc., an organization established by the Metropolitan League of Jewish Community Associations in 1935. In 1973, CSAA merged with Wel-Met, and the merger resulted in a diversification of Wel-Met's regular programs. Further financial problems, however, led to a second merger, with New York City's Goddard-Riverside Community Center, in 1985. After almost a century, this second merger put an official end to CSAA's original educational philosophy and curricula.

The reach of CSAA's work, however, seems not to have been as broad and popular as its numerous activities throughout much of the 20th century might suggest. Not unlike its origins among affluent New York City women, the CSAA remained by and large an upper-class organization that did not often succeed in relating to parents from other socioeconomic backgrounds. Furthermore, the organization was not as popular among fathers as it was with mothers. This fact can be attributed as much to the nation's slow move toward embracing modern fatherhood and Americans' selective adherence to 19th-century rules of domesticity as to the organization's gendered membership rules and its failure to attune its programs to the needs of both parents.

Mahshid Mayar
Bielefeld University
Brian Rouleau
Texas A&M University

See Also: Adolescence; Books, Children's; Child-Rearing Experts; Family Counseling; Freud, Sigmund; Hall, G. Stanley; Parent Education; Parenting.

Further Readings

"Child Study Association of America Records." Social Welfare History Archives, University of Minnesota. http://special.lib.umn.edu/findaid/xml/sw0019.xml (Accessed August 2013).

Eisenmann, Linda. Historical Dictionary of Women's Education in the United States. Westport, CT: Greenwood Press, 1998.

"Guide to the Child Study Association of America Collection." Bankstreet College of Education.
http://bankstreet.edu/archives/special-collections/ csaa-collection (Accessed August 2013).

Jones, Kathleen W. Taming the Troublesome Child: American Families, Child Guidance, and the Limits of Psychiatric Authority. Cambridge, MA: Harvard University Press, 1999.

LaRossa, Ralph. The Modernization of Fatherhood: A Social and Political History. Chicago: University of Chicago Press, 1997.

Pearson, Susan J. The Rights of the Defenseless: Protecting Animals and Children in Gilded Age America. Chicago: University of Chicago Press, 2011.

Sealander, Judith. Private Wealth & Public Life: Foundation Philanthropy and the Reshaping of American Social Policy From the Progressive Era to the New Deal. Baltimore, MD: Johns Hopkins University Press, 1997.

Child Support

In an intact family, there is no particular duty of child support. As long as there is no abuse or neglect, parents can provide (or not provide) for their children as they see fit, without state interference. However, if a family is not intact (that is, the child's parents do not reside together), the issue of child support arises. Child support has been interpreted as a child's right to receive support from his or her parent(s) beginning at birth, even before the determination of paternity. However, a support order cannot be established for a child born to unmarried parents until paternity has been established. Once paternity is legally established, a child has legal rights and privileges, such as inheritance, medical and life insurance benefits, and social security and/or veteran's benefits. Through statutorily adopted formulae, the amount of child support owed is determined in conjunction with whether a parent has child custody (or parental time-sharing), the amount of time spent in each parent's custody and care, and the number of children.

Early America

Colonial America adopted the English "paternal preference" rule for custody determinations. As a result, from the colonial period through the early 19th century, mothers seldom won custody of their children in divorce cases. By the 1850s, the presumption in custody cases had changed, and "maternal



preference” became the rule. However, in the 19th century, newly divorced mothers nearly always fell into poverty, whatever their predivorce social standing By the 1920s, the Tender Years Doctrine, whereby upon divorce children from birth through age 7 were routinely placed into their mother’s sole custody, was the majority rule for child custody among the then-48 states. Historically, the duty of child support arose from the English law dictating the father’s right to the custody and services of the child. Thus, the duty of support ended if the father lost custody upon divorce. In English common law, there was no paternal duty of support postdivorce. Despite the English common law precedent, early in the 19th century, U.S. courts began to hold that fathers had a legal duty to support their children. In addition, as an early precursor to child support, U.S. states adopted portions of the Elizabethan Poor Law of 1601, which created a duty for parents to provide for their minor and adult children if those children were otherwise to become “paupers.” By the late 1800s, at least 11 states had criminalized paternal abandonment or nonsupport of minor children by their fathers. Modern Law Under modern law, the duty of child support is independent of custody rights. Child support is owed until a child reaches 18 years of age, or graduates from high school. The exception to this general rule is mentally or physically disabled children, who are incapable of self-support. In those cases, child support is indefinite (and often for the life of the disabled child). Although the states are divided on the issue, some states require parents to pay post-majority support for education-related tuition (such as for college, graduate school, professional school, college preparatory school). Child support paid during a tax year is generally tax deductible from the payor parent’s income, and is reported as taxable income to the payee parent. 
Typically, child support payments are not dischargeable in bankruptcy.

In common law, a stepparent had no legally enforceable support obligation for a stepchild during the marriage to the child's parent. If such an obligation arose during the marriage, it was terminable at will by the stepparent by divorcing or separating from the child's natural parent. Thus, historically, upon divorce from a child's parent, stepparents had no duty of continued child support. Today, with the increase in the number of stepfamilies, some states have statutorily imposed limited support obligations.

The Guidelines

In the United States, family law rules and regulations have been state driven, rather than federally mandated. The calculation and collection of child support is no exception; as a result, by the 1980s, approximately 54 different plans were in place across the 50 states, Washington, D.C., and the U.S. territories. Despite federal legislation dating back to the 1950s that mandated interstate enforcement of child support orders, it was nearly impossible for the majority of parents to collect payments. The federal Child Support Enforcement Amendments of 1984 (CSEA) required states to develop formula-based child support guidelines. Because the CSEA guidelines were merely advisory, in 1988 Congress passed the Family Support Act, which required states to adopt and apply presumptive (though rebuttable) child support guidelines. The intent of the guidelines is to reflect parents' expenditures for their children.

The Child Support Recovery Act of 1992 (CSRA) made it a federal crime to willfully fail to pay child support for a child living in another state if the arrearages exceed $5,000 or if child support is unpaid for longer than a year. The CSRA was strengthened in 1998 by the addition of felony offenses with greater penalties. The Federal Consumer Credit Protection Act (FCCPA) limits the amount that can be withheld from wages to 50 percent of disposable income for a payor with a second family and 60 percent for a payor without a second family; if payments are in arrears for 12 or more weeks, these limits can be raised by 5 percent each. In 1996, Congress passed sweeping welfare reform, the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), which included significant changes to child support enforcement programs to automate and increase collections.

Nationwide, there are three model child support guidelines utilized by the states.
The income shares model, followed in 37 states, Guam, and the U.S. Virgin Islands, is based on the concept that the child should receive the same proportion of income that he or she would have received in an intact family. Therefore, both parents' incomes are added together; the actual expenses for child care and extraordinary medical care are added; and finally, the total amount is prorated between the parents based upon their proportionate shares of income. The percentage of obligor's income model, followed in 10 states and Washington, D.C., assumes that the custodial parent will provide for the child without being ordered to do so. There are two variations: flat percentage (the percentage is based upon the number of children) and varying percentage (the percentage varies according to the payor's income). The Delaware Melson formula model is used in three states and is based upon the theory that children's needs are primary; therefore, parents are entitled to keep only sufficient funds to meet their basic needs and retain employment. The Melson formula is a hybrid of the income shares and percentage of income models. Either parent can petition to have a child support order reviewed at least every three years, or when there is a substantial change in circumstances.

Child Support Collection

Nationwide, child support collection is a federal, state, local, and tribal partnership. The federal child support program was established in 1975, under Title IV-D of the Social Security Act. Beginning in 1984, federal statutes required states to enact and enforce child support guidelines; the guideline amount serves as a rebuttable presumption of the correct amount. State and tribal offices perform a number of child support–related duties, namely, locating noncustodial parents, establishing paternity, establishing and enforcing child support orders, modifying child support orders, and collecting and distributing payments. In 2010, 62 percent of all U.S. custodial families participated in the federal/state/tribal-sponsored Child Support Program.

States use a number of methods for collecting child support. The majority (70 percent) is collected through payroll and income withholding.
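The arithmetic behind the income shares proration and the FCCPA withholding limits described above can be sketched in a few lines. This is only an illustration: actual state guidelines rely on statutory schedules and many adjustments, and all dollar figures and function names here are hypothetical.

```python
def income_shares_obligation(income_a, income_b, basic_obligation,
                             child_care=0.0, extraordinary_medical=0.0):
    """Prorate a combined support obligation between two parents in
    proportion to their incomes (the income shares concept)."""
    total_income = income_a + income_b
    if total_income <= 0:
        raise ValueError("combined parental income must be positive")
    total_obligation = basic_obligation + child_care + extraordinary_medical
    # Each parent owes a share proportional to his or her income.
    return (total_obligation * income_a / total_income,
            total_obligation * income_b / total_income)


def fccpa_withholding_cap(disposable_income, supports_second_family,
                          weeks_in_arrears=0):
    """Maximum portion of disposable earnings that may be withheld under
    the FCCPA limits: 50% with a second family, 60% without, plus 5%
    when payments are 12 or more weeks in arrears."""
    cap = 0.50 if supports_second_family else 0.60
    if weeks_in_arrears >= 12:
        cap += 0.05
    return disposable_income * cap


# Hypothetical case: parent A earns $4,000/month, parent B $2,000/month,
# with a $900 basic obligation plus $300 in child care expenses.
a_share, b_share = income_shares_obligation(4000, 2000, 900, child_care=300)
print(a_share, b_share)  # 800.0 400.0
```

With a combined income of $6,000 and a $1,200 total obligation, parent A (two-thirds of combined income) owes $800 and parent B owes $400, mirroring the proration the guidelines describe.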
Extreme cases result in asset seizures, property liens, denial of passports, and interception of federal and state tax refunds. States are required to distribute most child support payments within two days of receipt. The majority of states charge interest on unpaid child support.

Ending Child Support

Child support payments end before the child reaches the age of majority upon the death of the obligor parent; upon the emancipation of the minor recipient child; if a child leaves the parental home and refuses to return; if the child becomes employed and can provide for himself or herself; if the child is adopted by a parent who replaces the obligor; if parental rights have been terminated; or if the child dies.

Cynthia Hawkins DeBose
Stetson University College of Law

See Also: Alimony and Child Support; "Best Interests of the Child" Doctrine; Child Custody; Child Support Enforcement; Custody and Guardianship; Deadbeat Dads; Divorce and Separation; Father's Rights; Fatherhood, Responsible; Paternity Testing; Shared Custody.

Further Readings
Hansen, Drew. "The American Invention of Child Support: Dependency and Punishment in Early American Child Support Law." Yale Law Journal, v.108 (1999).
U.S. Census Bureau. "Custodial Mothers and Fathers and Their Child Support: 2009." http://www.census.gov/prod/2011pubs/p60-240.pdf (Accessed September 2013).
U.S. Department of Health & Human Services. Administration for Children & Families. "Child Support Handbook" (February 28, 2013). http://www.acf.hhs.gov/programs/css/resource/handbook-on-child-support-enforcement (Accessed September 2013).
U.S. Department of Health & Human Services. Administration for Children & Families. "FY2012 Preliminary Report." http://www.acf.hhs.gov/programs/css/resource/fy2012-preliminary-report (Accessed September 2013).

Child Support Enforcement

Children in single-parent families may receive support from nonresident parents or from the state. Over the past three decades, public spending on child support (i.e., welfare) has decreased while a new system of private child support enforcement has emerged. This shift developed out of concern over rising numbers of children in single-parent families and the poverty that many of them experience. The current system of child support enforcement, directed and financed by the federal government, represents an effort to combat child poverty by transferring child support responsibility from taxpayers to nonresident parents. However, despite numerous reforms, intensive debt collection techniques, and significant public expenditure, a large percentage of children in single-parent families continue to live in poverty.

Child Support

Financial support for children in single-parent families has traditionally come from two sources: the government, typically in the form of welfare, and nonresident parents. The private duty of child support was historically tied to the right to custody. For centuries, fathers held the right to custody and bore the accompanying support obligation. Following the emergence of the Tender Years Doctrine in the mid-19th century, mothers were increasingly granted custody rights to their children, especially if the children were under 7 years old. However, economic and social conditions meant that few could satisfy the corresponding support obligation. Courts responded by issuing child support orders against nonresident fathers, thus decoupling the right to custody from the duty to support. By the 20th century, child support orders had become a standard feature of family law, an area overseen by state governments and administered by individual judges.

Children in single-parent families have long received economic support from the state. In 1935, Congress passed Title IV-A of the Social Security Act, creating what would become the Aid to Families with Dependent Children (AFDC) program. AFDC benefits were essentially a substitute for private child support.
Originally meant to provide support to widowed mothers and their children, by the 1960s the program had expanded to offer benefits to divorced, separated, and never-married mothers.

Child Support Enforcement

By the 1970s, a rise in the divorce rate and in the number of out-of-wedlock births meant that more children than ever were living in single-parent families. Many of these children were poor, and the federal government became concerned about the cost of providing them with support. Congress responded with a series of reforms, enacted over the next three decades, aimed at strengthening the private child support system. These reforms coincided with a decrease in welfare spending. The effect was to transfer child support responsibility from taxpayers (and mothers) to nonresident parents, normally fathers.

The first reforms were enacted in 1974, when Congress added Title IV-D to the Social Security Act. The legislation established the Office of Child Support Enforcement, and directed states in receipt of AFDC funds to establish their own child support enforcement agencies to impose and enforce child support obligations. AFDC applicants had to assign rights to uncollected child support to the state and cooperate in establishing paternity and securing child support orders. The legislation also made child support enforcement resources available to parents who were not welfare recipients.

Because child support orders fell under state jurisdiction, variability in awards was common. Many custodial mothers did not have support orders, and award amounts were low. In 1984, Congress passed the Child Support Enforcement Amendments (CSEA), establishing recommended child support guidelines. The CSEA also required states to increase collection of support payments from nonpaying parents through seizure of tax refunds, liens against property, and wage withholding. In 1988, Congress required states to enact rebuttably presumptive child support guidelines.

In addition to becoming more bureaucratized, child support enforcement was increasingly automated. A series of 1988 amendments required automatic wage withholding upon issuance of a child support order. In 1986, Congress had passed the Bradley Amendment, which triggered the imposition of legal remedies as soon as a payment was missed.
Major reforms to child support enforcement occurred with the 1996 passage of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA). The act ended AFDC and replaced it with block grants. To qualify for block grants, states had to operate a Title IV-D child support enforcement program and implement new collection measures. These included information sharing between child support enforcement authorities and employers, as well as between authorities and financial institutions, for the purpose of seizing assets from nonpaying parents. States were also required to deny certain nonpaying parents passports and establish mechanisms for revoking various licenses as punishment for nonpayment.

The ultimate penalty for nonpayment of child support is jail. State criminal penalties for nonpayment have existed since the turn of the 20th century. In the 1990s, two new federal crimes were introduced to target nonpaying parents. Thousands of nonpaying parents are jailed each year, sometimes regardless of their financial ability to pay.

Current Picture of Child Support Enforcement

Since the 1990s, numerous reforms have been enacted to strengthen the private child support system. Significant public spending has been directed to establishing and enforcing child support orders (the Office of Child Support Enforcement spends approximately $4 billion per year). Child support enforcement has become the most intense form of debt collection, with nonpaying parents facing increasingly severe penalties.

Yet, child support enforcement has had relatively little impact. Approximately one-quarter of single parents and their children continue to have incomes below the poverty level. Only about a third of eligible custodial mothers are awarded child support, and only half of these mothers receive full payment. While more child support is collected than in the past, this is not necessarily related to enforcement efforts. For example, money now flowing through the system because of automatic wage withholding might previously have been transferred informally from a nonresident to a custodial parent. Automated collection and enforcement also carry risks, such as arrears accumulating in error, which can lead to serious consequences for nonresident parents, including jail. The sad reality is that not all parents can afford to support their children.
Transferring responsibility for child support to these parents will not improve children's economic circumstances. In these cases, public child support remains a real necessity.

Claire Houston
Harvard University

See Also: Alimony and Child Support; "Best Interests of the Child" Doctrine; Child Custody; Child Support; Custody and Guardianship; Deadbeat Dads; Divorce and Separation; Father's Rights; Fatherhood, Responsible; Paternity Testing; Shared Custody.

Further Readings
Comanor, William S., ed. The Law and Economics of Child Support Payments. Northampton, MA: Edward Elgar, 2004.
Garfinkel, Irwin, et al., eds. Fathers Under Fire: The Revolution in Child Support Enforcement. New York: Russell Sage Foundation, 1998.
Oldham, J. Thomas and Marygold S. Melli, eds. Child Support: The Next Frontier. Ann Arbor: University of Michigan Press, 2000.

Childhood in America

Childhood is generally defined as the period of development between birth and adolescence that includes a preoperational phase, when the child is very dependent upon assistance from adults for survival. The operational phases of childhood include toddlerhood, when the child is learning to walk; early childhood, when the child is learning through play; and school-age preadolescence, when the child is learning socialization. The concept of childhood is a social invention—an idea that is re-envisioned in every age and culture. While the stages of childhood are universal, childhood in the United States has dramatically changed to reflect the diversity of and challenges to family life from external forces. While most children, being malleable, naturally make themselves into what society expects of them, in the United States two coexisting gender-specific experiences of childhood evolved over the centuries. Public policy, ethnic rivalries, and social media all influence the lives of children in positive and negative ways, as the roles of parents in rearing children have become less absolute.

Natural Qualities of Childhood

Many adults fondly remember childhood as a time of innocence; childhood has traditionally been understood as the time in which young people engage in the processes that lead to socialization and reasoning. It has evolved with changing societal, economic, and family configurations, as well as parental expectations. Childhood is a construct of how adults perceive "child-ness," as well as of the environments and societal attitudes in which children live.

Child's play is not simple or mindless; it is important to the understanding of childhood. Play, when controlled by adults, parents, or teachers, can be an important part of childhood, but it does not offer the same level of opportunity for developing creativity, leadership, and group dynamic skills. Undirected play allows children to learn how to work in groups, share and negotiate, resolve conflicts, and be self-advocates. Urban street culture and the secret worlds sustained and shaped by children are strongest between the ages of 7 and 12, when many children play with little supervision from parents.

Colonial America Until 1790

Children were viewed as young members of adult society, and were quickly prepared to serve functional roles. During the 1600s, a change occurred when adult society began to view children as different from themselves and sought to protect them from perceived adult dangers. Europeans, who were confronted with high infant mortality rates, were impressed by the affection demonstrated between Native American mothers and their children. Native peoples had different practices for bathing and exposing infants to natural elements, and the duration of breastfeeding was longer than for Europeans. Native American children remained close to their mothers while the mothers worked, nestled and transported in a cradleboard. At 3 years old, Native children were weaned, and the child was immersed in lessons on how to survive and contribute to the community.

For European immigrants in colonial America, where the family was the most important social institution, children had recognized status that entitled them to the necessities of life.
To some degree, Puritan parents believed that childhood was the time in which they were to shape behavior by breaking their offspring's will. Childhood innocence was both sacred and secular; society felt that children would be corrupted by adults and life experiences and needed the right training while they were malleable and could undergo religious conversion to be saved. Children were treated like little adults and received education that would make them useful within the community. Culturally dependent upon England, the North American colonies increasingly became focused on the perils of being dependent.

British philosopher John Locke (1632–1704) was a proponent of teaching parents to mold character and shape values during childhood through experience, and to develop the reasoning skills needed to fuel an enlightened industrial society. Education was combined with entertainment because it was thought that children who enjoyed learning progressed faster than those who dreaded it. Locke suggested that children should be able to play freely outdoors during childhood; learn self-discipline at an early age; learn good behavior from parents who model it; be allowed to choose their own occupations; receive religious education early; and be taught to read as soon as they learn to talk, for both amusement and moral education. Locke's model of childhood was thought to help children develop characteristics that were inherently noble, courageous, and virtuous.

Childhood and Institution Building: 1790 to 1890

After the American Revolution, parents hoped to rear sons who would grow up to be stalwart citizens and public-minded men. Most people resided in rural communities with small populations, where nearly every family grew crops and raised animals. Poverty challenged families, even in a land of abundance, and education during childhood became the key to affluence. Fortitude, the strength of mind necessary to endure great adversity and adapt during periods of tribulation, was necessary for survival, while dissipation, wanton self-indulgence, and the scattered, wasteful use of resources were undesirable.

In pre-industrial America, most farm and housework was divided by age and gender into men's, women's, and children's tasks. The Industrial Revolution dramatically changed the dynamics of families and the lives of children as fathers increasingly left the farm to work in more industrial workplaces.
Thus, the work of men became more opaque to children and women, and mothers were left with increasing work in the household. During the Industrial Revolution, children became increasingly separated from household production and were relegated to feminized "unproductive" work within the home, where tasks became increasingly gender-oriented, so that boys were encouraged to be active, and girls were groomed for passive maternal roles.


Changes in the perceptions of childhood and subsequent child-rearing practices tended to reflect what was going on in other parts of the world, specifically in Europe. German parents were more likely than British parents to lavish their children with handmade toys because of religious practices. The most salient characteristic of Pennsylvania Germans, despite the fact that many lived in extreme poverty, was their loyalty to the German language and its locally evolved dialect, known as Pennsylvania Dutch, which was sustained in homes, churches, and parochial schools. The process of Americanizing Pennsylvania Germans (the era's largest white ethnic group) between 1790 and 1840 reflected a distinct perspective on nationalization that showed how Pennsylvania was culturally linked to the Atlantic world and beyond. A legacy of English Reformation traditions, shaped by Old World Quaker, Puritan, and Presbyterian sensibilities, dictated "a ritual calendar devoid of most religious holidays," whereas Pennsylvania Germans continued to observe Christmas, Good Friday, Easter, and Pentecost.

When adult males left the homestead to work and female labor was increasingly relegated to the family household, women sought more moral and political input in causes important to family stability, such as the abolition of slavery, child labor reform, child welfare, and women's suffrage. Stories during this time were designed to create shared notions of what childhood should be like, beyond the nursery, park, and classroom.

Childhood Play and Ephemera

Individual experiences of childhood in the 19th century were defined by the environment. Children were prized for who they were, not for what they might become. Play allowed children to engage and interact with their immediate environment. Unstructured play encouraged creativity and imagination. Interacting with other children during play fostered friendships and taught problem solving and social skills.
For boys, this meant learning about the dangers of active outdoor play in fields and forests, while for girls, active play remained closer to home. Pets were a common aspect of childhood, teaching young individuals about companionship, trust, social interaction, self-esteem, taking responsibility, kindness to smaller creatures, and death; for younger children, having a pet helped develop basic motor skills and inspired physical activity.

The popular American children's rhyme "A Visit from St. Nicholas," first published anonymously in 1823 and attributed to Clement Clarke Moore, ignited the practice of giving books and toys to children at Christmas. McLoughlin Brothers, established in 1828, became the largest manufacturer of paper dolls, trading cards, and juvenile popular culture ephemera in the United States. Doing paperwork at home became an important part of "nonproductive" childhood; children gathered stashes of discarded paper that was no longer useful and created games and stories. Between 1850 and 1950, emerging consumerism dictated that children have their own toys, furniture, literature, games, and social sphere. When children had no means to purchase fine chromolithographed images, they scrounged around the house for materials and common household items from food containers, newspapers, and advertising premiums culled from door-to-door salesmen. Children saved and displayed greeting cards, postcards, and embossed paper products in scrapbooks.

The financial and emotional stability of many families was shattered during and after the Civil War. Women were left to support families when husbands, brothers, and fathers were killed or disabled. Children collected and adapted the artifacts of ordinary life; they recycled and repurposed ephemera to make new things. Children were once again allowed to take on more productive roles in supporting the war effort, and they produced items sold during a flurry of "children's fairs" that helped raise cash and supplies for local companies and army hospitals. A childhood activity from this time was captured in time capsules that children created with discarded items from around the house. Board games were a marketed response when children were once again relegated to unproductive roles within families. Draftsman and lithographer Milton Bradley launched a new game company in 1860.
His first endeavor was the Checkered Game of Life, which presented fundamental semantics of the Civil War era. Bradley filled the need of soldiers to fill idle hours between skirmishes when he started assembling a small lightweight kit of games that included chess, checkers, backgammon, dominoes, and the Checkered Game of Life. Charitable organizations on the Union side soon ordered Bradley’s Games for Soldiers to distribute to troops, and the soldiers carried their love of these games



back to children at home after the war. Improved printing technology made trade cards, die cuts, greeting cards, sheet music, and other types of paper ephemera attractive, affordable, and collectable. When Christmas became a national holiday in 1865, Americans expanded the practice of exchanging handmade or inexpensive toys and gifts among a wide circle of acquaintances and charities, and the holiday became favorite time for creating childhood memories. The demographic shift from rural to urban made children’s impoverished living conditions in cities much more noticeable. This led to a reform movement to address children’s rights in densely populated, poorly regulated, unsafe, and filthy cities that teemed with class, ethnic, and racial differences. By the late 1800s, childhood learning, in particular the kindergarten movement, was linked to nature through the concept of the child garden, which was the notion of middle-class parents seeking to develop the special qualities inherent in their children. Childhood play was essential to this idea and has continued to be promoted for its ability to foster cognitive, physical, social, and emotional well-being in adulthood. Unstructured play created opportunities for children to grow physically stronger through running, jumping, and climbing, and to develop community and emotional values by developing empathy and compassion. Games, rhymes, and chants were spontaneously played by children on playgrounds, in the streets, or other places without adult supervision. The Progressive Age and Childhood: 1890 to 1920 Legislated reforms after 1900 came out of the middle-class moral movement, which attempted to project middle-class behaviors and values onto the working class (specifically working mothers). The Children’s Bureau was established in 1912 as a federal agency charged with promoting the health and well-being of mothers and children; unusual in its time, a majority of its employees were women. 
During the Progressive Era, reformers sought to improve the well-being of children in urban areas by systematic means based upon demographic research. During the 20th century, American businesses used market research to discover that childhood is generative, meaning that childhood fosters new markets more quickly than any other age group.


Psychologist G. Stanley Hall promoted the notion of adolescence with his book Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime (1904), defining it as the period between childhood and adulthood that arrived with puberty. With this new viewpoint, the duration of childhood became shorter, and it continued to shorten physiologically: girls in the 1990s started puberty about two years earlier than girls at the start of the 20th century.

Caroline Frear Burk wrote about the secret places of children, beginning in the age now known as adolescence. This part of childhood culture is well documented in children's books as secret feelings, thoughts, bad deeds, or material things that are related to strong feelings. Secretiveness in children comes from the need for self-protection, the expectation of indifferent or negative responses from others, or the fear of being misunderstood or ridiculed.

The Child-Centered Childhood

One in five children of the Great Depression lived in poverty and went hungry, and in some regions perhaps 90 percent of children were malnourished. Environment in childhood proved extremely important; with widespread poverty, parents were less responsive and more authoritarian because they had to carefully ration resources. In a dramatic cultural change, manufacturers began to directly target children as individual consumers. Families turned to the radio for information; it was the first form of mass media that included elements specifically targeted to children—entertainment shows and commercials for toys. According to news editor Arthur Brisbane, radio could bring the world together and educate the masses; he stated that "the home without the radio is a house without a window." Radio listeners prized variety. Daytime programming was geared toward stay-at-home mothers, and children's programming created a shared body of experience, entertainment content, and advertising for younger listeners.
American pediatrician Benjamin Spock revolutionized child-rearing with his Common Sense Book of Baby and Child Care (1946), which reassured new mothers of the rapidly growing baby boom generation by telling them to trust their instincts because they knew more that they thought they knew. Spock


brought about major social changes in child-rearing as young mothers in the postwar years relied more upon his published advice than upon the advice of their mothers and other family members. American social historian Lloyd deMause, in examining the overarching psychological motivations of childhood through history, found that Spock was part of a movement that asserted that the child “knows better than the parent what it needs at each stage in life.” While this perhaps gave parents the opportunity for nostalgic visits to childhood through their children, and gave children more independence to express their views, it also subverted traditional safety boundaries.

Emergence of the New Childhood
The civil rights movement and mainstream multiculturalism opened new avenues of exploration in childhood. In the 21st century, the media have at times seemed to usurp parents’ role in guiding children’s development. Play, once a way to occupy idle time, is now big business, and commercialism increasingly shapes what childhood is. Toddlers and children are often exposed to the same media content and information as adults, and many toys are geared toward programmed play that comes with a storyline intact, which many people believe stunts children’s ability to develop their imaginations. Rising rates of divorce and single-parent families have made the two-parent household less of a standard than it once was. Many children grow up shuttling between two households, with stepparents and stepsiblings; this “instability” was once assumed to be detrimental to children’s upbringing, but several generations of children have proven this stereotype false. Nevertheless, single-parent households are much more likely to be below the poverty level, and the percentage of children living in poverty in the early 21st century remains high, at 22 percent. During the 1980s, the U.S.
Congress took the unprecedented step of setting up systems for collecting child support payments, usually from fathers, to assist the large number of single mothers supporting children without significant financial aid from fathers. During the 1980s and 1990s, the study of childhood underwent a paradigmatic shift, from examining the role of children as social beings to examining how the information age has undermined the traditional understanding of what childhood is and should be.

High-profile child abductions and other tragedies have instilled a sense of fear in parents that has forced many to keep their children indoors or under the constant gaze of adult supervision. With computers, video games, and the Internet, the average child spends more than 40 hours per week with electronic media, severely decreasing, relative to previous generations, the time spent outdoors exploring nature and playing with other children face-to-face.

Meredith Eliassen
San Francisco State University

See Also: Birth Order; Books, Children’s; Children’s Rights Movement; Discipline.

Further Readings
Burk, Caroline Frear. “Secretiveness in Children.” Child-Study Monthly, v.5/8 (1900).
Calvert, Karin. Children in the House: The Material Culture of Early Childhood, 1600–1900. Boston: Northeastern University Press, 1992.
Cook, Daniel Thomas. The Commodification of Childhood: The Children’s Clothing Industry and the Rise of the Child Consumer. Durham, NC: Duke University Press, 2004.
deMause, Lloyd. The History of Childhood. New York: Psychohistory Press, 1974.
Hawes, Joseph M. The Children’s Rights Movement: A History of Advocacy and Protection. Boston: Twayne Publishers, 1991.
Illick, Joseph E. American Childhoods. Philadelphia: University of Pennsylvania Press, 2002.
Marten, James, ed. Children and Youth in a New Nation. New York: New York University Press, 2009.
Postman, Neil. The Disappearance of Childhood. New York: Vintage Books, 1994.
Steinberg, Shirley R. and Joe L. Kincheloe. Kinderculture: The Corporate Construction of Childhood. Boulder, CO: Westview Press, 2004.

Childless Couples

Couples remain childless for a variety of reasons. Even though the percentage of couples in the United States that are childless has increased in recent years, the dominant culture remains



pronatalist, especially in comparison to some European countries. By some strict definitions of family, childless or childfree couples would not even qualify. However, being childfree by choice is becoming increasingly common, and in the United States this trend is earning a respect it once lacked.

Categories of Childlessness
A major dichotomy exists among couples without children. One group considers themselves “childfree,” “childless by choice,” or “voluntarily childless”; this group does not desire to have children. The other group, the “involuntarily childless,” comprises couples who are infertile or who have lost children at any developmental stage to disease or other tragedies. Infertile couples are defined here as those who are trying, or have tried, to become pregnant or to carry a child to term, but have not been successful. Childless couples can fall into any of these groups: (1) those who actively decide to forgo children, (2) those who postpone having children because of situational factors, (3) those who are undecided and postponing, (4) those who have postponed having children past their reproductive years, and (5) those who are diagnosed with infertility.

Changing Demographics
Although there have always been couples without children, by choice or by chance, demographers looking at fertility trends are beginning to identify a new wave of adults who are choosing to remain childless. According to Abma and Martinez of the National Center for Health Statistics, rates of voluntarily childfree families varied from about 5 percent in 1982 to 7 percent in 2003. However, demographers are beginning to predict higher rates of childlessness for the future; Rovi predicts that as many as 15 percent of women will soon be childless by choice. The increase in voluntarily childfree couples is a major shift that is happening around the world.
Based on data from France, Finland, Germany, the Netherlands, and the United States, Donald Rowland reported that one-fifth to one-fourth of women born early in the 20th century remained childless. The prevalence of couples without children, both voluntarily and involuntarily, is expected to rise over the next two decades, along with the age at first marriage and the age of the first attempt to have a


child. In addition, rates of infertility are not expected to decrease any time soon; therefore, it is important to understand more about the experiences of couples without children. A trend noted by A. Thornton and L. Young-DeMarco is that individual freedom concerning family and personal behavior increased in the United States, and the strong emphasis on obedience to norms decreased, particularly between 1960 and 1985. The United States is an individualistic society that is becoming less dependent on family relationships. For example, while it was once common to invest in the family as an organization for social, financial, and emotional support, it is now increasingly common for couples to invest in non-familial organizations such as work and activities. Attitudes toward childlessness have become somewhat more positive as a result of this shift in sources of support, as well as in what society considers valuable.

Reasons for Being Childfree
There are many reasons that one may postpone having children. One very straightforward reason is that a woman or man does not have a partner with whom to have a biological child. More complex reasons include having a partner who does not wish to have children, and choosing between children and career goals. Men and women differ in their reasons for postponing or forgoing childbirth. According to research conducted by Joshua Gold, women’s reasons to postpone or forgo childbearing include an unwillingness to add to overpopulation, an unwillingness to bring a child into a world that they believe is violent or unfair, and a desire to avoid the trauma surrounding pregnancy and childbirth. In addition, women are conflicted about choosing between a career and motherhood, and wonder whether motherhood would be fulfilling if they had to give up their careers.
Furthermore, some women report valuing education, employment, and financial well-being more than parenting when considering what they need to achieve a happy life. When making decisions about their adult lives, some women focus on work before making any fertility decisions, and thus may find their fertility decisions molded by their career aspirations. Abma and Martinez suggest that some women who postpone having children later regret not


having started childbearing earlier. However, women at the highest levels of their careers are more likely to be voluntarily childless and to stop anticipating that they will combine the roles of mother and career professional; many voluntarily childless women never envisioned themselves combining the two roles. The reasons that women do not have children help determine the meaning of childlessness, and whether it is a positive or negative experience. According to Gold, men have somewhat different reasons for postponing or forgoing fatherhood. Men often base success on professional achievement, and having children may conflict with those goals. Men may also wish to forgo children for personal development, for financial reasons, out of a desire to avoid stress, because they believe they lack the skills or talents to be a good parent, or because they experience little social pressure about the decision to be childfree. Voluntarily childless couples have many justifications for deciding to forgo parenting. They might decide not to have children because their marriage is a source of satisfaction apart from child-rearing. Some individuals wish to avoid repeating a negative experience from their family of origin. Voluntarily childless couples may also choose their lifestyle because they value their personal freedom.

Impact of Postponing Childbearing
Those who postpone childbearing but intend to have children may be negatively affected by the passage of time. Time is a major factor in adulthood on two levels: biological and social. Once the reproductive window has passed, a woman’s ability to change her mind about having children is gone. Recognizing the limits of time can help a couple make a decision proactively; ignoring them allows the decision to be made by default once the couple is no longer fertile.
With contraceptives, home pregnancy tests, and legalized abortion, women can usually control reproduction, but this level of control has given couples the false idea that they can control the timing of conception. When conception does not occur quickly, couples can have a tough time adjusting to their lack of control over the process. In reality, reproduction can be quite inefficient, and postponing the process can lead to deep disappointment and frustration when a couple is unable to conceive or bring a pregnancy to term. Depending

on the value and meaning that a woman places on motherhood, she may feel like a failure if she cannot become pregnant. Often, couples trying to conceive experience a strain in their sex lives when sex becomes more about technical issues and timing and less about expressing their love for one another.

Society’s Impact
Society also has normative expectations about acceptable windows of opportunity for completing life-cycle transitions, such as having children. It is not uncommon for some to view childless couples as outside the psychological and social mainstream. Being childless can be viewed as against both nature and society. Although the word childless can have both positive and negative connotations, ranging from loss to freedom, the ideal concept of family in the United States includes two parents and a child or children. Although children do not increase economic well-being, they are seen as socially advantageous. The diagnosis of infertility may cause a devastating emotional response for couples who envisioned parenthood as a primary function of their adult lives. Parenting is valued by society because it fosters the continuity of family, culture, and the human race.

Childfree Couples
Voluntary childlessness was once more stigmatizing than involuntary childlessness, especially prior to the 1980s, when the National Alliance for Optional Parenthood (NAOP) was formed to help reduce pressures on couples to become parents and to offer support to those who chose to remain childless. Childfree couples were once treated as deviant, and their lack of desire to have children was seen as a character flaw. They were variously judged as immature, self-absorbed, selfish, irresponsible, too career focused, unmanly or unwomanly, incomplete, unloving, or child haters. In contrast, parents were judged as altruistic, mature, complete, loving, responsible, and family focused.
In reality, the childfree tend to be better educated, more likely to be employed in managerial or professional positions, higher earning, less religious, more liberal, and less accepting of traditional gender roles than those who have children. With the success of the women’s movement in the 1970s, many women felt less bound by societal pressure to conform to an ideal they did not believe in.



Marriages of Voluntarily and Involuntarily Childless Couples
When comparing the marriages of involuntarily childless and childfree couples, Victor Callan found that marital satisfaction, marital happiness, and wives’ levels of personal well-being were similar. Infertile women reported life being less interesting, emptier, and less rewarding than did voluntarily childfree women. However, infertile women also reported having more love and support from family and friends than did mothers and voluntarily childless women. Mothers and voluntarily childless women were similarly satisfied with their lives. Voluntarily childless women reported more time with their husbands, more exchange of ideas, and higher levels of consensus in their relationships than involuntarily childless women and mothers. Both voluntarily and involuntarily childless women were, in general, more pleased than mothers with the amount of freedom and flexibility in their lives.

Concerns of Childless Couples
Childless couples may feel pressure from family and society to procreate. This pressure often increases around holidays and at family gatherings, where they may feel sad or left out of activities that involve those who have children. While a childfree woman receives social messages in favor of childbearing, she may not experience these messages as pressure if she is confident about her decision to forgo parenting. The idea of control is central to the distress experienced by involuntarily childless women, compared with voluntarily childless women who have chosen their reproductive path. Infertile women who place high importance on motherhood experience the most difficulty because they have the least control over their childlessness compared to childfree and postponing women. Infertile women can struggle to find alternative sources of fulfillment.
Men are oftentimes forgotten when considering issues of childlessness, but for men who are involuntarily childless and have a great desire to be fathers, the levels of distress experienced are similar to those felt by infertile women. Involuntarily childless men often feel as though they are outsiders in family, social, and work environments. These men may experience ambiguous loss in which they grieve over children who were never born. To compensate, they may engage in risk-taking behaviors such as substance abuse, gambling, and promiscuity, and


may find it difficult to form relationships. However, some involuntarily childless men appreciate the silver lining in terms of their careers, finances, and leisure activities. Social support is integral in helping infertile individuals adjust to stress and disappointment. Ultimately, an active family and/or social network is key to the infertile couple’s ability to cope, but additional resources are available if the couple needs more support. Since 1974, the National Infertility Association (RESOLVE) has served men and women in the United States experiencing infertility or other reproductive disorders.

Special Considerations for Couples Without Children
Some couples may find themselves childless because of difficulties faced when going through the adoption process. In particular, gay couples may be subject to discrimination during the adoption process, keeping them from the opportunity to raise a child. In the experience of many childless couples, both voluntary and involuntary, pets are considered “children.” In the 21st century, the definition of family is, for most people, sufficiently fluid that childless couples, whether by choice or not, are accepted and respected as families.

Stacy Conner
Sandra Stith
Kansas State University

See Also: Abortion; Birth Control Pills; Contraception and the Sexual Revolution; Demographic Changes: Age at First Marriage; Demographic Changes: Zero Population Growth/Birthrates; Family Planning; Infertility.

Further Readings
Abma, J. C. and G. M. Martinez. “Childlessness Among Older Women in the United States: Trends and Profiles.” Journal of Marriage and Family, v.68 (2006).
Callan, V. J. “The Personal and Marital Adjustment of Mothers and of Voluntarily and Involuntarily Childless Wives.” Journal of Marriage and Family, v.49 (1987).
Earle, S. and G. Letherby. “Conceiving Time? Women Who Do or Do Not Conceive.” Sociology of Health & Illness, v.29 (2007).


Gold, J. M. “The Experiences of Childfree and Childless Couples in a Pronatalistic Society: Implications for Family Counselors.” Family Journal, v.21 (2013).
Hadley, R. and T. Hanley. “Involuntarily Childless Men and the Desire for Fatherhood.” Journal of Reproductive and Infant Psychology, v.29 (2011).
Rovi, S. “Taking No for an Answer: Using Negative Reproductive Intentions to Study the Childless/Childfree.” Population Research and Policy Review, v.13 (1994).
Rowland, D. T. “Historical Trends in Childlessness.” Journal of Family Issues, v.28 (2007).
Thornton, A. and L. Young-DeMarco. “Four Decades of Trends in Attitudes Toward Family Issues in the United States: The 1960s Through the 1990s.” Journal of Marriage and Family, v.63 (2001).

Child-Rearing Experts

A child-rearing expert is an individual or agency that possesses comprehensive and authoritative knowledge of the physical, emotional, and cognitive development of children from infancy to adulthood. For centuries, American parents have consulted expert sources for advice on how to raise children. This accumulation of advice has produced an expansive advice market that parents in the modern Western world can easily access. Beginning in the 18th century, parents began to rely less on the commonsense recommendations passed down from earlier generations and turned to information from experts to answer questions about how to raise their children. Since then, parents have remained steadfast consumers of advice produced by popular child-rearing experts. The advice offered by experts is often contingent upon the current cultural and political climate. Because of this, dominant ideologies of child-rearing come and go throughout history, and as the ideological focus changes, so too do the experts who provide the advice.

Advisers of Colonial America: Child-Rearing Experts in the 18th Century
In the 18th century, beliefs about children revolved around two opposing philosophies: rationalism and romanticism. These ideologies, popularized by John Locke and Jean-Jacques Rousseau, respectively,

took center stage in the late 1700s and profoundly influenced child-rearing advice for centuries to follow. John Locke, a physician and philosopher, was born in 1632 in England. At the age of 15, he was sent to Westminster School (the Royal College of St. Peter in Westminster), and he later earned a degree in medicine at Christ Church, Oxford. Practicing as a physician, Locke became aligned with the founder of the Whig movement, Anthony Ashley Cooper, who exerted great influence on Locke’s philosophical ideas. In 1693, Locke published Some Thoughts Concerning Education. This book, which went through at least 35 editions before the end of the 19th century, contained a substantial amount of advice about raising babies, as well as an approach to child-rearing that emphasized the need for infants to be systematically disciplined and educated. Locke proposed that children’s minds at birth are blank slates for parents to write upon. This metaphor reflected Locke’s belief that children are born without perceptions or attitudes, and instead form them through experience. Because of this, he argued that education must

Dr. Benjamin Spock (1903–98) speaking at the 1989 Miami Book Fair International. Spock, a pediatrician, wrote the 1946 book Common Sense Book of Baby and Child Care, one of the best-selling books of all time.



begin at birth, a time when children’s minds are ripe and malleable. Locke’s philosophy also served to redefine the nature of parental authority and control. While holding firm to previous notions that parents must demand complete obedience from their children, Locke believed that compliance should be gained through reason. Parents were instructed to use non-coercive, rational instruction to shape their children into civilized adults. This was in sharp contrast to previously held ideologies suggesting that parents should gain compliance through any means necessary, usually corporal punishment. Locke’s ideas permeated the homes of many middle-class American families, and they were further popularized by the Ladies’ Library, an organization that published a pamphlet in 1714 reprinting Locke’s philosophy. Jean-Jacques Rousseau, philosopher and prominent child expert of the 18th century, was born in Geneva in 1712. Rousseau’s mother died shortly after his birth, leaving him in the care of his father, who abandoned him 10 years later. Rousseau’s formative years were spent as a domestic servant, seminary student, musician, and teacher. Rousseau’s book Emile, or On Education (1762), the high point of his intellectual achievement, described methods of education that diverged from Locke’s conceptions. Rousseau argued that parents should value children’s creative nature and recognize that children possess rights, freedoms, and privileges. His philosophy revolved around the preservation of children’s innocence, and central to his work was the notion that childhood was a sacred stage of development that should be prolonged and protected. Arguing that children from 2 to 12 years old should be free to carry on as they wished in their natural environments, Rousseau believed that children at this age should have no academic training.
By the late 18th century, Rousseau’s Emile was one of the most widely read child-rearing manuals, and his philosophies saturated the practices of parents captivated by his appeal to sensibility and nature.

A New Era of Experts: Advice in the 19th Century
The ideas generated in the 19th century have played a pivotal role in shaping the contemporary ideal of parenting and family life. The philosophies of Locke and


Rousseau remained influential, even as new experts emerged to once again redefine child-rearing. Two of the most influential child experts of the 19th century were G. Stanley Hall, who received the first psychology doctorate awarded in the United States, and L. Emmett Holt, one of the nation’s first pediatricians. Hall, born in 1844, is today recognized as a central figure in the history of child psychology. As one of the founders of the academic discipline of psychology in the United States, Hall sought to fuse the natural and social sciences in his study of children. Drawing on the ideas of Rousseau and Charles Darwin, Hall argued that parents and educators should not interfere with the natural development of children. Embracing a child-centered approach, Hall believed the best method of rearing children was for parents to embrace their children’s natural impulses and imagination. The popularity of Hall’s message helped trigger the formation of child-study societies, with at least 23 societies established by the end of the 1890s. Hall’s attention to the scientific study of children appealed to middle-class mothers, who wanted to incorporate scientific knowledge into their parenting practices. Like Hall, L. Emmett Holt believed that mothers should become scientific professionals on such aspects of child-rearing as feeding and nutrition. Born in 1855, Holt quickly rose to the status of expert because of the extensive knowledge of pediatrics he garnered while studying in Europe. Influenced by Locke’s ideology, Holt promoted a parent-centered approach to child-rearing. In contrast to Hall and Rousseau, whose work was much more child centered, Holt stressed rational discipline as a means to facilitate self-control in children and peace of mind in mothers. He emphasized rigid scheduling of bathing, feeding, and other daily activities, and argued that it was the parents’ duty to guard children from germs and overstimulation.
Although differing in perspective, Hall and Holt both promoted the tenets of science in the understanding of child-rearing. In doing so, they catalyzed the child-study movement in the United States and helped shape dominant child-rearing beliefs well into the 20th century.

The Third Transition: Experts of the 20th Century
In the 20th century, the emergence of popular childhood psychology took hold of the nation, and


parenting informed by scientific knowledge became a defining feature of the American way of life. John B. Watson, born in 1878, was a psychologist who began his career studying the learning behaviors of animals before applying his theories to human children. Best known for his experimental work with children and conditioned behavior (e.g., Little Albert), Watson founded the psychological school of behaviorism. As the self-appointed successor of Holt, Watson established himself as a public figure with the publication of his book Behaviorism (1925), in which he claimed he could condition any child from birth to become any specialist (e.g., doctor, lawyer, or artist) of his choosing. Endorsing brisk and strict training rather than sensitive responsiveness, Watson placed primary emphasis on self-control and self-reliance, and believed that the ideal child would learn to be self-sufficient without any help from his parents. Watson’s prescriptions for child-rearing revolved around a strict physical and psychological program in which parents were encouraged to approach child-rearing issues (e.g., eating and sleeping) with impersonal objectivity. He warned against excessive displays of affection. Parents were instructed to let their children “cry it out” to avoid spoiling them and were encouraged to avoid kissing, hugging, or engaging in other forms of affection that might subvert discipline. Instead, Watson advised parents to communicate affection only when necessary, through a firm handshake or a pat on the head. Because of Watson’s influence, the ability to rigidly train children became the hallmark of successful parenting. The desire to follow Watson’s protocol for raising the ideal child was somewhat short lived: by the 1930s, a new ideology emphasizing affectionate child-rearing was beginning to overshadow Watson’s unaffectionate methods.
In the 1930s, the broad consensus about behaviorist methods of child-rearing began to dwindle. Instead, the belief that parents should take cues from their children, rather than trying to impose strict schedules, became the dominant ideology for the rest of the century. Charles Anderson Aldrich and Mary Aldrich prompted this paradigm shift with the publication of their book Babies Are Human Beings in 1938. The Aldrichs rejected the evidence supporting rigid child-rearing practices and pointed out that the majority of infants studied by behaviorists and

previous researchers had been institutionalized. Studies in the late 1930s and 1940s suggested that the child-rearing strategies promoted by behaviorism were more harmful to children than helpful when parents religiously followed these tenets. Returning to the philosophies of Hall, the Aldrichs encouraged mothers to enjoy their children and to be more selective in their attempts to gain complete mastery over them. As a result of a combination of well-informed scientific conclusions and doctrines that mothers found attractive, this ideology was endorsed by a host of other experts, most notably Arnold Gesell and Benjamin Spock. Arnold Gesell, a German American born in 1880, studied under Hall at Clark University. After receiving his doctorate in psychology in 1906, Gesell engaged in clinical work with mentally handicapped children. During this period, a growing research interest in the field of psychology was the measurement and understanding of general intelligence. Inspired by his experiences and this emerging movement, Gesell set out to develop a way to measure mental and physical growth. These efforts played a role in shaping his developmental point of view, a perspective that guided his future work. Gesell emphasized individual differences as an important component of child-rearing. He believed that parents must be cognizant of their children’s uniqueness and mold their child-rearing practices in a way that allows for these differences to develop and flourish. Standing in contrast to the behaviorist approach, this child-centered philosophy asserted that parents should allow their children to express themselves, and that parents should encourage growth through all developmental stages. Guided by Gesell’s conclusions, what was previously seen as misbehavior was now viewed as “age-appropriate behavior.” Benjamin Spock, an American pediatrician born in 1903, was the most trusted and popular child-rearing expert of the second half of the 20th century. 
Inspired by the work of the Aldrichs, Spock published The Common Sense Book of Baby and Child Care in 1946. Millions of parents in the late 1940s and 1950s flocked to the wisdom offered in this reader-friendly bestseller. Mirroring the sentiments of the Aldrichs and Gesell, Spock urged mothers to relax, have fun, and get in touch with their own feelings and the feelings of their children. Although Spock was not the only, or even the first, pediatrician to hold these views, his ability to




connect with many anxious parents on a personal level placed his views in the hearts and homes of many Americans. The expert advice offered by the Aldrichs, Gesell, and Spock continues to influence contemporary child-rearing philosophy.

The Contemporary Expert Ideal: Raising Children in the Twenty-First Century
Today’s parents can easily access the extensive collection of expert advice that exists on a broad array of topics. Now more than ever, parents turn to a host of different forums to seek advice from those who regard themselves as “experts.” What was once a title reserved for those most influential in the field of child-rearing, “child expert” is now a label given far less selectively to anyone who claims comprehensive and authoritative knowledge of child-rearing practices. It is not clear how the increased access to, and availability of, expert advice will shape the future of child-rearing ideology. However, experts and their philosophies will continue to capture the attention of parents who are looking for guidance about the best practices for rearing children.

Christina Squires
Louis Manfra
University of Missouri

See Also: Child-Rearing Manuals; Parenting; Parenting Styles; Primary Documents 1907 and 1922.

Further Readings
Grant, Julia. Raising Baby by the Book: The Education of American Mothers. New Haven, CT: Yale University Press, 1998.
Hardyment, Christina. Perfect Parents: Baby-Care Advice Past and Present. Oxford: Oxford University Press, 1995.
Hulbert, Ann. Raising America: Experts, Parents, and a Century of Advice About Children. New York: Random House, 2011.

Child-Rearing Manuals

From oral advice passed down by Puritan women to traces in letters and diaries to the published advice
of (often male) pediatricians, child-rearing manuals are as varied as the religious and scientific assumptions about the nature of children that prompted them. Though some decades are characterized by particular child development theories, there is usually a dynamic polarity evident across the mainstream. Are children born in a state of innocence? Or are they born in sin and in need of redemption? Are children primarily shaped by their environment or by their biology (i.e., the nature versus nurture debate)? And what role, if any, does the child's agency (or that of the parents) play in attaining developmental milestones? Such questions could be cast in either religious or scientific language.

Puritan Precursors
Puritan women who immigrated to New England were literate to some degree, and were expected to teach their children to read at an early age so that they could read and study the Bible themselves. Even so, by virtue of their gender and standing in society, combined with the Puritan propensity to publish anonymously, any child-rearing advice must be gleaned from diaries and letters that surface from time to time; nothing would have been formally published. Even though primary sources from this era are rare, Puritanism cast a long shadow over later child-rearing advice, whether in reaction to its understanding of the depraved state of nature into which a child is born, or in continuation of its mission of reforming the sinful church and society the Puritans had fled, seeking to create a new world that would be a beacon of light to subsequent generations. But as empirical studies based on scientific observation increased, Puritan ideas receded.

Codification and Evolution of "Common Sense"
The Common Sense Book of Baby and Child Care, Benjamin Spock's popular book, reveals an ambivalent irony running through the history of this genre. There is an appeal to "common sense" or natural parenting, yet the very demand for such books indicates that child-rearing is anything but common.
The explanation of this paradox lies in the parental need for reassurance in the midst of a changing society. Fathers leaving to fight wars, mothers entering the workforce, industrialization, the transformation from a rural to an urban
and then suburban culture, government support for returning soldiers, and the dramatic mobility and fragmentation of multigenerational families that all this implies largely removed mothers from communal forms of support that socialized them into traditional cultural forms of raising their children. The appeal to "common sense" maintained ties to the past while subtly introducing new conventions for a changed situation, giving mothers permission to do things differently. It also reflected greater professionalization and reliance on experts whose writings became popular.

Competing Theorists
The emergence of child psychology at the end of the 19th century and the beginning of the 20th century featured a tension between the Romantic Rousseauian idea of designing education around the innately good nature of the child (represented by the writings of G. Stanley Hall), and the more Lockean idea of training the child to fit into an adult world (represented by the behaviorists John B. Watson and Edward L. Thorndike). One was child centered; the other parent and society centered. The former stressed nurture, following the lead of the baby in establishing sleep, toilet, and feeding patterns. The latter tended to focus on behaviorist habit formation of the same patterns, but revolving instead around the needs of the parent and society to which the child would eventually need to conform. The behaviorists believed that children are best shaped by stimulus and response; once a goal was determined, behaviorism as a method was employed to attain it. The maturation theory of Arnold Gesell was also influential, content to let the developing child emerge through normal developmental processes. The proliferation of competing "scientific" theories of child-rearing and the advice built on them often added to maternal anxiety, as mothers were scolded for being either overly emotionally indulgent or too highly structured.
The psychoanalytic theories of Freud and the divergent theories of his successors added to this eclectic milieu of theories. So did the developmental psychology of Jean Piaget. While Watson's behaviorist Psychological Care of Infant and Child (1928) and Gesell and Ilg's maturational Infant and Child in the Culture of Today (1943) were popular (though eclipsed in sales
by Spock's book), a similar "practical" Freudian child-rearing book did not capture the imagination of the American public.

Themes and Variations
This eclectic mix of theories was good for book sales and for authors who were not ideological or empirical purists. Ambivalent or eclectic authors willing to outline more than one theory of child development to readers looking for affirmation were the most successful. Authors cultivating a religious audience often draw (knowingly or unknowingly) from these philosophies and psychological theories, seeking integration with their sources of religious authority or scriptures. It is likely that new books will continue to be written for new audiences with specialized concerns. It is also likely that such child-rearing literature will repeat past themes and variations.

Douglas Milford
University of Illinois at Chicago

See Also: Child-Rearing Experts; Parenting; Parenting Styles; Primary Documents 1907 and 1922.

Further Readings
Apple, Rima D. Perfect Motherhood: Science and Childrearing in America. New Brunswick, NJ: Rutgers University Press, 2006.
Grant, Julia. Raising Baby by the Book: The Education of American Mothers. New Haven, CT: Yale University Press, 1998.
Hulbert, Ann. Raising America: Experts, Parents, and a Century of Advice About Children. New York: Alfred A. Knopf, 2003.
Mintz, Steven and Susan Kellogg. Domestic Revolutions: A Social History of American Family Life. New York: Free Press, 1988.
Reese, Debbie. "A Parenting Manual, With Words of Advice for Puritan Mothers." In A World of Babies: Imagined Childcare Guides for Seven Societies, Judy DeLoache and Alma Gottlieb, eds. New York: Cambridge University Press, 2000.
Sommerville, C. John. The Discovery of Childhood in Puritan England. Athens: University of Georgia Press, 1992.
Wishy, Bernard. The Child and the Republic: The Dawn of Modern American Child Nurture. Philadelphia: University of Pennsylvania Press, 1968.



Child-Rearing Practices

Child-rearing is the process by which parents care for and support the development of their offspring from birth through maturity. Throughout U.S. history, parents have used a wide variety of methods, or child-rearing practices. Broadly speaking, these practices have varied on several dimensions: (1) the centralization of children within the family, (2) the inculcation of mature behavior, (3) the degree of parental control, (4) the affectionate behavior of parents, and (5) the techniques for guiding behavior and discipline. Each of these dimensions has ranged from one extreme to the other at different points in American history, reflecting concomitant societal beliefs about both childhood and parenting.

Individual dimensions of child-rearing practices can be clustered together into broader dimensions, often referred to as parenting styles. In the 1960s, Diana Baumrind described three broad styles of parenting, each comprising a cluster of the dimensions mentioned above: authoritarian, authoritative, and permissive. For example, authoritative parenting includes a balance of focus between a child's and a parent's needs, high parental control, high amounts of affectionate behavior, and the use of reasoning, rather than physical punishment, to guide behavior. Characteristics of authoritarian parenting include more focus on the parent's needs than the child's, high parental control, low amounts of affectionate behavior, and more use of physical punishment or authority-based discipline techniques. Finally, characteristics of permissive parenting include more focus on the child's needs than the parent's, low parental control, high affectionate behavior, and limited use of techniques for guiding behavior.
While these classifications of parenting styles are useful for thinking about and understanding how various dimensions of child-rearing practices work together, it is important to understand each of the broad dimensions independently, particularly when considering how the prevailing beliefs of a given time or culture shape child-rearing practices.

Child-Centered, Parent-Centered, and Family-Centered Practices
Of the five broad and common child-rearing practices, the degree to which children are central
to family decisions and behavior is the broadest and encompasses many features and facets of family life. Parents with a child-centered focus believe that the children's desires should be central to family decisions. Child-centered parents might make broad decisions about how the children will be reared, based on the belief that the children will indicate when they are ready to advance toward adult-like behavior (e.g., toilet training) or societal expectations. Child-centered parents might also make nuanced decisions based on a child's wants and desires, such as not serving a certain food for dinner if a child does not like it, or making decisions about household organization and décor based on their children's opinions.

Parents who pay little mind to a child's desires and focus predominantly on their own have a parent-centered focus. In parent-centered families, children's desires are generally ignored, or are considered only when they do not interfere or conflict with the parents'. Children reared in parent-centered families tend to learn early on that stating their opinions is unfruitful, and may even lead to disciplinary action by parents. The focus on the parents' wishes often permeates all aspects of family life, from broad decisions about how children should be raised to more day-to-day decisions, including the types of food in the house, meal times, noise levels, and the arrangement of home furnishings and rooms. For example, parent-centered parents may keep a formal living room for entertaining the occasional guest in which children are not allowed to play.

Parents who make decisions based on both their own wants and desires and those of their children have a family-centered focus. For example, family-centered parents may select a meal time that works for everyone, based on both the parents' work schedules and the children's activity schedules.
Most experts agree that the family-centered approach is most likely to provide an environment in which children receive the greatest benefits. In addition to having their needs valued and considered by their parents, children also learn that others may have needs that differ from their own, and that the best decisions are often compromises. At different points in history, American families have shifted along the spectrum from a parent-centered focus to a child-centered focus. In colonial
America, children were considered instruments for obtaining parental wants and desires, mainly as workers who could assist with subsistence survival. By the late 1800s, many parents believed that their duty as parents was to rear their children to be good citizens, and they adopted a slightly more child-centered approach.

Inculcation of Mature Behavior
The desires that parents have for who their children will become and how they will behave as adults are another important factor in child-rearing practices. Based on their view of what mature behavior should look like, parents will inculcate, or influence and teach persistently, those behaviors in their children. For example, parents who want their children to be successful in academics may focus their child-rearing on reading and cognitive growth. These parents may also place high importance on formal education and high academic achievement. To help their children embrace these desires, parents emphasize factors that they believe will help their children reach those goals. Parents who emphasize academic skills will surround their children with books and other learning tools. They will be attentive to the skills and knowledge that their children learn, and intervene if they believe the children are not at the level they should be. Other parents may want their children to join the family business and focus their child-rearing on the practical aspects of the family business (e.g., farming) or other trade skills that can contribute to or expand it. These parents may expose the children to vocational and business skills at an early age. As the children advance their understanding of the business, parents will expose them to more knowledge until they have the foundation to be successful in that field. Some parents may inculcate mature behaviors that are more general and less focused on a specific knowledge set or trade skill, such as marrying or being happy.
The inculcation in these families can be thought of as being more about personality and cultural norms than about specific behaviors. It should be noted that regardless of whether parents have specific or broad goals for their children as adults, the practices they use to inculcate those goals will impact their children's personalities, cultural norms, skills, knowledge base, and a host of other traits.

Parental Strictness and Permissiveness
The amount of control that parents believe they should exercise over their children is another important child-rearing practice, which can be thought about in terms of strictness and permissiveness. Often, strictness and permissiveness are considered opposite ends of a spectrum, with strictness indicating very high control and permissiveness indicating very low control. Parents who are strict believe that their children should obey household rules expressly created to control behavior. Parents who are permissive believe in minimal use of rules and regulations to control behavior. Since colonial America, the tendency for parents to exercise strict control or little control over their children's behavior has waxed and waned. Generally, these shifts in thinking have been marked by the rise of new child-rearing experts. For example, parents in the early 1900s were encouraged toward rigid and strict parenting by behavioral psychologists like John B. Watson, who believed that without such strictness, children would not reach their full potential. By the mid-1900s, the pendulum had swung in the other direction, and many parents embraced the more indulgent child-rearing espoused by Benjamin Spock.

Affectionate Behavior
Parents' level of affectionate behavior toward their children is another characteristic of child-rearing. Affectionate behavior is the degree of warmth and sensitivity parents show to their children. In terms of child-rearing practices, whether parents are warm and sensitive or cold and emotionally distant can be the result of prevailing views of child-rearing in a given culture or society, as well as of what parents believe is right for their children and family. At different points in history, parents have been given opposing messages about the impact of affectionate behavior, particularly the harm that can befall children who receive too much affection.
While cold and emotionally distant parenting can result from parents' dislike of parenting or from their general personality, this type of parenting has also been advocated by experts. Thus, at various times in history, parents have believed that withholding affection was "best" for their children. Advocates of this technique believed that it would toughen up children and prepare them for the harsh realities of adult life. Watson was a proponent of
this technique in the 1920s. He advised parents to avoid kissing and hugging their children; instead, they were to pat children on the head for work well done, and shake their hands in the morning. Watson believed that parents who coddled their children would raise adults who required coddling.

Techniques for Guiding Behavior
Like other child-rearing practices, the techniques for guiding behavior and discipline have followed broad trends in American history. One of the most controversial discipline techniques is the use of physical or corporal punishment. Physical punishment as a means of disciplining children has been both promoted and vilified at different points in history. For example, it was championed in the 17th century as the most efficient means of eradicating negative behavior. By the late 1900s, many people believed that physical punishment created more negative behaviors in children than it eradicated. Other popular techniques for guiding children's behavior include the use of positive reinforcement, fear, love withdrawal, isolation, deprivation of privileges, and reasoning. Positive reinforcements, such as tangible rewards or praise, are used by parents to promote positive and desirable behaviors. Fear is used by parents who are trying to diminish undesirable behaviors by convincing the children that they will be punished or deprived of something for undesirable behavior. While historically religion was a common source of fear, parents may also use an absent parent or a made-up phantasm (e.g., the boogeyman) to induce fear and conformity of behavior. Love withdrawal, or the threat of losing parents' love and affection for behaving contrary to parental wishes, has also been a common practice that parents use for guiding behaviors.
This is often heard in phrases such as "if you love me, you will do what I ask," or "if you don't behave, I will stop talking to you until you do." Isolation and deprivation of privileges are both characterized by taking away freedoms that the children already have. Isolation might include relegating children to their bedrooms until they promise to behave as desired, whereas deprivation of privileges might include taking away toys or snacks, or not allowing the children to visit friends or attend other social events. Finally, reasoning (sometimes referred to as induction) is a method in which parents discuss appropriate and inappropriate
behavior with their children in an attempt to help them understand why certain behaviors are acceptable and others are not. In doing so, parents hope to induce changes in behavior based on discussion and understanding.

Louis Manfra
Christina Squires
University of Missouri

See Also: Childcare; Child-Rearing Experts; Child-Rearing Manuals; Parenting; Parenting Styles.

Further Readings
Block, James E. The Crucible of Consent: American Child Rearing and the Forging of Liberal Society. Cambridge, MA: Harvard University Press, 2012.
Stearns, Peter N. Anxious Parents: A History of Modern Childrearing in America. New York: New York University Press, 2004.
Youcha, Geraldine. Minding the Children: Child Care in America From Colonial Times to the Present. Cambridge, MA: Da Capo Press, 2009.

Children's Aid Society

The Children's Aid Society was founded in 1853 by Methodist minister and philanthropist Charles Loring Brace to help the homeless and impoverished children in New York City, many of whom were orphaned or abandoned. Brace worked in the Methodist mission in New York's Five Points district for two years after graduating from Union Theological Seminary. During this time, he was struck by the large number of children who were living on the streets, many supporting themselves through petty thievery and other crimes, and growing up to swell the ranks of the "dangerous classes" of New York City. Brace solicited financial support from New York philanthropists and the rising business class to found the organization.

The "Street Arabs"
New York's homeless children, who numbered as many as 35,000 in 1854, were known as "Street Arabs." After the great potato famine of the 1840s
in Ireland, which drove hundreds of thousands to immigrate to the United States and New York in particular, many of these children were Irish Catholics. Protestant reformers like Brace were motivated by a dual mission of helping children lead safer, healthier lives, and naturalizing them as Americans by converting them from Catholicism and effacing their ethnicity. Areas of high poverty and crime, like Five Points and "Misery Row" on Tenth Avenue, were viewed as breeding grounds for moral degeneration and disease, and reformers ranging from temperance crusaders to child welfare advocates made these neighborhoods targets for their programs. Brace was critical of existing orphan asylums and almshouses, believing that the rigidity and depersonalized nature of these institutions made children less self-reliant, and that charity encouraged dependence. Brace maintained that a solid family life, paired with hard work and education, was the key to self-reliance. The Children's Aid Society developed low-cost housing, reading rooms, and camps for the benefit of street children. The society also established schools where children could learn a trade and become self-sufficient. In 1864, the society founded an industrial school for boys on East 38th Street. Students received basic literacy skills, a midday meal, and carpentry classes.

Orphan Trains
One of the efforts for which Brace is most remembered is his emigration plan, in which orphaned or abused children were removed from New York by train to be placed with families in rural communities on the Western frontier. Commonly known as Orphan Trains, the program followed Brace's idea that hard work and a stable family life were necessary for the development of citizens. Children were to learn Christian values and develop a strong work ethic, while the host families received help on their farms or in their family businesses. Critics believed that the program was little better than slavery or indentured servitude.
households in the Midwest and West. The children were encouraged, if not compelled, to sever all ties with their lives in New York or their home countries. Thus, critics declared that the goal of the program was not so much child welfare as the erasure of religious and ethnic difference that middle-class Protestant reformers found threatening. Brace never wavered in his conviction that emigration both helped the children and protected New York City. Between 1853 and 1929, more than 150,000 children were relocated from the slums of New York City and placed with families across the country. The Orphan Train Heritage Society in Concordia, Kansas, maintains an archive of the children's stories.

Twentieth Century to the Present
After Brace's death in 1890, the society's emphasis gradually shifted from emigration to what would become the modern system of foster care, with placement in local homes and managed supervision. One of the society's innovations was the development of programs that prescreened foster homes for the children and established follow-up procedures to ensure the welfare of the child. Other pioneering child welfare programs that began with the Children's Aid Society and have since become standard in the United States include Parent-Teacher Associations (1863), "fresh air camps" (1884), day schools for disabled children (1898), special programs for children with mental illnesses (1902), and home help services for families in which the mother is ill or away from the home (1933).

The Children's Aid Society continued to evolve as New York's demographics changed. In the 1950s, it began to see changes resulting from the postwar baby boom, and responded with increased services, including the establishment of community centers in Harlem. The society established its first sex-education programs in the early 1970s, which targeted not only youths, but also parents and staff.
Programs in adolescent sexuality and pregnancy prevention are now cornerstone initiatives of the society. Educational programs, including college preparation courses, computer labs, and partnerships with New York public schools, are predominant in the society's 21st-century initiatives. Partnerships with businesses, such as Intel, and institutions, such as Columbia University's School of Dentistry and
Oral Surgery, enable the society to provide programs that target the educational and health needs of impoverished and at-risk children. After the September 11 terrorist attacks on the World Trade Center and the Pentagon, the Children's Aid Society, with support from the New York Times Foundation, established its World Trade Center Relief Team to provide outreach and long-term support for victims' families and displaced workers. The organization had maintained a four-star rating from Charity Navigator, an independent charity evaluator, for 12 years as of 2012. In fiscal year 2011, the society had a total revenue of $111.5 million, of which $200.5 million came from contributions, gifts, and grants. In 2009, Richard Buery, Jr., an attorney, was appointed by the Children's Aid Society's board of trustees as its 10th CEO. Buery is the first African American to lead the society.

Spencer D. C. Keralis
University of North Texas

See Also: Almshouses; Child Safety; Delinquency; Frontier Families; Immigrant Children; Irish Immigrant Families; Italian Immigrant Families; Orphan Trains; Poverty and Poor Families; Protestants.

Further Readings
Children's Aid Society. "History." http://www.childrensaidsociety.org/about/history (Accessed June 2013).
Myers, John E. B. Child Protection in America: Past, Present, and Future. New York: Oxford University Press, 2006.
O'Connor, Stephen. Orphan Trains: The Story of Charles Loring Brace and the Children He Saved and Failed. Boston: Houghton Mifflin, 2001.

Children's Beauty Pageants

Beauty pageants are a fast-growing industry in the United States. Estimates suggest that 250,000 children compete in nearly 3,000 pageants each year, making pageantry a multibillion-dollar industry. Boys and girls from birth to 18 years old may compete, though the majority of participants are girls, ages
6 months to 16 years. Age divisions are generally broken down into two- to three-year increments, depending on the number of contestants, and named to reflect the age group, such as Petite Miss, Little Miss, or Sweetheart. Children ages 2 and under are accompanied on stage by a parent or other caregiver, whereas all other contestants are expected to participate independently. The children's beauty pageant industry is not regulated, so pageants may take place at a variety of locations (a hotel is most common), and each pageant director may have different expectations and judging criteria related to the appearance, talent, and performance of the contestants.

Beauty pageants, although dating to the mid-19th century, became popular after the first Miss America Pageant, which was held in 1921 in Atlantic City, New Jersey, as a marketing ploy to extend the season past Labor Day. The first teen beauty pageant, Little Miss America, launched in the 1960s in New Jersey. By 1962, the first children's beauty pageant, Our Little Miss, was promoted as a way to help young girls practice their public speaking skills and win scholarships. Children's beauty pageants garnered mass-media attention in 1996, when 6-year-old child beauty pageant queen JonBenét Ramsey was murdered and the industry was thrust into the public spotlight. Public attention has continued with the airing of two popular reality television shows, Toddlers and Tiaras and its spinoff, Here Comes Honey Boo Boo, both of which air on TLC.

Types of Pageants
In "glitz" pageants, the focus is on physical beauty, and contestants are expected to be flawless.
It is not considered unusual for participants to exhibit a spray tan, wear false teeth (commonly called a “flipper”) to cover up missing baby teeth, have a professional hair stylist, wear heavy makeup and false eyelashes, have a professional manicure with acrylic nails, wear colored contact lenses, and wear custom-made outfits (short “cupcake” dresses with multiple layers of tulle and lace are popular for younger girls). The contestants’ behavior is often exaggerated. Girls may bat their eyelashes, dramatically tilt their heads, or blow kisses to the audience; they never speak on stage. “Natural” pageants are vastly
different from glitz pageants, particularly in the area of appearance. Natural pageants focus on inner beauty; thus, contestants are expected to wear minimal makeup (some contestants under 12 are forbidden to wear any makeup), are not allowed to wear hairpieces or have "big hair," are encouraged to wear typical store-bought outfits, have natural facial expressions, and speak or answer interview questions on stage.

Crowning
"Crowning" refers to the ceremony at the end of the pageant, when winners are selected in each age group. Common prizes include a crown and cash; other prizes may include sashes, trophies, flowers, toys, beauty products, and, in some controversial instances, puppies. Generally, there is one "queen" and one "mini-supreme" awarded for every age division. The overall pageant winner earns the title "ultimate grand supreme," and receives the largest crown and cash prize. Participants who have earned the lowest marks in their division may be recognized solely for their participation, and are given the title of "princess." There are also awards for individual categories such as most beautiful, most photogenic, best hair, best smile, best eyes, and best talent.

Cost and Critics
Entering children in beauty pageants may prove costly for parents. While natural pageants may cost around $200, glitz pageants can cost remarkably more; some parents report having spent between $50,000 and $100,000 on pageants. The most notable expenses are entry fees ($0 to $500) and attire and props ($1,000 to $5,000), while coaches, modeling and dance lessons, professional hair and makeup artists, flippers, tanning, and travel expenses also contribute to the cost. Considering that many children compete in several pageants a year and most do not win a cash prize, glitz families can spend upward of $15,000 on pageants annually.
Pageant parents are sometimes criticized by family members, friends, or other parents for entering their children in beauty pageants. Some critics believe that children’s beauty pageants promote the notion that self-worth is measured by physical beauty. Critics also claim that children’s
beauty pageants hypersexualize young girls, especially glitz pageants, with their emphasis on makeup and provocative choreography that make children look older than they are. The "pageant mom phenomenon" has also drawn criticism to the children's beauty pageant industry. Some critics believe that mothers enter their children in beauty pageants as a means of fulfilling their own dreams, forcing their children to compete even when they have expressed disinterest. Some parents may pressure their child to win in such a way that critics question whether participation in pageants benefits the child or the mother.

Supporters
Despite the cynics, pageant parents believe that participation in beauty pageants offers their children many benefits. Some of these benefits are tangible, such as the chance to earn college scholarships, modeling contracts, and cash prizes. Other benefits relate to experience, such as the chance to learn positive life lessons, to always do their very best, and to gracefully accept winning and losing. Many parents who support children's beauty pageants liken pageantry to any other sport that requires practice, coaching, travel, and financial investment, reporting that participation in pageants helps their child build social skills, confidence, discipline, and determination.

Jennifer S. Reinke
University of Wisconsin–Stout

See Also: Commercialization and Advertising Aimed at Children; Gender Roles in Mass Media; Reality Television.

Further Readings
Anderson, Susan. High Glitz: The Extravagant World of Child Beauty Pageants. Brooklyn, NY: PowerHouse Books, 2009.
Cookson, Shari, dir. Living Dolls: The Making of a Child Beauty Queen (Documentary). New York: HBO, 2001.
Lovegrove, Keith. Pageant: The Beauty Contest. New York: TeNeues, 2002.
Merino, Noel, ed. At Issue: Beauty Pageants. Farmington Hills, MI: Greenhaven, 2009.



Children’s Bureau

The U.S. Children’s Bureau is a federal agency in the Department of Health and Human Services. It was one of a number of agencies created by progressive reformers seeking to improve the social welfare of the nation’s disadvantaged populations. It was established on April 9, 1912, during the Taft administration. Today, the agency oversees issues involving child abuse, adoption, and foster care, but at its founding its purview included, in the language of the act establishing it, “all matters pertaining to the welfare of children and child life among all classes of our people, and . . . especially . . . infant mortality, the birth rate, orphanage, juvenile courts, desertion, dangerous occupations, accidents and diseases of children, employment, and legislation affecting children in the several states and territories.” It was the first national government agency in the world devoted to the well-being of children, and was headed by the first woman to head a U.S. government agency, Julia Lathrop, who became known as “America’s first official mother.”

Lathrop was born to a politically prominent family. Her father helped found the Republican Party, her mother worked for women’s suffrage, and both parents were abolitionists. Lathrop had been involved with the Hull House social reformers in Chicago, but her work with the Children’s Bureau was notable for the way it encouraged the political participation of conservative women, who were at ideological odds with the women’s political movements of the time, including suffrage. Advocating for policies that were advantageous for children and motherhood, on the other hand, was something even the most politically conservative woman could support.

The dominant ideology in the Children’s Bureau was maternalism. Maternalist reform was a strain of progressivism that called for public policy initiatives to benefit mothers in need, especially single mothers or the wives of unemployed or disabled men.
Maternalism was part of a larger reform movement that insisted that the government had a responsibility to provide for the basic needs of its citizens when they could not provide for themselves, and that this responsibility was too important to entrust to charity. Maternalism had previously had a major success in the 1908 Supreme Court decision Muller v. Oregon, which upheld the constitutionality of a
law limiting the working hours of women, but the creation of the Children’s Bureau was maternalism’s true and lasting success. The bureau was mainly staffed by women, and under Lathrop’s tenure (which lasted until 1921), it focused on lobbying for better child labor laws, directing research into infant and maternal mortality, providing for mothers’ pensions, and addressing the problem of juvenile delinquency. Lathrop brought to the agency a scientific approach that depended heavily on research for finding solutions to social problems.

However, the bureau was often at odds with other factions of the progressive movement and women’s rights groups. Although the bureau was largely staffed by women, most of the staff opposed women working outside the home, especially if they were mothers. The bureau’s early years have also been criticized for their approach to race: the disproportionately high mortality rate of nonwhite babies went unaddressed, while assimilation and Americanization efforts aimed at immigrant families received strong support.

Numerous families wrote to the bureau asking for child-rearing advice—400,000 letters a year, at the peak—and the bureau was happy to respond. While most parents today accept that there is no one right way to raise a child and would object if a federal agency were to tell them how to raise their children, dispensing such advice was a large part of the bureau’s activity in the early 20th century. In addition to answering letters, bureau employees disseminated pamphlets, which were invaluable in an era when many women ended their formal schooling early and public education had little to say about, for instance, maternal and reproductive health, childhood disease, or approaches to discipline. The bureau’s work dovetailed nicely with the burgeoning home economics movement, which called for a scientific approach to the classroom teaching of homemaking.
During the Great Depression, the Children’s Bureau collected monthly reports from 7,000 public and private agencies providing relief to families in order to monitor the flow of relief and identify populations that were most in need. Many of the reforms that the Children’s Bureau advocated were not adopted until the 1930s, when the New Deal changed so much about government and its role in American society. The 1935 Social Security Act gave the bureau responsibility for overseeing maternal
and child health services, child welfare services, and medical care for disabled children. Shortly thereafter, the landmark Fair Labor Standards Act was passed, establishing minimum ages for general labor (age 16) and dangerous labor (age 18).

During World War II, few in the bureau could continue to argue against women in the workplace. Beginning in 1943, the bureau established policies for the Emergency Maternal and Infant Care program (administered by the states), which provided medical and nursing care for the wives and children of men in the military. After the war, during a reorganization of the federal government, the bureau was relocated to the Social Security Administration, and was no longer tasked with lobbying for the needs of all children and mothers, but only for specific at-risk groups.

In the 1950s, the bureau guided and funded the professional development of child welfare workers and social workers throughout the country. In the same era, it was moved again, to the Department of Health, Education, and Welfare (now the Department of Health and Human Services). Over time, its role has diminished as other state and federal agencies have assumed many of its responsibilities.

Today, the Children’s Bureau helps fund essential services to children through state and tribal agencies, monitors state and tribal welfare services, funds and shares research on child welfare, and advocates for child welfare in legislative and regulatory matters. It supports National Foster Care Month, National Adoption Month, National Child Abuse Prevention Month, and National Children’s Mental Health Awareness Day, and helps produce public service announcements with the Ad Council.

Bill Kte’pi
Independent Scholar

See Also: Child Abuse; Child Advocate; Child-Rearing Experts.

Further Readings
Briar-Lawson, Katharine, Mary McCarthy, and Nancy Dickinson. The Children’s Bureau: Shaping a Century of Child Welfare Practices, Programs, and Policies. Washington, DC: NASW Press, 2013.
Coontz, Stephanie. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books, 1993.

Gordon, Linda. Pitied but Not Entitled: Single Mothers and the History of Welfare, 1890–1935. New York: Free Press, 1994. Katz, Michael B. In The Shadow of the Poorhouse: A Social History of Welfare in America. New York: Basic Books, 1996. Ladd-Taylor, Molly. Mother-Work: Women, Child Welfare, and the State, 1890–1930. Chicago: University of Illinois Press, 1995.

Children’s Defense Fund

The Children’s Defense Fund (CDF) is a national nonprofit advocacy and policy organization dedicated to assisting underserved children, youth, and families. Supported annually by $14 million in contributions and grants, the CDF issues reports, manuals, and guides on the socioeconomic and political well-being of children. It also provides public testimony, lectures, and news pieces on children and youth-related concerns. The CDF’s mission is to ensure that all children, regardless of ethnicity, gender, race, religion, sexual orientation, or socioeconomic status, have a healthy transition from childhood to adulthood. The CDF works on policies and programs covering child and youth abuse, education, health, homelessness, hunger, leadership, morality, neglect, poverty, and welfare. The CDF’s headquarters are in Washington, D.C., and state-based offices are located in California, Louisiana, Minnesota, Mississippi, New York, Ohio, South Carolina, and Texas.

CDF Founder and President
Born into the segregated South in 1939, Marian Wright Edelman developed an early interest in civil rights and collective community engagement. She graduated from Spelman College, a historically black women’s college, in 1960, and Yale University Law School in 1963. After becoming the first black woman admitted to the Mississippi Bar in 1964, Edelman registered black voters in the Mississippi Delta, and directed the National Association for the Advancement of Colored People’s (NAACP) Legal Defense and Education Fund in Jackson, Mississippi.



An activist and prolific writer, Edelman worked on a number of state and federal civil rights projects, including Head Start and the NAACP’s Legal Defense and Education Fund in New York City. In 1968, she partnered with Robert F. Kennedy, Martin Luther King, Jr., and the Southern Christian Leadership Conference (SCLC) to found the Poor People’s Campaign, which focused on the plight of impoverished families. Despite the assassinations of King and Kennedy in that same year, 7,000 campaigners marched on Washington in 1969 to call for an economic bill of rights.

CDF History
In 1968, Edelman founded the Washington Research Project (WRP), the nonprofit parent body of the CDF. The WRP aimed to raise national awareness about poverty and hunger, as well as to monitor federal programs like Title I of the Elementary and Secondary Education Act (ESEA) of 1965, which was enacted to assist low-income families. Additionally, the WRP assisted civil rights–based organizations in efforts to defeat segregationist government nominees.

Edelman founded the CDF in 1973 as a nonprofit, nonpartisan organization dedicated to the well-being and rights of all children. As the organization’s founder and president, a position she retains as of 2013, Edelman has focused the broad mission of the CDF on raising awareness and shaping public debate toward the needs of children and families. Influenced by President Johnson’s War on Poverty policies, she also engineered the CDF’s proactive institutional engagement with underserved minority, poor, and disabled children.

Policy Landmarks
The CDF spends approximately $20 million annually on policy efforts to maintain and expand federal nutrition and welfare programs such as Women, Infants, and Children (WIC) and Head Start. In the 1970s, the CDF focused on early childhood education, student disabilities, and the U.S. juvenile justice system.
For example, the CDF’s legal work helped secure separation of children and adults in South Carolina jails, passage of the Education for All Handicapped Children Act (1975), and increased funding for Head Start in 1977. In the 1980s and 1990s, the CDF tackled child welfare and health policy. For example, the organization assisted in the passage of the Adoption
Assistance and Child Welfare Act (1980). The first annual Children’s Defense Budget Report (1981) proved influential, and helped defeat the Reagan administration’s attempt to defund ESEA’s Title I, foster care, and Medicaid. Between 1986 and 1992, Hillary Rodham Clinton served as the chair of the CDF’s Board of Directors, and was successful in increasing funding for children and youth social programs. During President Bill Clinton’s administration, the CDF also mounted a campaign to defeat the Republican legislative Contract with America, which sought large cuts to social programming for children and minority families.

Programs and Campaigns
The CDF embarked on a number of national programs and campaigns during the 1990s and 2000s. Concerned with illiteracy and incarceration rates in African American communities, in 1990 the organization co-convened the Black Community Crusade for Children (BCCC) with activist scholars John Hope Franklin and Dorothy Height. The BCCC currently trains youth leaders to work in majority African American counties across the South. After launching its 14-state Adolescent Pregnancy Prevention Program, the CDF established Freedom Schools in 1993. Freedom Schools provide after-school and summer enrichment, nutrition, reading, self-esteem, and social action programming. Through Freedom Schools, the CDF serves approximately 11,500 students across 25 states.

The CDF also purchased the Haley Farm in Clinton, Tennessee, in 1994. Once owned by the Pulitzer Prize–winning writer Alex Haley, the farm now hosts policy roundtables, youth and college-age leadership events, and literature festivals. In 2001, the organization launched a five-year campaign, Leave No Child Behind, which called for comprehensive federal legislation on health, safety, and education.
In 2004, the CDF also started the grassroots campaigns Wednesdays in Washington, Wednesdays at Home, and Children Can’t Vote, You Can, in an attempt to hold elected officials accountable through phone banks and voter registration. More current CDF campaigns include the Cradle to Prison Pipeline, focused on juvenile incarceration rates and corresponding impacts on families, and
Protect Children, Not Guns, aimed at raising gun violence awareness and passing comprehensive federal gun legislation.

Melinda A. Lemke
University of Texas at Austin

See Also: Child Abuse; Child Advocate; Civil Rights Movement; Head Start; Poverty and Poor Families; War on Poverty; Working Class Families/Working Poor.

Further Readings
Children’s Defense Fund. Research library. http://www.childrensdefense.org/child-research-data-publications (Accessed December 2013).
Edelman, M. W. Lanterns: A Memoir of Mentors. New York: First Perennial, 2000.
Iceland, J. Poverty in America: A Handbook, 2nd ed. Berkeley: University of California Press, 2006.

Children’s Online Privacy Protection Act

The Children’s Online Privacy Protection Act (COPPA) first became U.S. federal law in 1998. Under the law, the Federal Trade Commission (FTC), which is responsible for consumer protection, enforces regulations of online privacy for children under the age of 13. COPPA’s purpose is to help safeguard private information that children may divulge online by placing control with parents and legal guardians, and requiring their consent before such information is collected.

Web site operators with sites or services that are directed at children (including smartphone apps) that use or disclose personal information from children under 13 years of age are legally required to adhere to the following set of rules provided by the FTC:

• Operators must give direct notice to parents and obtain verifiable consent before collection, use, or disclosure of a child’s information.
• Operators must provide a privacy policy that explains the purposes for personal information collection, how the operator uses the information, and the disclosure practices.
• Operators are not to disclose information to third parties, unless that is explained to parents, and it is integral to the purpose of the operators’ Web site or service. Reasonable steps must be taken to ensure that other parties are capable of maintaining confidentiality and security.
• Upon request, operators must provide descriptions of specific information collected from the child, the opportunity to refuse the operators’ future use or collection of personal information, and the means for parents to obtain personal information that has been collected from that child (within reasonable means) for the purpose of review and/or deletion.
• Operators must maintain and protect the confidentiality, security, and integrity of personal information, but only until the purpose of its collection is met; then it must be deleted to prevent unauthorized use.

COPPA also provides instances in which consent is not required. These exceptions include one-time direct responses in which recontact is not established by the operator and private information is not maintained (e.g., a single response to an inquiry). If operators are contracted through schools, consent is also not required, but the operator will have to obtain consent if any information is used for commercial purposes.

Safe harbor programs are available to operators that provide the FTC with details of how private information will be collected through their sites or services. By submitting self-regulatory guidelines to the FTC for approval, as well as meeting other requirements, industry companies or other operators may be given commission approval if the guidelines are considered compliant. Companies such as Privo Inc. or TRUSTe that have received approval may have their liability reduced or removed as long as the operator follows the self-regulatory guidelines.
Terminology
COPPA originally defined “operator” as any individual who maintains a Web site or any online
service, and who collects personal information. As of July 1, 2013, COPPA had expanded its definition to account for plug-ins and other third-party networks that collect private information. Plug-ins or advertising networks must also obtain prior parental consent when given actual knowledge of personal information that is collected by child-oriented Web sites or services. Operators should be aware of the security and confidentiality practices of third parties or other services that may be collecting information from them. COPPA defines “child” as protected individuals under 13 years of age. COPPA has also defined “personal information” as any identifiable information about individuals that is collected online. This includes first and last name, physical address, e-mail address, telephone number, social security number, or other identifiers that allow for contacting an individual.

The Amended Rule
Since COPPA was first signed into federal law, revisions have been necessary. Such revisions have been implemented to account for dynamic advancements in technology. COPPA now includes geolocation as personal information that requires obtaining parental consent. When COPPA was first enacted, GPS services were not available on the majority of phones or other devices. Online technology has also improved to handle a vast data stream of photos, videos, and audio recordings. For sites or services collecting such content involving a child under 13 years of age, parental consent must first be obtained. Originally, a screen or user name was required to reveal an individual’s e-mail address to be considered personal information. With the amended rule, however, any screen or user name is considered personal information if associated with identifiers that serve as contact information. Furthermore, operators that use identifiers to recognize individuals across multiple Web sites or services over time are now required to first obtain parental consent.

Criticisms
COPPA’s revisions are not without critics. For example, if operators do not have knowledge that an application is failing to comply, the operator is not responsible for the shared private information. The issue is that COPPA was originally created to prevent private information of underage children from being shared; however, if third parties are illegally using private information, unbeknownst to an operator, regulations may not be upheld, and children’s information or safety may be at risk. There is also concern for operators because they are forced to stay updated with new technology that is driven by the user, which can be difficult. One major concern about the unintended consequences of COPPA is that children and parents alike may choose to ignore the restrictions.

Timothy Phoenix Oblad
Elizabeth Trejos-Castillo
Texas Tech University

See Also: Child Safety; Children’s Television Act; Commercialization and Advertising Aimed at Children.

Further Readings
Boyd, D., E. Hargittai, J. Schultz, and J. Palfrey. “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the Children’s Online Privacy Protection Act.” First Monday, v.16 (2011).
Children’s Online Privacy Protection Act (COPPA). “Complying With COPPA: Frequently Asked Questions.” http://www.business.ftc.gov/documents/Complying-with-COPPA-Frequently-Asked-Questions (Accessed December 2013).
Federal Trade Commission. Children’s Online Privacy Protection Act of 1998. http://www.ftc.gov/ogc/coppa1.htm (Accessed December 2013).

Children’s Rights Movement

The concept of children’s rights simply asserts that children have the right to grow up in safe environments. Most Americans assume that parents altruistically undertake their child-rearing duties and act with the best interests of their child in mind, and the court systems uphold this notion. Rights and benefits of parents are legally derived from their duties under common law practice. However, historically, there has also been an

224

Children’s Rights Movement

assumption that children belong to their parents. When parents fail to maintain their children’s well-being or cause them severe harm, the courts have delineated that children have rights as autonomous individuals. Under the doctrine of parens patriae, the state is considered the ultimate parent to every child, and is authorized to intervene on behalf of dependent children. The earliest legal discourse on American children’s rights dates back to a “stubborn child” law, set by the colonists of Massachusetts Bay in 1641, which allowed children to defend themselves if parents were abusive. Children have since become increasingly “double” dependent upon parents and the state for education and social services. Children today are guaranteed access to education regardless of gender, race, disability, national origin, and religion.

During most of the 19th century, middle-class institutions were built to reform and educate lower-class children. Children of working parents during this time were perceived as at risk for neglect, but children also had fundamental civil rights violated in homes, schools, and other public institutions. In Fletcher v. People (1869), a blind boy, Samuel Fletcher, Jr., was confined to his parents’ cellar for several days. After escaping, he notified officials, and his parents were arrested for child endangerment and neglect. The court ruled that children have the right to be protected by law against abuse and cruelty. The New York Society for the Prevention of Cruelty to Children was established in 1874, after an adopted 8-year-old named Mary Ellen Wilson was regularly beaten and was not allowed to leave home, except to go into the backyard at night.

Children do not typically own property, which makes them physically and emotionally dependent upon adult caretakers.
Before labor laws were enacted during the 1930s, children were routinely exploited in work situations. Poor laws in colonial North America mandated that children from poor families be routinely indentured into service, where they received room, board, and clothes from a master in exchange for their labor. The first child labor laws were enacted in Massachusetts during the 1830s. They required minimal schooling for children under 15 years of age who worked in factories, and

child labor was limited to 10 hours a day in 1842. During the American Industrial Revolution, children regularly worked in mines, glass factories, textile mills, canneries, and on farms. By the end of the 19th century, boys in urban areas worked as messengers, newsboys, bootblacks, and peddlers. Samuel Gompers, founder of the American Federation of Labor, led the movement to end child labor in cigar-making factories, which specifically targeted tenements in New York City in 1883.

Progressive Era
Children’s rights became part of the fight for women’s suffrage and rights during the Progressive Era (1890–1920), when scientific progress led to the academic study of children and child development, which provided reformers new tools for transforming society. President Theodore Roosevelt launched the first White House Conference on Children and Youth in 1909, which spotlighted issues related to the institutionalization of neglected and dependent children. The same year, popular American satirist Marietta Holley wrote Samantha on Children’s Rights, a work of humorous fiction that focused on child abuse and neglect in a style similar to that of Mark Twain.

The Children’s Year (1918–1919) was an initiative of the U.S. War Department that came out of U.S. entry into World War I, when the military draft had detected a high rate of physical defects that could have been prevented in childhood. One mandate was to promote birth registration. Adelaide Brown, the doctor who led California’s Children’s Year, said “the value of the recorded birth certificate has not been realized as a possession of the child—a child’s right—but the draft, school attendance, working privileges, and Americanization all emphasize the value.” The Sheppard-Towner Act (1921) financed children’s health initiatives with federal funding. This resulted in funding for nurses to make teaching visits to homes within 24 hours of a mother leaving the hospital.
This visit was referred to as “house breaking” because the nurse prepared the kitchen, bathroom, and bedroom for the infant’s care.

Migrant children of farmworkers often worked in the fields, were exposed to hazardous pesticides during their early childhood years, and received few or no wages for their labor. Labor activist Dolores Huerta was a pioneering grassroots organizer in
California’s San Joaquin Valley, who witnessed the plight of migrant families in which both children and parents worked the fields together to subsist; she taught children who regularly attended school hungry and without basic necessities. Huerta cofounded the National Farm Workers Association with Cesar Chavez, and lobbied the legislature to extend Aid to Families with Dependent Children to California farm workers in 1963.

Enforcement and Services of Children’s Rights
The notion of children’s rights remains somewhat abstract because rights without enforcement or services are meaningless. During the 1970s, several groups began to lobby for children as a specific interest or minority group in order to extend adult rights to support their needs. Marian Wright Edelman, as the founder and director of the Children’s Defense Fund, identified issues related to children such as the right to privacy, educational equity, the labeling and treatment of those with special needs, preventing children from being used as human subjects for medical experimentation, juvenile justice reform, and providing affordable quality day care.

Children who have been taken out of homes where parents have been abusive or negligent are placed into institutional settings where they should be guaranteed safety. The Juvenile Justice and Delinquency Prevention Act (1974) placed restrictions on public facilities, but abuse and neglect continued to be compounded in institutional settings (often through isolation and medication) when juveniles were treated as problem children, instead of as abused children.

Disability activism has more recently focused efforts on addressing children’s rights to deinstitutionalization, community-based living, and access to public education. The Advocates for Retarded Citizens (the ARC), established by parents of children with intellectual and developmental disabilities in 1950, has worked to change perceptions of individuals with these challenges, and advocates for educational opportunities and access to day care, preschools, and eventually the right to work. The Individuals with Disabilities Education Act (IDEA), originally enacted in 1975, guaranteed American children with disabilities from birth to adulthood the right to attend public schools to increase their literacy, and for the first time defined how public agencies addressed early intervention, special education, and other services for children with disabilities.

Meredith Eliassen
San Francisco State University

See Also: Child Abuse; Child Labor; Children’s Aid Society; Disability (Children); Discipline; Domestic Violence; Foster Care.

Further Readings
Battered Child Syndrome: Investigating Physical Abuse and Homicide. Washington, DC: U.S. Department of Justice, Office of Juvenile Justice and Delinquency Prevention, 2002.
Gross, Beatrice and Ronald Gross. The Children’s Rights Movement: Overcoming the Oppression of Young People. Garden City, NY: Anchor Books, 1977.
Hawes, Joseph M. The Children’s Rights Movement: A History of Advocacy and Protection. Boston: Twayne Publishers, 1991.

Children’s Television Act

In the early 1950s, the first television programs for children were introduced into network schedules, and by the mid-1960s, children had the option of watching up to four hours of cartoons every Saturday morning. Advertisers quickly took advantage of this new market, finding young viewers a captive audience for commercials promoting toys and breakfast cereal. Over the years, advocacy organizations such as Action for Children’s Television monitored this changing media environment, and lobbied television producers, networks, and eventually Congress to limit advertising aimed at children. Advertisers, broadcasters, and both the Reagan and first Bush administrations opposed these regulations, arguing that they infringed on broadcasters’ right to free speech. After years of debate, Congress enacted the Children’s Television Act (CTA) in 1990. The law sets standards for educational and informational shows and restricts commercial content in programming aimed at children. In 1991, the Federal Communications Commission (FCC) ratified specific rules for broadcasters to be in compliance with the CTA.


Commercial television broadcasters must have at least three hours per week of programming that serves the educational needs of children. Commercials during those programs must be limited to 10.5 minutes per hour or less on weekends and 12 minutes per hour or less during the week. Broadcasters must file reports with the FCC on a regular basis, describing their compliance efforts.

In the years following the enactment of the CTA, scholars analyzing broadcasters’ compliance efforts found inadequate adherence to the regulations. Some broadcasters scheduled the required educational programming in the middle of the night, when children were not likely to be awake, and other broadcasters cited programs like The Jetsons and The Flintstones as educational because they taught about the future and the past, respectively. In 1993, the FCC initiated proceedings to review the effectiveness of the CTA, and over 20,000 letters and e-mails were received from the public as a part of the process.

In 1996, with input from the public, educators, child advocates, and the National Association of Broadcasters, Congress updated the CTA with the Children’s Television Report and Order. This provides broadcasters with more specific guidelines as to what qualifies as educational programming for children, or “core” programming. The regulations specify that core programs must be specifically designed to meet the educational and informational needs of children aged 16 and under, and that education must be the programs’ significant purpose. In addition, these programs must be scheduled on a weekly basis, be at least a half-hour long, air between 7:00 a.m. and 10:00 p.m., and be identified as core programming at the beginning of the program, usually with the symbol “E/I.” The broadcaster must also provide program guides (e.g., in newspapers or TV Guide) with information identifying core programs and the age group for which each program is intended.
Compliance reports must be filed with the FCC on a quarterly basis, and made available to the public. Stations that do not comply with the filing requirements face the possibility of heavy forfeitures (fines) and revocation of their Class A license status. Some broadcasters have been fined up to $10,000 for not making their reports available to the public. In terms of verification, the 1996 order states that the “Commission will ordinarily rely on the good faith judgments of broadcasters” as to

whether programming satisfies the core programming requirements, and will “evaluate compliance of individual programs . . . only as a last resort.” Since the law was enacted, evaluation of program compliance has typically taken place in response to petitions filed by outside individuals or organizations. One of the largest fines ever paid to the FCC was the result of a petition filed by the United Church of Christ, accusing Univision Communications of misclassifying entertainment programming as core programming. Univision reported Complices al Rescate—a telenovela about 11-year-old identical twin girls who swap identities—as part of its core programming. However, the program contained adult themes, complex subplots, and adjacent commercials that were not aimed at children. The FCC investigated and found violations at 24 Univision stations, and Univision paid a fine of $24 million.
In 2004, the FCC again updated its rules under the CTA, this time in response to the proliferation of digital television broadcasters and their increased use of the Internet as a tool to market to children. The FCC increased the core programming benchmark for digital broadcasters that multicast so that it is proportional to the increase in free video programming offered by the broadcaster on multicast channels. The commission also modified the FCC reporting form to include a section for broadcasters to report core programming on digital program streams. In 2006, the FCC updated its rules again in order to delineate specific requirements for showing Web site addresses, both during children’s programming and in the adjacent commercials. Any Web site address shown on television must contain a substantial amount of noncommercial content that is separate from any commercial content on the site. The Web site cannot be used for e-commerce or advertising. Television stations violating these rules have been subjected to forfeitures of up to $70,000.
In some cases (e.g., a Pokemon program and a Yu-Gi-Oh program), the television show was found to be a “program length commercial,” and thus grossly over the per-hour limit on commercial material aimed at children. The FCC considers a show a program-length commercial when an advertisement in the same time slot mentions a character or product from the show. The FCC takes the stance that children may not be able to distinguish between the show and the advertising, and typically frowns upon “host-selling,” or




the use of show hosts or characters to sell products during the same time slot as their program.
Since its inception in 1990, the interpretation, enforcement, and effectiveness of the Children’s Television Act have been continually discussed and challenged by the public, educators, advocates, scholars, and government officials. As the media landscape for children continues to change, the debate about how best to manage television for children will continue.

Kimberly Eberhardt Casteline
Fordham University

See Also: Commercialization and Advertising Aimed at Children; Marketing to and Data Collection on Families/Children; Parental Controls; Primary Documents 1990 and 2006; Television 1980s; Television 1990s; Television 2000s; Television for Children.

Further Readings
Children’s Television Act of 1990. Pub. L. No. 101-437, 104 Stat. 996–1000, codified at 47 U.S.C. Sections 303a, 303b, 394.
Federal Communications Commission. “Three Year Review of the Implementation of the Children’s Television Rules and Guidelines: 1997–1999.” Mass Media Bureau, Policy and Rules Division, 2001.
Jordan, Amy B. “Is the Three-Hour Rule Living Up to Its Potential? An Analysis of Educational Television for Children in the 1999/2000 Broadcast Season.” Annenberg Public Policy Center of the University of Pennsylvania, 2000.

Chinese Immigrant Families

The foreign-born population of the United States increased from 9 million in 1970 to more than 40 million in 2010, according to the U.S. Census Bureau. Among foreign-born immigrants, Asian Pacific Americans comprise the fastest-growing segment: their numbers increased by more than 50 percent over a period in which the total U.S. population grew by 13 percent, making them the fastest-growing racial group in the country. The rapid growth in the Asian Pacific


American population stems from the large number of Asian immigrants, particularly those who identify as Chinese, entering the United States. Of the 40 million individuals in the United States as of 2010 who were foreign born, 1.8 million (4.5 percent) identified as Chinese, representing the largest group of immigrants apart from those from Mexico. Additionally, unlike previous generations of immigrants to the United States, today more immigrants arrive as family units, with children making up a significant proportion of the total number.

Waves of Chinese Immigration
The historical migration of persons of Chinese ancestry highlights the differing paths that Chinese immigrants have taken on their way to the United States. Chinese migration began in the mid-19th century, and has been shaped by national immigration and legal policies, politics, and economic upheavals. Because of shifts in U.S. societal attitudes toward immigration, politics, and economics, the circumstances surrounding Chinese immigration have varied tremendously across eras. Generally, there have been five waves of Chinese migration to the United States. The first large-scale wave occurred from 1850 to 1919, after gold was discovered in California and extraordinary amounts of manual labor were needed to construct a network of railroads across the country. Many of these Chinese immigrants were peasant farmers from villages in China who hoped to become wealthy through the good fortune of “Gold Mountain.” Many also left China to escape difficult economic conditions and the upheaval of the Opium Wars with Britain. However, because of growing racism and xenophobia directed at this first wave of Chinese immigrants, Congress passed the Chinese Exclusion Act of 1882. This barred Chinese laborers and their family members, including their wives, from immigrating to the United States.
This separated many families, as wives and daughters could not join their husbands, fathers, and sons after the men had immigrated to the United States. This disruption kept many families apart for years, and delayed the growth and maturity of the second generation. The second wave of Chinese immigration began during the exclusion era. The Immigration Act of 1924


essentially prohibited immigration of persons of Asian descent to the United States; in 1930, this law was relaxed to allow the immigration of wives of Chinese merchants residing in the United States and of Chinese women who had married U.S. citizens before 1924. During this time, large family units began to form, as immigrant husbands and wives could now raise a generation of American-born Chinese children, creating intact Chinese immigrant families in the United States. In addition, many of the Chinese who had immigrated during the first wave as manual laborers for the railroads and mines started small businesses, such as laundries, fishing operations, and produce businesses. Under these circumstances, the Chinese immigrant family could function as a productive enterprise. These enterprises were influenced by collectivist values and a division of labor by age and gender.
The third wave of Chinese immigration took place from 1943 to 1964, when many more wives reunited with their husbands after years or decades of separation. Because of the long separations, the bonds between mothers and children were often stronger than those between husbands and wives. Also, changes in immigration policies allowed Chinese men to return to Hong Kong to find wives and bring them back to the United States. Often, these marriages were arranged through relatives or matchmakers, and many of the wives were 10 to 20 years younger than their new husbands. The fourth wave of immigration was prompted by the Immigration Act of 1965, which allowed a large influx of Chinese immigrants into the United States; unlike previous waves, these immigrants arrived as intact families. Most of the immigrant adults in this wave found labor-intensive, low-paying employment, such as work in restaurants and the garment industry, with long hours that left little time to spend with their families. The priority of this generation of immigrants was economic survival.
The fifth wave of immigration began in 1978, and continues to the present. This wave consists of several subgroups of Chinese immigrants. Because of the reestablishment of diplomatic ties between the United States and the People’s Republic of China in 1978, many students and professionals came to study or work in the United States, and decided to stay permanently and start families. Another group of Chinese immigrants came from Hong Kong, prompted by the uncertainty associated with the transfer of the

region’s sovereignty from Great Britain to China in 1997. Yet another group of immigrants came from Taiwan, seeking refuge from the political climate and a better education for their children. Furthermore, ethnic Chinese living in southeast Asia, including Vietnam, Cambodia, and Laos, immigrated because of wars, genocide, and atrocities occurring in those countries. A more recent group of Chinese immigrant families are what are known as “astronaut” or “parachute” families. These families maintain households in both the United States and their home countries for reasons such as employment and educational opportunities. In one particular type of astronaut family, the child lives in the United States alone or with siblings, while the parents continue to live in the home country after they receive their green cards.
Because of these many waves of immigration and the long period they span, there is no typical Chinese immigrant family. Chinese immigrants come from many parts of Asia, have diverse historical backgrounds and traditions, and vary in their socioeconomic and political backgrounds and in the circumstances in which they find themselves in the United States. Depending on the immigrant family’s exposure to Western culture, the family may undergo a process of acculturation once it arrives. Acculturation is defined as the cultural and psychological changes that take place in the individual as a result of two cultures meeting. This can include negotiating cultural differences in language, attitudes, values, beliefs, customs, and behaviors. Recent research examines acculturation within a bidimensional framework, measuring two orientations in individuals and families: maintenance of the immigrant’s culture of origin (in this case, Chinese culture) and acceptance of the host culture (the United States).
In general, research has shown that those who maintain elements of their culture of origin while adopting elements of the host culture tend to show better adjustment, whereas those who do not maintain ties to their culture of origin or who do not adopt elements of the host culture tend to show the worst health indices.

Acculturation
Acculturated families can be divided into several categories. The acculturation categories relevant to Chinese immigrant families include traditional



[Photo caption:] This Chinatown neighborhood in Chicago is the second-oldest settlement of Chinese in America. It was settled after the Chinese fled persecution in California after 1869.

families, culture-conflict families, bicultural families, and highly acculturated families. Traditional families are those in which many family members were born and raised in an Asian country and have limited contact with U.S. culture once they immigrate. Generally, these families include those who recently immigrated to the United States and have very limited exposure to U.S. society; families who live in ethnic communities (e.g., Chinatown); immigrants or refugees who were older adults at the time of immigration; and those from agricultural backgrounds. These families tend to maintain their culture of origin’s beliefs and values, practice traditional customs, and do not adopt or even accept Western culture. Culture-conflict families are those in which different


acculturation levels cause conflicts between family members. Typically, the conflict is between the older generation, who more commonly adhere to traditional perspectives, and younger family members, who often accept aspects of the host culture to the dismay of their elders. Conflicts can manifest in terms of values, behaviors, gender roles and expectations, dating, college major and career choices, religion, philosophy, and politics. Bicultural families are those with parents who became acculturated through exposure to the host culture prior to immigration. These families tend to be bilingual and bicultural, which makes adjustment to the host culture much easier because family members are familiar with both Eastern and Western cultures. Highly acculturated families are those that have adopted the host culture’s belief system and values, tend to speak the host culture’s language, and do not retain, or may even reject, their culture of origin.
The acculturation process may prove stressful at a social, emotional, and economic level for individuals and families. This acculturative stress is a typical part of adjustment. The conflicts in values, attitudes, and behaviors between the two cultures can produce a negative response in the individual or family, which may lead to declines in physical, psychological, and social health.
Moderating factors that influence the stress accompanying the acculturation process include the nature of the larger society (e.g., welcoming, hostile, or indifferent); the type of acculturating group (e.g., temporary or sojourner, immigrant, refugee, or indigenous); the mode of acculturation (e.g., separated, assimilated, marginalized, or integrated); the demographic and social characteristics of the individual and family (e.g., age, education level, employment, and language proficiency); and the psychological characteristics of the individual (e.g., holding characteristics and values similar to those of U.S. culture). A 2004 study by Xie, Xia, and Zhou on Chinese immigrant families focused on their strengths and challenges, particularly with acculturative stress. The strengths reported by the participants included family support leading to a renewed sense of family; contextual support from friends and community; communication among family members; spiritual well-being; and the balancing of U.S. and Chinese cultures. Challenges related to acculturative stress included language barriers, loneliness,


and the loss of social status and identity at the early stage of immigration.

Recent Changes in Chinese American Families
Despite the lack of a quintessential Chinese immigrant family, some changes have been noted that highlight aspects of Chinese immigrant families. First, families appear to be moving toward the nuclear family structure; as a result, functional relations matter more than the actual household structure. Second, in terms of decision making, a shift is taking place from a traditional patriarchal family, in which the head male figure makes most decisions, to a more equitable system that gives the wife input into family decisions. Third, the primary importance placed on the parent–child dyad, particularly mother–child, is decreasing, whereas the husband–wife relationship is increasing in significance. Furthermore, favoritism toward sons, rooted in the tradition that sons remain with the family while daughters become members of their husbands’ families, is decreasing. Sons and daughters now attain similar levels of importance, care, and concern from their parents, as daughters attain higher education and career positions, continue relationships with their families of origin throughout their lives, and help care for their parents as they age. Next, marriages based on love and romance are more the norm in Chinese immigrant families than arranged marriages. In addition, children tend to leave home prior to marriage to live independently, so multiple-generation households are less common, and parents now experience the “empty-nest syndrome.” In terms of child-rearing, children’s academic and career achievements are used as a measure of success. Last, the family’s financial state does not rest solely on the father, but is shared among other family members. Together, these changes are shaping a different type of family, and Chinese immigrant families are transforming into a new family structure.
Debra M. Kawahara
Alliant International University

See Also: “Anchor Babies”; Asian American Families; Immigrant Children; Immigrant Families; Immigration Policy.

Further Readings
Berry, J. W. “The Acculturation Process and Refugee Behavior.” In Refugee Mental Health in Resettlement Countries, C. L. Williams and J. Westermeyer, eds. Washington, DC: Hemisphere, 1986.
Berry, J. W. “Immigration, Acculturation, and Adaptation.” Applied Psychology: An International Review, v.46 (1997).
Berry, J. W., U. Kim, T. Minde, and D. Mok. “Comparative Studies of Acculturative Stress.” International Migration Review, v.21 (1987).
Berry, J. W. and D. Sam. “Acculturation and Adaptation.” In Handbook of Cross-Cultural Psychology: Vol. 3, Social Behavior and Applications, J. W. Berry, M. H. Segall, and C. Kagitcibasi, eds. Boston: Allyn & Bacon, 1997.
Lee, E. “Chinese American Families.” In Working With Asian Americans: A Guide for Clinicians, E. Lee, ed. New York: Guilford Press, 1997.
Lee, E. and M. R. Mock. “Chinese Families.” In Ethnicity and Family Therapy, 3rd ed., M. McGoldrick, J. Giordano, and N. Garcia-Preto, eds. New York: Guilford Press, 2005.
Sodowsky, G. R., K. L. Kwan, and R. Pannu. “Ethnic Identity of Asians in the United States.” In Handbook of Multicultural Counseling, J. G. Ponterotto, et al., eds. Thousand Oaks, CA: Sage, 1995.
Wong, M. G. “The Chinese American Family.” In Ethnic Families in America, 3rd ed., C. H. Mindel, R. W. Habenstein, and R. Wright, eds. New York: Elsevier Science, 1988.
Xie, X., Y. Xia, and Z. Zhou. “Strengths and Challenges in Chinese Immigrant Families.” Great Plains Research, v.14 (2004).
Zane, N. and W. Mak. “Major Approaches to the Measurement of Acculturation Among Ethnic Minority Populations.” In Acculturation: Advances in Theory, Measurement, and Applied Research, K. M. Chun, et al., eds. Washington, DC: American Psychological Association, 2003.

Christening

From the colonial era to the present, christenings have primarily been performed in the Roman Catholic Church. The ceremony functions as a naming ceremony, and as a way to extend the child’s



family to include one or more godparents and all the members of the Roman Catholic Church. Christenings are often mistakenly equated with the Protestant practice of baptism, in which all the members of a congregation, rather than just the godparents, promise to rear the baptized infant in the Christian faith. In a christening, tradition long dictated that at least one of the infant’s names would be a saint’s name, but since the early 1980s, this requirement has not been enforced. The naming part of the ceremony usually occurs at the same time that the child is baptized by the priest, who sprinkles holy water on the infant’s head and says, “I baptize you in the name of the Father, Son, and Holy Spirit.” The priest may require the parents to attend classes to understand the significance of christening and baptism before conducting the ceremony. A key element of the christening ceremony involves the child’s parents appointing one or two godparents to assume responsibility for their child in the event that both parents become incapacitated or unexpectedly die. The godparents are also selected to serve as religious mentors, who will help the child understand his or her Catholic faith throughout childhood. If the parents appoint only one godparent, that godparent may be either male or female, but must be a practicing Roman Catholic (i.e., confirmed, has received Holy Communion, attends church regularly, and is free of church penalties) and at least 16 years old. If the parents appoint two godparents, one must be male and one female, both must be at least 16 years old, but only one of the two must be a practicing Roman Catholic. Divorced people can be godparents as long as they are in good standing with the Roman Catholic Church; namely, they are neither in a serious relationship nor remarried without having been granted an annulment of the previous marriage.
The non-Catholic godparent is referred to as a Christian witness; this is an individual who has been baptized in the Christian faith and affirms the Trinitarian nature of God. Both the parents and the godparents uphold the baptismal vows on the infant’s behalf until the child has attended confirmation classes and can voluntarily reaffirm those vows to become a member of the church. Thus, since colonial times, the christened infant’s family has reached beyond the nuclear and extended family to include members of the Roman


Catholic Church, and possibly Christian friends outside the Roman Catholic Church. However, the Roman Catholic Church does not recognize gay marriages, so an infant with gay parents is not allowed to have an extended family that includes members of the Roman Catholic Church.
The ceremony is usually conducted at a time other than Sunday mass, but with the same reverence. The parents, relatives, and friends dress for the ceremony in their usual attire for mass, and the infant may be dressed in a long, white or cream-colored baptismal gown to symbolize his or her new life in Christ. Afterward, friends and family may gather for a festive reception. Christian parents in other major branches of Christianity, such as Eastern Orthodoxy and Protestantism, the latter including Lutherans, Methodists, and Presbyterians, have their infants baptized during Sunday morning worship services but do not refer to these ceremonies as christenings, although some Christians sometimes mistakenly do. In these public baptisms, all the members of the congregation promise to join the parents in rearing the child in the Christian faith. The child may also have godparents present, who promise to guide the child’s religious education. The parents or godparents often receive a candle as a memento of the promise of baptism. The parents wear their usual Sunday attire, and babies may or may not be dressed in long white baptismal gowns, much like those used in Roman Catholic christenings.

See Also: Baptism; Catholicism; Christianity.

Emily R. Cheney
Independent Scholar

Further Readings
Fraser, Antonia. The Weaker Vessel. New York: Alfred A. Knopf, 1984.
“St. Joseph in Scripture.” Oblates of St. Joseph. http://www.osjoseph.org/stjoseph/scripture (Accessed June 2013).
U.S. Conference of Catholic Bishops. “Marriage: Love and Life in the Divine Plan.” http://www.usccb.org/issues-and-action/marriage-and-family/marriage/love-and-life/upload/pastoral-letter-marriage-love-and-life-in-the-divine-plan.pdf (Accessed June 2013).


Wills, Garry. “Garry Wills on Catholic Culture.” Commonweal, v.91 (1969). http://jakomonchak.files.wordpress.com/2012/02/garry-wills-on-catholic-culture1.pdf (Accessed June 2013).

Christianity

Christianity is one of the major religions of the world, with approximately 2.2 billion adherents. These adherents are split into three groups: the Catholic Church, the Eastern Orthodox Church, and the various Protestant denominations formed during the 16th-century Reformation, when they split from the Catholic Church. The term Christian means “Christ follower.” The religion is based on the Bible, which teaches that Jesus Christ is the Son of God, that he was born and died for believers’ sins, and that those who follow his teachings will have eternal life. Jesus, along with God the Father and the Holy Spirit, comprises the Holy Trinity, the essential Christian idea that God is of one nature but comprises three distinct “persons.” The portion of the Bible covering the time between the creation of Earth and the birth of Jesus is known as the Old Testament; the New Testament begins with Jesus’s birth. The first five books of the Bible are also known in Judaism as the Torah. Today, the three Abrahamic religions (Christianity, Islam, and Judaism) all trace their origins to Abraham, and they are interrelated. The Old Testament tells the story of Abraham, the father of the Israelites, and his relationship with God. Christians believe that the New Testament account of the death and resurrection of Abraham’s distant progeny, Jesus Christ, saves humanity from the eternal damnation that was its destiny because of Adam and Eve’s sin in Paradise, a concept known as “original sin.” Throughout the Old Testament, many authors focused on the relationship between God and man, including atonement for sin. Many books of the Old Testament are dedicated to teaching about sin and the sacrifices necessary to atone for it. The Old Testament foretells the life of a messiah (savior) who would be born and serve as the ultimate sacrifice for sin; the one who would take away sin, rather than simply atoning for it through continued sacrifices.

At the time of Jesus’s birth, the Jewish people were not looking for a Messiah to take away their sin; they were looking for a political “savior” to provide peace and stability for their nation. Therefore, Jesus was not widely recognized as the Messiah. Jesus lived a quiet life until around the age of 30, at which time he chose 12 men to follow him throughout Judea as he preached. Many viewed him as a prophet; others viewed him as a political threat. Jesus claimed to be the Son of God. He taught about the Kingdom of God and introduced a new covenant binding people together in love and forgiveness. At the age of 33, Jesus triumphantly entered the city of Jerusalem, but within one week, he was sentenced to death for his teachings. Christianity teaches that Jesus’s death was foretold in Old Testament scripture, as was his rising from the dead after three days. The religious leaders of the day knew these teachings, and placed a Roman guard by his tomb to prevent his followers from stealing the body and proclaiming that the prophecy had been fulfilled. The New Testament records that three days after his death by crucifixion, his followers proclaimed that his tomb was empty and that he had risen from the dead. Christianity teaches that Jesus’s resurrection proves that he was the Son of God, the savior, and that he had defeated original sin. After the resurrection, Jesus is believed to have appeared to Peter and the other disciples. During that time, he commissioned them to advance the kingdom of God by carrying his message to the world. Forty days after the resurrection, Jesus ascended into Heaven, an event that, according to the Bible, was witnessed by many. The New Testament, especially the Gospels of Matthew, Mark, Luke, and John, tells the story of these followers and close friends of Jesus. They told the story of Jesus’s life, death, and resurrection, and of those committed to spreading his word and establishing the Christian church.
Christianity in American Culture
One could argue that Christianity has influenced many aspects of American life. The term culture refers to the distinctions of a people in terms of education, language, customs, politics, and policies. These are influenced by what that group of people stands for—their basic beliefs and values. The United States is an ethnically diverse country, and was founded on the belief that all people



are created equal and possess the natural rights of liberty and property. Therefore, it follows that each individual is important, and all life is valuable. Without freedom and individual rights, there can be no freedom in economics, politics, or religion. These beliefs are founded in the idea of natural law—the use of sound reason to distinguish right from wrong. Many of America’s founding fathers who promoted natural law pointed to the Bible’s teaching that humans have within them the ability to understand and choose right from wrong because they were created by God. The moral values held by the founding fathers significantly shaped the political system of the country. It is commonly believed that early settlers came to America to escape religious persecution, and although that proved true for many groups, what brought the early explorers to America was a combination of spreading the Gospel and finding spices, gold, and opportunities for trade. England began sending settlers to clear the land and make way for an expanded English empire. However, the settlers moved in a different direction, and a new nation was born. With a few notable exceptions, the settlers were free to live and worship as they pleased, thus paving the way for a nation based upon the freedom to choose. The Pilgrims of Plymouth Colony came to America in 1620 in search of religious freedom. They had left England because of political unrest and fled to Holland, and later arranged for English investors to establish the Plymouth Colony in North America, building a home where they could maintain their identity and worship as they pleased. According to 2012 Gallup research, 77 percent of Americans identify with a Christian religion. However, over the centuries, many varieties of Christianity have taken root in the country (and around the world), and varying sects of Christians have differing views of how to respond to their culture.
For example, the Old Order Amish live apart from modern life and shun politics and popular culture, while conservative or fundamentalist Christians use their political power to influence legislation and national discussions.

Christianity in American Politics
The Constitution of the United States says little of God and religion. Most historians believe that the authors of the Constitution determined that


establishing an official religion was not a good idea, and they strongly desired the separation of church and state. Many believe that the founding fathers were Christians, although several prominent founding fathers (e.g., Thomas Jefferson, Benjamin Franklin, John Adams, Alexander Hamilton, and George Washington) were theistic rationalists, holding a mix of beliefs drawn from Christianity, natural religion, and rationalism. They were Christians to the extent that they believed in God and that God set the universe in motion, but not to the extent that they believed God remained involved in people’s everyday affairs. An opposing explanation for the Constitution’s silence about God is that the founders were creating this new government for limited purposes—to enable government to rule the people, yet also be ruled by the people—rather than to control people’s lives. The reasons given are many and complicated. What stands out is that the founding fathers were committed to freedom for men and women to worship (or not) as they pleased, according to their beliefs. So although many of the founding fathers of the United States claimed to be Christians, they could foresee the value of a nation with freedom and protection for all—even those who viewed a relationship with God as essential—and they did not advocate one viewpoint over another.

Suzanne K. Becking
Fort Hays State University

See Also: Baptism; Catholicism; Evangelicals; Protestants; Religious Holidays; Religiously Affiliated Schools; Saints Days; Sunday School.

Further Readings
Brekus, Catherine A. Sarah Osborn’s World: The Rise of Evangelical Christianity in Early America. New Haven, CT: Yale University Press, 2013.
Feuerbach, L. Essence of Christianity. Amherst, NY: Prometheus Books, 1989.
Noll, Mark A. A History of Christianity in the United States and Canada. Grand Rapids, MI: Eerdmans, 1992.
Osborn, R. E. Spirit of American Christianity. New York: Harper & Brothers, 1958.
Wills, David W. Christianity in the United States: A Historical Survey and Interpretation. Notre Dame, IN: University of Notre Dame Press, 2005.


Christmas

American families celebrate Christmas as both a religious and secular holiday, but it has not always been celebrated throughout the United States. Between 1659 and 1681, the Puritans in New England disapproved of the celebration of Christmas, believing that birthday celebrations were pagan customs. Christmas was banned in some colonies, and it fell out of favor after the American Revolution because it was considered an English custom. By 1850, however, it had begun to be commercially promoted, and it became a federal holiday in 1870. In the Christian tradition, Christmas marks the birth of Jesus Christ, who Christians believe was the Son of God and the source of human salvation. Regardless of religious orientation or degree of religiosity, American families typically celebrate Christmas in secular and religious ways. Christmas is the most anticipated and tradition-rich holiday for many Americans, and it is marked by numerous rituals, an abundance of symbolism, and time spent together as a family. Like other holidays (e.g., Easter), its pagan rituals and religious rituals overlap. December 25, for example, was chosen as the date to celebrate Jesus’s birth because it was also the birthday of the pagan “invincible sun” god (Sol Invictus). Pagans more readily accepted the Christian religion if they were not forced to give up their cherished rituals.

Secular Rituals
Most American families exchange gifts for Christmas, and children wake up to numerous gifts from Santa Claus on Christmas morning. In the weeks leading up to Christmas, children write letters to Santa Claus asking for particular gifts, or parents take their children to visit Santa Claus so children can sit on Santa’s lap and tell him what they would like for Christmas. According to the myth, on Christmas Eve, while children are asleep, Santa Claus rides through the sky in a sleigh pulled by reindeer and filled with presents for nice children and coal for naughty children. 
Santa’s sleigh lands on the roof of every home, and he climbs down the chimney to place gifts under the decorated Christmas tree. Children are encouraged to behave well during the year so that they will receive a present from Santa.

Although other countries had earlier versions of Santa Claus, the contemporary U.S. Santa was born in an 1809 book by Washington Irving. In Irving’s telling, Santa smokes a pipe, has a reindeer and sleigh, and delivers presents to children. However, Santa did not get his red suit and begin living at the North Pole until 1863. More recently, perhaps because the once-a-year threat of not getting presents from Santa was not enough to control children’s behavior for an entire year, a new tradition, the Elf on the Shelf, has evolved. Stemming from a 2005 children’s picture book, the elf, a small stuffed doll, arrives at children’s homes on Thanksgiving. Once everyone goes to bed at night, the elf is believed to fly to the North Pole and report children’s behaviors to Santa. Before the family wakes up in the morning, the elf flies back and hides in a new spot in the house. The children search for the elf each day, and the only rule is that they cannot touch the elf; if they touch it, it will lose its magic. In 2012, the elf made its debut in the Macy’s Thanksgiving Day Parade, so it is a tradition that seems likely to last.

Religious Rituals
Several Christian traditions mark the passage of time leading up to Christmas, and many Christian churches hold services that are widely attended the night before the holiday. Some churches have Christmas pageants in which children reenact the biblical story of Jesus Christ’s birth, some sing songs and read biblical scripture associated with Christmas, and others (Catholics, in particular) take communion, a symbolic ritual celebrating the sacrifices that they believe Jesus Christ made for them. Other religious traditions, such as Advent and Epiphany, are also closely associated with Christmas, but are generally recognized only by highly religious families and those of particular religious faiths. 
Historical Traditions
The Christmas traditions that American families observe today are rooted in older traditions that families celebrated in the past. These traditions often followed immigrants to the United States. Some traditions have been forgotten or discontinued for various reasons. For example, families historically decorated their Christmas trees with lit candles, and they placed less emphasis on the gifts that were underneath the tree. Instead, they placed more emphasis on being together as a family. Many of the gifts given to family members were homemade, and it was common to receive only an orange and nuts in the bottom of the stocking. Today, most families decorate their trees with artificial lights, in part because of the fire hazard that lit candles present. While more emphasis is placed on gifts, and it is less common to find homemade gifts under the tree, many current Christmas traditions are variations of those celebrated in the past.

Conflicting Traditions
Observers of Christmas have occasionally debated the best way to observe the holiday in public. Although Christmas has both secular and religious aspects, it is so ubiquitous in American culture that some Christians advocate keeping the emphasis on the religious aspects of the holiday, whereas others seek to remove religious aspects from public acknowledgments of the holiday. For example, some have argued that public acknowledgments should reference the “holiday season,” rather than Christmas in particular, in order to be more inclusive in a pluralistic society. Although the majority of Americans self-identify as Christians, a substantial minority do not, and some of these people celebrate other holidays, such as Hanukkah and Kwanzaa, at the same time of year. In some cases, public displays depicting the birth of Jesus have been removed or banned following complaints about religious observances on public land. Although these debates are a source of tension for some, many families celebrate Christmas with a blend of religious and secular rituals. For example, it is common for a child to be involved in a church Christmas pageant as well as to have a photograph taken with Santa Claus. Families choose which rituals they would like to celebrate together, and in most cases do not feel conflicted about combining religious and secular traditions. 
Service and Materialism Whether families observe Christmas as a religious or secular holiday, many American families view Christmas as an opportunity to give to those who are less fortunate. For example, several nonprofit organizations, such as the Salvation Army, set up donation centers outside stores and ask shoppers to consider donating to families in need. Food banks


typically receive a relatively high volume of donations for families who may not otherwise be able to enjoy an abundance of food around Christmastime. Similarly, some cities, businesses, and charity organizations place Christmas trees in malls and other public places with requests for toys for local families who do not have enough money to buy gifts for their children. December also marks the end of the tax year in the United States, and many families choose to make generous donations of cash to a nonprofit organization of their choice, which is considered a gift “in the Christmas spirit,” but also serves to reduce the family’s tax burden for the year.

Materialism
Much of the Christmas season is driven by materialism. The month between Thanksgiving and Christmas is the most critical period of the year for most American businesses that sell consumer goods. Black Friday, the day after Thanksgiving, has taken on considerable significance because it is said to be the day on which retailers begin turning an annual profit, moving “into the black.” The materialistic focus is difficult for some economically disadvantaged families who cannot afford to buy numerous or expensive gifts, and families that observe Christmas as a religious holiday often struggle to find a balance between its sacred and secular aspects. Christmas shopping is an enormous boon to the economy, so the focus on gifts and materialism is not likely to change in the near future.

Family Transitions
Maintaining long-held and deeply cherished family traditions associated with Christmas can be difficult following a family transition, but such transitions can also provide an opportunity to establish new traditions. Transitions such as marriage require newlyweds to explicitly discuss their differing holiday traditions. 
For example, although gift exchange and the Santa Claus myth are widely observed by Americans who celebrate Christmas, the process associated with exchanging or opening the gifts, the food and the context in which it is served, the prominence of the religious aspects of the holiday, and the permeability of household boundaries during the holiday vary more across families. Transitions out of a family unit, such as when parents of minor children divorce, also force adjustments to established family traditions, most notably concerning when and where (i.e., with which



parent) children will spend the holiday. Examples of common arrangements in this circumstance are that children alternate between spending the holiday with the mother and the father every other year, or that the children spend Christmas Eve with one parent and Christmas Day with the other parent.

Stephanie E. Armes
Jason Hans
University of Kentucky

See Also: Catholicism; Christianity; Church of Jesus Christ of Latter-day Saints; Easter.

Further Readings
Etzioni, A. “Toward a Theory of Public Ritual.” Sociological Theory, v.18 (2000).
Stronach, I. and A. Hodkinson. “Towards a Theory of Santa: Or, the Ghost of Christmas Present.” Anthropology Today, v.27/6 (2011).
Thomas, J. B. and C. Peters. “An Exploratory Investigation of Black Friday Consumption Rituals.” International Journal of Retail & Distribution Management, v.39 (2011).

Church of Jesus Christ of Latter-day Saints

The Church of Jesus Christ of Latter-day Saints (LDS) was formally organized on April 6, 1830, in Palmyra, New York, by founding president Joseph Smith during the era of U.S. history known as the Second Great Awakening. Smith was considered a prophet, through whom God restored the church of the Latter-day Saints (i.e., the Mormons), which is believed to have been lost to apostasy in ancient times. As of the early 21st century, there are about 15 million Mormons in the world, including about 6 million in the United States, making Mormonism the fourth-largest denomination of Christians in the United States. The LDS Church is based in Salt Lake City, Utah, which was founded by Mormon leader Brigham Young and several others in 1847, three years after Smith was killed by a mob in Carthage, Illinois, as the result of his controversial

religious views. As of 2012, about 62.5 percent of Utah’s population identified as Mormon. In addition to the Bible, the LDS Church also has several other canonical texts, including the Book of Mormon, the Doctrine and Covenants, and the Pearl of Great Price.

Familial Nature of God
Latter-day Saints believe that the Godhead consists of God the Eternal Father, His Son Jesus Christ, and the Holy Ghost. Unlike the Christian Trinity, in which there is only one God, manifest in three aspects, Mormons believe the members of the Godhead are separate, distinct persons. God the Father and Jesus each have a glorified body of flesh and bones, while the Holy Ghost is a personage of spirit. Mormons pray to Heavenly Father, in the name of Jesus Christ, by the power of the Holy Ghost. Mormons believe God the Eternal Father is literally the father of the soul (spirit) of every person who has ever lived. Mormons also believe that all human beings have not only an Eternal Father but also an Eternal Mother. So, every person’s soul is literally the offspring of heavenly parents. Thus, there is an eternal familial relationship between God and all human beings. This emphasis on the familial nature of the relationship between God and humanity is distinct from most other religions, which teach that human beings are creatures, not literal children, of God. This doctrine of God is central to how Mormons think about all aspects of life.

The Plan of Happiness
The plan of happiness (also called the plan of salvation or the great plan of redemption) is a system of doctrines that answers the questions: Where were people before birth? Why are they on Earth? Where do they go when they die? The plan teaches that before birth, all individuals lived with God as His spirit sons and daughters, where they accepted God’s plan to come to Earth to progress toward becoming more like God. 
The purpose of Earth life is to gain a physical body, make choices, be tested in a place of opposition and temptation, and learn to form families. When people die, their spirits (souls) go to the spirit world, where they continue to learn and grow and make choices, and at some point, all human beings will receive a resurrected body of flesh and bones. All human beings have the potential to form eternal marriages and families.



Church of Jesus Christ of Latter-day Saints

237

a revelation to discontinue plural marriage. Thus, of the roughly 185 years that the LDS Church has been in existence, plural marriage was sanctioned for only about 50 of them. Although plural marriage may have characterized 19th-century Mormonism, contemporary LDS marriage and family life is decidedly traditional, and the church and its members have become some of the strongest proponents of marriage between one man and one woman.

A statue of Joseph Smith stands holding a copy of the Book of Mormon at the North Visitors’ Center, Temple Square, at the Church of Jesus Christ of Latter-day Saints.

Plural Marriage
Joseph Smith taught a set of distinctive doctrines pertaining to marriage and family life, including what Mormons call “plural marriage” (more commonly known as polygyny, or one husband with multiple wives). The practice of plural marriage was considered a divinely sanctioned restoration of biblical marriage as practiced by Abraham, Isaac, Jacob, David, Solomon, and others in the Old Testament. Although polygyny has been practiced in many cultures throughout world history, it was considered wrong and strange in the 19th-century United States and Europe. Thus, Mormons suffered persecution from other Americans for this practice, and the government of the United States enacted laws against it, including disenfranchising the church and confiscating its property. Wilford Woodruff, fourth president of the LDS Church, announced in 1890 that he had received

Eternal Marriage and Family
Although most people know that Mormons once practiced plural marriage, relatively few know the most central Mormon doctrine on marriage and family: eternal marriage and family. Smith taught the doctrine that God has made it possible for marriage and family life to transcend death and last throughout eternity, and this belief is central to the LDS Church. Mormons believe that God intends for marriage to be eternal. Thus, in LDS temples, couples are “sealed” to one another, and are considered married “for time and all eternity,” rather than until death. Mormons believe that this doctrine is consistent with the biblical passage in which the Apostle Paul states in 1 Corinthians 11:11 that “neither is the man without the woman, neither the woman without the man, in the Lord.” The doctrine of eternal marriage provides inspiration for LDS couples to marry and to strive to strengthen and preserve their marriages in the face of the many challenges to marital happiness and stability found in contemporary Western culture. Latter-day Saints have a high rate of marriage and marry, on average, a few years earlier than the national average. Couples married in LDS temples also have a lower divorce rate than the general population. The doctrine of eternal marriage also provides solace to widows and widowers, who look past the temporary separation that the death of a spouse brings, and toward an eternal reunion with their departed loved one.

Eternal Families
Latter-day Saints believe that children born to couples that have been sealed in an LDS temple are “born in the covenant,” and are thus sealed to their parents throughout time and eternity. Converts to the LDS Church may go to temples with their children and be sealed to them. Therefore, LDS

238

Church of Jesus Christ of Latter-day Saints

doctrine emphasizes the eternal nature of family relationships. LDS couples tend to have more children than the national average, and strive to live a family-centered life. There are a number of couple- and family-oriented religious practices that the devout engage in, including couple prayer, family home evening, family prayer, and scripture study. Daily couple prayer is practiced among LDS married couples. Mormons are encouraged to pray as individuals each day, and married couples also join together in daily prayer. This practice fosters greater unity, harmony, and marital love. Couples with children (even young children) are also encouraged to have daily family prayer. In the early 20th century, LDS Church leaders encouraged families to hold a weekly family home evening (FHE; also called home evening, or family night). Toward that end, the church sets aside Monday evenings for FHE by not allowing any other church gatherings to be scheduled then. FHE typically involves the family gathering in the home to do things such as sing hymns, study scriptures, have fun, play games, share religious lessons, have religious discussions, and enjoy refreshments.

Family History and Genealogy
Mormons believe that every person who has ever lived is a literal daughter or son of the Heavenly Father and Heavenly Mother. Therefore, every person is a spiritual brother or sister to every human being who has ever lived. Every person is a beloved eternal child of God, and Mormons’ goal is to ensure that everyone has the opportunity to hear and accept the gospel and obtain salvation. Jesus taught that only those who believe in him and are baptized may enter the kingdom of God and enjoy eternal life. Smith taught that God has made provisions for every person who has ever lived on Earth to have an opportunity to hear the fullness of the restored gospel of Jesus Christ (that is, the gospel taught by the Church of Jesus Christ of Latter-day Saints). 
The vast majority of humanity that has lived on Earth has not had the opportunity to hear about Jesus Christ, much less the restored gospel. Mormons believe that every single person will be fully taught the doctrines and ordinances of the restored gospel either in this life or after death when their souls are in the spirit world. Jesus taught that “the hour is coming, in the which all that are in the graves shall hear his voice” (John 5:28), which

Mormons interpret to mean that the gospel will be taught to those in the spirit world. Additionally, Jesus stated that to be saved, a person must be baptized. Smith taught that those people who accept the gospel of Jesus Christ in the spirit world must have someone be baptized for them in order for them to be saved. Thus, Mormons search out the names of their deceased ancestors and go to LDS temples in order to be baptized by immersion on their behalf. In addition to baptism, they perform other vicarious ordinances (sacraments), such as the sealing of couples and families on behalf of deceased persons. This means that practicing Mormons conduct much genealogical research in an effort to create strong connections across generations and create eternal families.

Temple Attendance
Practicing Mormons worship each Sunday in LDS chapels, which are sometimes called meeting houses. The service includes the sacrament of the Lord’s Supper in remembrance of the suffering and death of Jesus. Members also teach each other the doctrines of the gospel, and discuss how best to live gospel principles in a challenging and changing world. Because it is only in LDS temples that members may perform the sacred ordinances for deceased persons, attending the temple on a regular basis is an important part of being a practicing Mormon. In the early 21st century, 170 Mormon temples exist throughout the world. Like most other religious bodies, the LDS Church teaches that heterosexual marriage is the foundation of society. Because of its central and strong commitment to marriage, the church and its members have actively opposed efforts to redefine marriage to include same-sex couples. The church maintains that these efforts are driven not by hatred or fear of homosexual persons, but rather by a desire to preserve traditional marriage as a union between a man and a woman. 
The church supports efforts to prevent legal discrimination against gays and lesbians, and it opposes any form of violence or hostility toward them.

David C. Dollahite
Brigham Young University

See Also: Christianity; Cults; Evangelicals; Sunday School.

Further Readings
Dollahite, David C. “Latter-day Saint Marriage and Family Life in Modern America.” In American Religion and the Family: How Faith Traditions Cope With Modernization, Don S. Browning and David A. Clairmont, eds. New York: Columbia University Press, 2007.
Dollahite, David C. and Loren D. Marks. “The Mormon American Family.” In Ethnic Families in America: Patterns and Variations, 5th ed. Upper Saddle River, NJ: Pearson, 2010.
Hawkins, Alan J., David C. Dollahite, and Thomas W. Draper. Successful Marriages and Families: Proclamation Principles and Research Perspectives. Provo, UT: BYU Studies Press, 2012.
Ludlow, Daniel H. The Encyclopedia of Mormonism. New York: Macmillan, 1992. http://eom.byu.edu (Accessed December 2013).

Circumcision

The circumcision of male infants is one of the most common surgical procedures performed in the world, and is the most common surgery performed in the United States. While adolescent and adult males sometimes undergo circumcision, the procedure is most commonly performed on newborns, typically within a few days, weeks, or months following birth. The procedure involves removal of up to 50 percent of the foreskin (or prepuce) of the penis, thereby fully exposing the glans. In some instances, the frenulum (a ridge of skin that connects the prepuce to the glans) is also cut, and in some cases, an incision may be made on the glans. Male circumcision has a long history, and is one of the oldest recorded surgical procedures in the world. It is carried out for a variety of religious, social, and medical purposes. Yet, the procedure has been, and continues to be, widely contested, particularly as scientific evidence about the health benefits associated with circumcision is debated. Current objections to the routine circumcision of newborns are raised by those who believe that the practice is a fundamental violation of the individual’s right to bodily integrity. Historically, circumcision has served a religious function, and has also been used to promote a variety of social and medical aims. In Judaism,

Circumcision

239

circumcision symbolizes a covenant with God, and is believed to go back as far as Abraham. The procedure is typically carried out on the eighth day following birth, and is generally performed by a trained individual known as a mohel. Within Islam, the ritual of circumcision, known as khitan or khatna, symbolizes cleanliness; it used to be carried out as a rite of puberty when a boy was between 7 and 10 years old, but it is now more commonly performed in a hospital following a boy’s birth. In ancient times, circumcision was frequently used in contexts of social control and warfare. For example, it was used to physically mark slaves. It was also used as a means to celebrate military conquests, as victors would remove the foreskins of members of the opposing force and collect them in a bag, displaying them as war trophies. During the 19th, 20th, and 21st centuries, circumcision has been promoted as a “cure-all” of sorts, a way to treat or prevent a variety of illnesses and conditions. For example, in earlier decades, circumcision was touted as a means to prevent masturbation and the “perverse” mentality that was attributed to men and boys who were caught engaging in the practice. During the early and into the mid-20th century, circumcision was also promoted as a means to cure or treat paralysis, hip disease, nervous conditions, and antisocial behavior, and to prevent penile cancer, skin conditions such as eczema, tuberculosis, and imbecility. It was also endorsed by some physicians to address certain sexual problems, including as a means of curing impotence and of curbing lewd and voracious sexual appetites.

Recent Practices and Perceptions
In more recent years, increased attention has been given to circumcision and sexually transmitted diseases (STDs) and sexually transmitted infections (STIs). Beginning in the mid-1990s, studies in various African countries indicated that circumcised males had lower rates of STDs and STIs than uncircumcised males. 
Consequently, some health care providers and politicians began to advocate for routine male circumcision as a means to stem HIV infection rates. Yet, more recent data suggest that there may be no statistically significant difference in rates of STD and STI infection between circumcised and uncircumcised males, particularly when the studies are controlled for other factors, such as age,

240

Civil Rights Act of 1964

race, ethnicity, socioeconomic status, sexual orientation, and sexual practices. Thus, while medical associations such as the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists suggest that there may be some health benefits associated with circumcision, such as reduced rates of infection and penile cancer, they have stopped short of endorsing routine circumcision of male infants as a prophylactic measure. Since the 1970s, there has been a gradual decline in rates of routine infant circumcision in the United States. Whereas approximately 80 percent of male infants born in the 1970s and 1980s were circumcised, at present, only about half of newborn males in the country undergo the procedure. In some states, circumcision rates are only around 40 percent. Rates of circumcision remain higher among those of white European descent than among African Americans or Latinos. Many parents choose to circumcise their sons for social reasons, including because other male family members are circumcised, because they wish to protect their sons from the ridicule they presume their sons will face if they are not circumcised, or because they deem a circumcised penis more aesthetically pleasing. Yet, there are growing concerns about the side effects and complications associated with routine circumcision. The procedure can lead to infection and excessive bleeding. In some cases, infants have died as the result of complications from circumcision. In addition, adults who were circumcised as infants may face long-term complications. Removal of the foreskin and other tissue also removes highly sensitive nerve receptors and can leave scarring. Thus, men who are circumcised may experience decreased sensation in the penis and higher rates of sexual dysfunction. Some research also suggests that some female sexual partners have a distinct preference for either circumcised or uncircumcised partners. 
In the past decade, a number of activist groups have attempted to stop the routine circumcision of male infants, dubbing the practice a violation of human rights. As part of their efforts, these organizations have compared male circumcision to female genital cutting (also known as female genital mutilation or female circumcision). Such groups have provided public education about the health risks associated with the practice and supported legislation that

would make routine circumcision illegal. However, the practice remains legal in all 50 states.

Jillian M. Duquaine-Watson
University of Texas at Dallas

See Also: HIV/AIDS; Islam; Judaism and Orthodox Judaism.

Further Readings
Aggleton, Peter. “‘Just a Snip’?: A Social History of Male Circumcision.” Reproductive Health Matters, v.15 (2007).
Fink, Kenneth S., Culley C. Carson, and Robert F. DeVellis. “Adult Circumcision Outcomes Study: Effect on Erectile Function, Penile Sensitivity, Sexual Activity and Satisfaction.” Journal of Urology, v.167 (2002).
Gollaher, David L. Circumcision: A History of the World’s Most Controversial Surgery. New York: Basic Books, 2000.
O’Hara, K. and J. O’Hara. “The Effect of Male Circumcision on the Sexual Enjoyment of the Female Partner.” BJU International (Supplement I), v.83 (1999).
Smith, Dawn K., et al. “Male Circumcision in the United States for the Prevention of HIV Infection and Other Adverse Health Outcomes: Report From a CDC Consultation.” Public Health Reports (Supplement I), v.125 (2010).
Svoboda, J. Steven. “Circumcision of Male Infants as Human Rights Violation.” Journal of Medical Ethics, v.39 (2013).
Task Force on Circumcision. “American Academy of Pediatrics: Circumcision Policy Statement” (2012). http://pediatrics.aappublications.org/content/130/3/585 (Accessed December 2013).

Civil Rights Act of 1964

The Civil Rights Act of 1964 is one of the most important civil rights laws passed by Congress in the history of the United States. As the civil rights movement gained momentum following the U.S. Supreme Court decision in Brown v. Board of Education in 1954, President Kennedy called for an expansion of civil rights in a speech on February 23, 1963.



A civil rights bill was first introduced in June 1963, and a revised bill was introduced later in October 1963. After President Kennedy’s assassination, President Lyndon Johnson quickly supported the passage of a civil rights act. From December 1963 to June 1964, the legislative battle over the bill reflected a tumultuous division in Congress between those who sought to pass the bill and the powerful southern forces who opposed it. Ultimately, the bill proceeded through the Commerce Committee, rather than the Judiciary Committee, in order to keep it alive in the Senate. Eventually, it gained support through the efforts of Republican leader Everett Dirksen, and passed both the House and the Senate. President Johnson signed the bill into law on July 2, 1964. The Civil Rights Act consisted of 11 titles, of which Titles II, III, IV, and VII were the most important in the immediate aftermath of the law’s passage. Title I dealt with the extension of voting rights and the right of the U.S. Attorney General to file lawsuits. Title II prohibited discrimination based on race, color, religion, or national origin in places of public accommodation such as hotels, restaurants, and entertainment venues. Both of these titles disproportionately affected southern states. Changes were rapidly adopted, and they effectively ended legal discrimination. Title III dealt with desegregation of public facilities. If a person complains that he or she is deprived of or threatened with a loss of rights to the equal protection of the laws in “any public facility owned, operated or managed by or on behalf of any State or subdivision thereof,” the Attorney General can bring a lawsuit for relief. Title IV provided for desegregation of public schools, and gave the Attorney General the right to initiate court proceedings to obtain relief for equal protection for those denied entrance to the schools. 
The immediate effect of the act was the desegregation of southern public schools within five years of its passage. By having federal agencies enforce regulations and threaten public schools with the loss of federal funds (Title VI), the schools were desegregated. Following Supreme Court cases in the early 1970s, desegregation also reached metropolitan urban areas outside the south, although these efforts were not as successful as in the southern states. In the 1970s, the act also led to the establishment of bilingual education programs throughout the country for

Civil Rights Act of 1964

241

students who did not speak English and desegregation of state public colleges. Title V amended the Civil Rights Act of 1957 to further define the role of the Civil Rights Commission in investigating particular incidents, calling of witnesses, and writing reports on its activities. Title VI cut off funding to federally assisted programs that did not integrate. Although this title was not recognized as significant as the others, the threat of cancelling federal funding became an important method of achieving public school integration. Title VII prohibited employment discrimination based on race, color, religion, sex, and national origin. The addition of sex to the title was devised as a means to forestall passage, but the amendment went through without opposition. The newly created Equal Employment Opportunity Commission handled complaints, but the act’s failure to define discrimination left the courts to figure it out in thousands of cases. Two early U.S. Supreme Court cases that fell under this aegis were Griggs v. Duke Power Co. (1971) and McDonnell Douglas Corp. v. Green (1973). The legal framework for gender discrimination also led to the women’s movement of the 1960s, and paved the way for record numbers of women entering the workplace in subsequent decades. Title VIII called upon the Secretary of Commerce to promptly conduct surveys for registration and voting statistics, as recommended by the Commission on Civil Rights. Title IX gave the Attorney General the right to move civil cases to federal court if it seemed likely that an all-white jury and segregationist judges would prevent a defendant from receiving a fair trial. Title X established a Community Relations Service as part of the Department of Commerce in order to help resolve community disputes involving allegations of discrimination. Title XI provided for miscellaneous provisions, including proceedings for criminal contempt. 
Those found guilty could be fined $1,000, or imprisoned for no more than six months. The Civil Rights Act helped promote future legislation that instituted more rights for women, people with disabilities, the elderly, and Hispanic minorities. Title VI also led, along with the Medicare and Medicaid programs of 1965, to the desegregation of hospitals, giving African American families complete access to what were previously "white" hospitals. Title VII removed "male only" job positions, and gave women the right to apply for and hold various positions. Black poverty dropped from more than 40 to 27 percent, while child poverty decreased from 67 to 40 percent. Median family income rose from $22,000 in the early 1960s to around $40,000 in 2013, according to the U.S. Department of Health and Human Services. Educational opportunities for minority students and women have increased in terms of the numbers of students and the percentages of those graduating from high school, attending colleges and graduate schools, and earning college and graduate degrees. Since 1965, the federal government has provided funding for public housing that has given poor families better housing opportunities.

Joel Fishman
Duquesne University

See Also: African American Families; Civil Rights Movement; Segregation.

Further Readings
Grofman, Bernard, ed. Legacies of the Civil Rights Act of 1964. Charlottesville: University Press of Virginia, 2000.
Loevy, Robert D., ed. The Civil Rights Act of 1964. Albany: State University of New York Press, 1997.
Mayer, Robert H. The Civil Rights Act of 1964. Farmington Hills, MI: Greenhaven Press, 2004.
U.S. Department of Health and Human Services. "2014 Poverty Guidelines." http://aspe.hhs.gov/poverty/14poverty.cfm (Accessed April 2014).
U.S. Department of Justice, Civil Rights Division. http://www.justice.gov/crt (Accessed December 2013).

Civil Rights Movement

For centuries in the United States, the principles of freedom, equality, and justice eluded African Americans. By the turn of the 20th century, however, black Americans began to dismantle the system of oppression and exploitation that they suffered under Jim Crow–era segregation. The modern civil rights movement gained momentum in the 1950s. In 1954, the U.S. Supreme Court delivered an important legal victory in Brown v. Board of Education, which signaled the official launch of this most crucial period for civil rights and the struggle for freedom. This modern struggle symbolically culminated in 1968 with the assassination of civil rights leader Martin Luther King, Jr. During this sustained mass movement of the mid-20th century, African Americans and whites challenged a government that fought for freedom abroad during World War II, but kept 20 million people oppressed with restrictive laws and customs at home. Through nonviolent protests, black people asked for "double victory" over fascism abroad and racism at home. They succeeded on many legal, political, and social fronts, making the civil rights movement one of the most important eras in American history. This modern civil rights era was a continuation of African Americans' long struggle for freedom and citizenship that began when they were brought to the colonies in the early 1600s as slaves. However, the 1950s and 1960s marked a distinctive period in the political and social life of black Americans. These two decades included a series of turning points, including Brown v. Board of Education (1954), the murder of Emmett Till in 1955, the 1955 Montgomery bus boycott, the 1957 Little Rock Nine incident, the 1960 Greensboro sit-ins, the 1961 Freedom Rides, and the 1963 March on Washington, all of which built up an undeniable momentum that led to the passage of the 1964 Civil Rights Act and the 1965 Voting Rights Act. Moreover, these major turning points, coming almost a century after the end of slavery and Reconstruction, brought renewed hope that fundamental change in society was possible, not only for African Americans, but also for their children.
In that sense, the civil rights movement was a family endeavor to dismantle racial stratification and secure economic, political, and social equality for all African Americans, both then and in the future. Moreover, no single individual, group, or institution alone put the civil rights movement on the national agenda; many families played an important role in this defining moment in history. For many, the civil rights movement began long before 1954, and has evolved into a new phase in the 21st century. This centuries-old movement was situated in a larger context of parallel movements of oppressed people and freedom fighters in Africa, the Caribbean, and other parts of the developing world, which sought to dismantle the shackles of colonialism and apartheid.

Brown v. Board of Education
The seed for victory in the 1954 Brown v. Board of Education of Topeka, Kansas, school desegregation case was planted in the 1930s with a series of court cases regarding segregation brought by the National Association for the Advancement of Colored People (NAACP) Legal Defense Fund and a group of lawyers from Howard University. Their aim was to dismantle constitutionally sanctioned law school segregation. In University of Maryland v. Murray, Maryland's highest court ordered the all-white law school to admit a black student. This victory led to other challenges to school segregation in the south with a series of major Supreme Court cases. In 1954, under the leadership of Chief Justice Earl Warren, the Supreme Court produced a unanimous decision in the Brown case, declaring the long-standing "separate but equal" approach to segregated schools both unequal and unconstitutional. Brown overturned Plessy v. Ferguson, the 1896 decision that had sanctioned rigid "separate but equal" segregation, including in U.S. educational institutions. The Supreme Court's decision in Brown marked a turning point in the history of race relations in the United States. On May 17, 1954, the Court stripped away constitutional sanctions for segregation by race, and made equal opportunity in education the law of the land. Brown reached the Supreme Court through the fearless efforts of lawyers, community activists, parents, and students. Their struggle to fulfill the American dream set in motion sweeping changes in American society, and redefined the nation's ideals. This was a family affair. The Brown family and other families galvanized to help the lawyers achieve this important victory. In the other victories throughout the civil rights movement, families were at the center of each struggle.
The 1954 Supreme Court ruling ushered in a wave of massive resistance from whites across the south. Like the families behind that ruling, the families involved in later events faced the wrath of whites, particularly those who joined the White Citizens' Council, the Ku Klux Klan, and other organizations that pledged to fight racial integration.


The murder of a 14-year-old boy named Emmett Till was perhaps one of the most tragic illustrations of the racial fanaticism that permeated the south and shattered one family. Till was a precocious Chicago boy who was visiting his family in Money, Mississippi, during the summer of 1955. While at a store with his cousins, Till whistled at a 21-year-old white woman, Carolyn Bryant. The woman reported this "forbidden act" to her husband Roy. Several nights later, the husband and his brother J. W. Milam went to the home of Till's great-uncle, dragged the boy out of the house, and took him to a barn. They bludgeoned him, gouged out one of his eyes, shot him, and threw him in the Tallahatchie River. Three days later, officials dragged the bloated body out of the river. Till's mother took her son back to Chicago, and asked to have an open-casket funeral so that the world could see the savagery of the attack. A picture of his bloated body and face, with a gouged-out eye, was published in Jet magazine, attracting worldwide attention and rallying black support. Thousands attended the funeral. However, Mississippians were not swayed by the intense scrutiny. An all-white jury acquitted Bryant and Milam of murdering Emmett Till.

Montgomery Bus Boycott
The Montgomery bus boycott in 1955 catapulted a young southern preacher named Martin Luther King, Jr., to fame. But the catalysts of that movement were longtime local activists such as E. D. Nixon and Jo Ann Robinson, along with the thousands of men, women, and children who "substituted tired feet for tired souls." The bus boycott was prompted by Rosa Parks's arrest on December 1, 1955. Police arrested Parks after she refused to give up her seat on a bus so that a white man could sit down. The arrest of Parks, a longtime activist, sparked the 13-month mass protest. During this time, a federal district court ruled in Browder v. Gayle that segregation on public buses was unconstitutional, and the Supreme Court upheld the ruling in 1956.
Montgomery Improvement Association President Martin Luther King, Jr., and the marchers agreed to end the boycott on December 20, 1956. This was the first major post–World War II nonviolent protest by blacks against racial segregation. The preparation for the bus boycott began years before Parks's arrest. The Women's Political Council (WPC), a group of black professionals founded in 1946, had been addressing Jim Crow practices on the Montgomery city buses. They met with Montgomery Mayor W. A. Gayle, and outlined their three main requests: blacks should be treated with courtesy; blacks should not have to pay the driver, get off, and then enter at the rear of the bus; and buses should stop at each corner as they did in the white neighborhoods. A letter and a meeting with Gayle failed to produce meaningful change, and Jo Ann Robinson, president of the WPC, wrote to Gayle, stating that there were plans to boycott the buses. Parks's arrest put that plan in motion. There had been arrests before Parks's, most notably those of 15-year-old Claudette Colvin and 18-year-old Mary Louise Smith. Besides Robinson, other women who played a pivotal role in the success of the boycott included Johnnie Carr, Irene West, and the "nameless cooks and maids who substituted tired feet for tired souls." Most important, this historical event involved not just mothers, but also daughters, sons, and fathers who championed a cause and carried on the fight that their parents and grandparents had fought in previous centuries. Parks was active in the NAACP; E. D. Nixon, a leader of both the NAACP and the Brotherhood of Sleeping Car Porters, collaborated with the Montgomery Improvement Association, which coordinated the bus boycott. King and his close-knit group of preachers drew international attention to Montgomery after they organized men, women, and children to walk miles to and from school and work. The bus boycott proved that King's nonviolent strategy of confronting power and challenging racial segregation could succeed.

The Martin Luther King, Jr., Memorial, located in Washington, D.C., opened to the public on August 22, 2011, to commemorate Dr. King and his contributions to civil rights.

Little Rock Nine
Following the 1954 U.S. Supreme Court ruling that segregation was unconstitutional, nine black students, with the blessings of their families, attempted to enroll in Little Rock Central High School on September 4, 1957. They met brutal resistance. Martin Luther King, Jr., wrote to President Dwight D. Eisenhower, asking him to "take a strong forthright stand" and help enforce the law. Eisenhower refused. However, a stunning turn of events changed his stance. When a white mob gathered in front of the school to spew racial epithets and Governor Orval Faubus used the Arkansas National Guard to block the black students from entering the high school, Eisenhower responded by sending in the U.S. Army's 101st Airborne Division to protect the students. Later in September 1957, with the help of a federal district court injunction to circumvent Faubus's human blockade at the front entrance of the school, the students were escorted through a side entrance. The students had to be escorted by the federal troops and the Arkansas National Guard throughout the school year. However, Eisenhower's effort to restore law and order in Little Rock did not sway Faubus. He closed all of Little Rock's public high schools in the fall of 1958 to prevent desegregation. In December 1959, the Supreme Court ruled that the Arkansas school board must reopen the schools and continue with plans to desegregate. This episode added to the previous victories since Brown, and secured the foundation upon which the civil rights movement grew in the following decade.

Sit-Ins and Freedom Rides
On February 1, 1960, four African American college students at North Carolina A&T College decided to demand service at a Woolworth's whites-only lunch counter in Greensboro, North Carolina. They refused to leave until they were served a cup of coffee. This small success prompted a sit-down protest movement across the south, with about 70,000 people in 150 towns and cities joining the movement. They were beaten, abused, and arrested, but they maintained their nonviolent decorum, as Ella Baker, a founder of the Student Nonviolent Coordinating Committee (SNCC), had counseled them. This new form of protest signaled that the battle for civil rights had entered a new phase. In 1961, James Farmer, the leader of the Congress of Racial Equality (CORE), advocated for desegregating public transportation throughout the south by instituting Freedom Rides, which were to be carried out by nonviolent volunteers known as Freedom Riders. The first Freedom Ride began on May 4, 1961, with seven black and six white volunteers. They were tasked with riding through the south to protest many states' refusals to comply with a 1960 Supreme Court ruling, Boynton v. Virginia, which had declared segregation on interstate buses and in waiting rooms unconstitutional. The Freedom Riders started in Washington, D.C., on two public buses that were headed to Alabama and Mississippi. They encountered white mobs in most cities along the route. The worst mob attack happened in Birmingham, Alabama, abetted by Commissioner of Public Safety Bull Connor. The attack was so vicious that it made national and international headlines, and forced organizers to discontinue the Freedom Rides. These incidents also forced these young students back into the arms of their families, many of whom had warned them not to venture down south.

The March on Washington for Jobs and Freedom
The August 28, 1963, March on Washington for Jobs and Freedom was the emotional high point of the civil rights movement. More than 250,000 African American and white people gathered on the National Mall in Washington, D.C., for a peaceful protest.
An estimated 80 percent of the protesters were black, and 20 percent were white. They gathered in front of the Lincoln Memorial for speeches, songs, and prayers; it was the largest human rights protest in the country's history. Participants gathered not just for civil rights, but also for jobs, justice, and the freedom to pursue happiness for all citizens of the country. The genesis of the march began with A. Philip Randolph, president of the Brotherhood of Sleeping Car Porters labor union and the Negro American Labor Council. Randolph had organized a similar march in 1941. The leaders of six civil rights organizations planned the march: Roy Wilkins, executive secretary of the NAACP; James Farmer, president of the Congress of Racial Equality (CORE); Whitney Young, president of the National Urban League (NUL); John Lewis, president of the Student Nonviolent Coordinating Committee (SNCC); Bayard Rustin, organizer of the first Freedom Ride in 1947; and Martin Luther King, Jr., founder and president of the Southern Christian Leadership Conference (SCLC). This event drew on the groundswell of support for the civil rights movement; just a month before the march, the Rev. Albert Cleage, Jr., organized the Walk to Freedom in Detroit. With over 125,000 participants, it was the largest civil rights gathering in history until it was eclipsed by the March on Washington. The group set goals for the march, which included passage of meaningful civil rights legislation, a speedy end to school segregation, job training and jobs for the unemployed, and a federal law banning discrimination in public and private hiring. The emotional high of the successful March on Washington did not last as families returned home to many corners of the country. On September 15, 1963, the 16th Street Baptist Church in Birmingham, Alabama, was bombed during a church service. The racially motivated violence killed four girls and marked another turning point in the civil rights movement. The loss of the innocent lives of young children signaled to many the depth of hatred among some white Americans. The church was targeted because it was a meeting place for civil rights leaders such as Martin Luther King, Jr., Fred Shuttlesworth, and Ralph Abernathy.
The Civil Rights Act of 1964
Although the March on Washington galvanized public sentiment, it failed to have an impact on congressional votes. The bombing of the 16th Street Baptist Church in Birmingham and other racial violence by white mobs was broadcast to the nation and the world on television. This display of attacks against black and white men, women, and children (some involving police dogs) helped garner widespread support for legislation (except in the south), and prompted President Kennedy to introduce a bill in Congress. However, Kennedy was assassinated on November 22, 1963. The martyred leader evoked public sympathy that aided his successor, President Johnson, in pushing through the civil rights bill that fundamentally changed America. A seasoned politician from Texas and longtime Senate leader, Lyndon Baines Johnson negotiated in the backrooms of power, and succeeded in getting Congress to approve the most far-reaching civil rights legislation since Reconstruction. The cornerstone of the Civil Rights Act, Title VII, outlawed discrimination in employment on the basis of race, religion, national origin, or sex. The law also guaranteed equal access to public accommodations and schools. Most importantly, the law granted new enforcement powers to the U.S. Attorney General and established the Equal Employment Opportunity Commission to fight job discrimination.

The Voting Rights Act of 1965
Congress passed the Voting Rights Act on August 6, 1965, ending the disenfranchisement of African Americans. The legislation outlawed literacy tests and other measures used to prevent blacks from registering to vote. It also authorized the Attorney General to send federal examiners to register voters in any area where less than 50 percent of the voting-age population was registered. This act, coupled with the Twenty-Fourth Amendment, which outlawed the poll tax in 1964, removed many barriers that had prevented blacks from registering and voting. Before the 1965 voting rights law, only 5 percent of blacks voted in Mississippi. By 1965, the rights that African Americans had gained after Emancipation and lost in the 1870s, or had never held in the first place, were enshrined in law.
After waiting almost 100 years following Reconstruction, African Americans embarked on a largely nonviolent movement to force the federal government to recognize their rights. By the end of the 1960s, blacks had achieved a second Reconstruction with a series of legal and legislative victories that made real the promise of the first Reconstruction. Each moment in the history of the civil rights movement involved entire families. It is therefore difficult to conceptualize a movement such as this without thinking of how it affected those associated with an individual such as Martin Luther King, Jr., Rosa Parks, or the children in the Little Rock Nine. Reimagining the civil rights movement as a family affair is thus critical when looking at the key turning points of the 20th century that shaped the United States today.

Ann-Marie Adams
Fairfield University

See Also: African American Families; Brown v. Board of Education; Civil Rights Act of 1964.

Further Readings
Branch, Taylor. Parting the Waters: America in the King Years, 1954–1963. New York: Simon & Schuster, 1988.
Branch, Taylor. Pillar of Fire: America in the King Years, 1963–1965. New York: Simon & Schuster, 1998.
Crawford, Vicki, et al., eds. Women in the Civil Rights Movement: Trailblazers and Torchbearers, 1941–1965. Bloomington: Indiana University Press, 1990.
Joseph, Peniel E. Waiting 'Til the Midnight Hour: A Narrative History of the Black Power Movement. New York: Henry Holt, 2006.
Klarman, Michael J. "How Brown v. Board Changed Race Relations: The Backlash Thesis." Journal of American History, v.81 (1994).
Payne, Charles. I've Got the Light of Freedom: The Organizing Tradition of the Mississippi Freedom Struggle. Berkeley: University of California Press, 2007.

Civil Unions

Diverse institutions and people use the term civil union to describe a process that two individuals undertake when they seek to formalize their relationship in legal and cultural terms. By entering into a civil union, individuals are also celebrating their commitment to and love for one another in a way that has some parity with heterosexual marriage. However, not everyone sees civil unions as having equal footing with marriage, and this particular issue has become a source of debate among multiple groups. In recent decades, the term civil union has been employed in reference to same-sex couples who wish to solemnize their relationships in the eyes of families, friends, and institutions such as the state. Scholars can trace the civil union concept back to 19th-century England, where it similarly suggested that two people were joined together by an official. Through civil unions in the United States today, both partners receive a greater number of rights and responsibilities in public contexts and in each other's lives; however, the exact set of options often varies depending on the jurisdiction that confers the civil union. This variability has led critics to see civil unions as occupying a peculiar position within social and familial contexts. Many view civil unions as a modern-day form of "separate but equal": although the arrangement mirrors the institution of marriage, it is also viewed as confusing and as inferior to the suite of options that come with marriage.

The Defense of Marriage Act
Lawmakers in the United States developed the concept of civil unions because for several years federal law prevented officials from recognizing or facilitating marriages between same-sex couples. In particular, a 1996 federal law called the Defense of Marriage Act (DOMA) contained a directive that prohibited same-sex couples from officially receiving the benefits associated with marriage. A sizeable number of critics believe that the creation of this law was driven by antigay and homophobic sentiment, even though some conservative commentators disagree with that characterization. As a result, the debate continues about the way in which DOMA came to exist, and its relevance to contemporary American life. Regardless of its origin, DOMA has significantly affected the lives of many people.
In particular, it adversely affected the life of an American woman named Edith Windsor, who was forced to pay more than $300,000 to the federal government when her wife, Thea, died in 2009. At that time, the federal tax code did not allow the widow of a same-sex spouse to inherit the deceased person's property tax free. For heterosexuals, the widow or widower of the deceased inherits a spouse's property without paying additional taxes. Dissatisfied with her predicament, Windsor sued the U.S. government, and ultimately her case was argued before the U.S. Supreme Court. In 2013, the Supreme Court struck down section three of DOMA, which had reserved the institution of marriage, and the rights and responsibilities of marriage, exclusively for opposite-sex (heterosexual) couples, ruling that the provision violated the equal protection guarantee of the U.S. Constitution. While this set of events may appear tangential to the debate over civil unions, these developments actually played a meaningful role in subsequent discussions of civil unions because recent dialogues about same-sex unions have highlighted the unique and unequal status that civil unions continue to create for those who enter into them. By dismantling DOMA, the Supreme Court stated that same-sex couples and spouses like Windsor deserve equal protection under the law, and hence the federal government must recognize same-sex marriages as having a valid legal status equal to heterosexual marriages. Nonetheless, this court ruling did not require all 50 states to offer marriage or civil unions to lesbian, gay, bisexual, or transgender (LGBT) couples. Instead, the ruling required the federal government to change its policies, such as taxation law and Social Security codes, so that same-sex couples who are married can receive most of the same rights and privileges that heterosexual married couples typically enjoy. Nevertheless, these developments have not substantively helped those who have entered into a civil union and self-identify as LGBT. The Supreme Court's ruling has not elevated civil unions to the same legal status that matrimony holds in American culture and society; hence, couples who have civil unions continue to face much uncertainty and unequal treatment.
While civil unions may appear less desirable than marriage, many critics still contend that they provide a productive first foothold for advancing the cause of equality and freedom for LGBT people. Civil union laws are also sometimes seen as a compromise between granting LGBT people complete equality and denying them access to the benefits that come with marriage. Still, some critics who support LGBT people see the usage of civil union law as an action that consigns same-sex couples to a lesser and "different" position in American society. Moreover, in several of the states that originally passed civil union laws, there has been a backlash in which independents, liberals, and progressives have fought to change the civil union laws so that LGBT people may access the full benefits that come with marriage. It appears that the dialogue about civil unions will remain entwined with debates over same-sex marriage.

Historical Contexts
The term civil union first entered the mainstream lexicon in 2000, when the state of Vermont legalized civil unions, thus allowing lesbian and gay couples to have some legal benefits. This was not the first instance of a legislative body passing legislation that fostered such unions. In October 1989, Denmark became the first nation to pass legislation that enabled same-sex couples to officially solemnize their love and commitment to one another. While Denmark did not grant same-sex couples the right to marry until 2012, the country remained an example of social change that provided a road map for other countries dealing with similar issues. It was Vermont that led the way in the United States, creating a veritable blueprint for other states to follow in developing alternatives to the institution of traditional marriage. To some degree, experts contend that Vermont's lawmakers energized those groups and individuals who created what some today call the "marriage equality movement." At the time, civil unions appeared to be a politically radical step, but as time passed, civil unions became an almost conservative option because they prevent what some have called the "redefinition of marriage." It is worth noting, though, that Vermont's civil unions eventually became so undesirable to LGBT and heterosexual people that steps were taken to replace them, and in 2009 Vermont lawmakers implemented a law that legalized same-sex marriage.
While Vermont was the first state to legalize civil unions, several others have instituted them since the early 2000s, including Colorado, Connecticut, Delaware, Hawai‘i, Illinois, New Hampshire, New Jersey, and Rhode Island. Other states enacted laws that legalized "domestic partnerships," which mirror civil unions and matrimony in several ways, although their parallels vary with each specific piece of legislation. States that passed domestic partnership laws include California, the District of Columbia, Hawai‘i, Nevada, Oregon, Washington, and Wisconsin. Furthermore, these laws continue to change as public opinion changes, and thus it is likely that these states may offer new options to same-sex couples in the near future. In 2014, more than 30 states or territories within the United States neither recognized nor offered legal rights to same-sex couples. Most of these "nonrecognition states" were in the Midwest and south. Numerous LGBT organizations are mounting initiatives and lawsuits to repeal antigay amendments and statutes that prevent LGBT individuals from acquiring a legal status on par with married couples. However, there are numerous gay, lesbian, bisexual, and transgender people who have not wanted to pursue the option of marriage because of a particular distaste for the institution. Many same-sex couples express concern that marriage is merely a way of conforming; likewise, some critics believe that if LGBT people embrace marriage, then they will lose a significant part of their uniqueness.

A man protests for the legalization of same-sex marriage at the Minnesota senate chambers. The Minnesota Legislature passed a same-sex marriage bill in May 2013.




Further Implications

Through engaging in a civil union, same-sex couples may avoid the heteronormative and sexist constraints that some critics believe have accompanied the patriarchal institution of marriage for centuries. Additionally, civil unions may bestow on same-sex couples many of the same responsibilities and privileges that matrimony offers, but as most laws currently stand, civil unions do not give them all of the same legal options as heterosexuals who enter so-called traditional marriage. Still, civil unions provide some financial and legal advantages for these individuals and their families; the exact advantages and benefits depend on what the state government delineates, and on whether the partners in the civil union are employed by a company that offers greater benefits to such partners. Moreover, the federal government will not recognize a civil union as a form of marriage because it is codified as a distinctive relational status that is different from marriage; hence, same-sex couples receive no federal benefits by entering into a civil union. Although a civil union brings no federal assistance, the arrangement nonetheless has some affective and social advantages. Some critics believe that civil unions may strengthen families, bolstering their unity and well-being. Civil unions are thought to provide a more stable environment for rearing children, though same-sex couples who choose not to enter into a civil union may argue the opposite: that no civil union or matrimony is necessary for bringing up children in a healthy, positive way. In this way, critics have questioned how a civil union may affect the emotional and psychological dynamics of families. For some of these critics, a main question is: Does the separate legal status of civil unions adversely affect how children and families see themselves and interact with the world? In other words, can the status of civil unions inhibit social, emotional, and/or psychological development and well-being? Civil unions are a relatively new legal status, and further study is necessary. Already, critics have claimed that same-sex couples experience more challenges and greater uncertainty because employers and institutions do not understand the concept of civil unions, and thus there has been a call for lawmakers to pass legislation that will enable same-sex


couples to be married because the institution is more easily understood and commonly used. However, only 19 states currently offer the option of marriage to same-sex couples, creating an uneven legal landscape.

Edward Chamberlain
University of Washington Tacoma

See Also: Defense of Marriage Act; Domestic Partner Benefits; Gay and Lesbian Marriage Laws; Persons of Opposite Sex Sharing Living Quarters; Same-Sex Marriage.

Further Readings
Badgett, M. V. Lee. When Gay People Get Married: What Happens When Societies Legalize Same Sex Marriage. New York: New York University Press, 2010.
Goldberg-Hiller, Jonathan. The Limits to Union: Same Sex Marriage and the Politics of Civil Rights. Ann Arbor: University of Michigan Press, 2007.
Johnson, Greg. “Civil Unions: A Reappraisal.” In Defending Same-Sex Marriage: “Separate But Equal” No More, A Guide to the Legal Status of Same-Sex Marriage, Civil Unions and Other Partnerships. Mark Strasser, ed. Westport, CT: Praeger, 2007.
Stein, Edward. “Marriage, Same-Sex and Domestic Partnerships.” In LGBTQ America Today: An Encyclopedia, John C. Hawley, ed. Westport, CT: Greenwood Press, 2008.

Cocooning

Cocooning refers to the intentional effort to withdraw from the larger society in order to create and maintain a certain level of security and comfort. This shift is sometimes motivated by a desire to separate from the wider culture, and also by a need to be alone. As a metaphor, the cocoon references the protective covering spun by the larvae of moths and other insects during their transformation into adults. The rise of social cocooning, beginning in the 1980s, greatly impacted the social life of families. Where once families lived in more modest-sized homes and spent time engaged in community endeavors such as church, PTA, and school functions, sports,



fraternal organizations, service organizations, and casual neighborhood gatherings, many families now have more spacious homes with enough amenities that they do not need to go to the public park, the local swimming pool, or even the local movie theater; cable television and the Internet have made the home a pleasant place to retreat to after long hours spent at work and commuting. For some, this transition was a conscious effort to increase their level of perceived safety and security in response to the instability of society at large. Parents of adopted children have openly embraced and popularized the idea of cocooning as a strategic way to create a safe and secure home in which to raise a newly adopted child. The term cocooning has been trademarked by psychologist Patti Zordich, and is the focus of the resources she develops for adoptive families. Regardless of the family type, cocooning is an approach to raising a family that privileges privacy and safety over public engagement. In the closing decades of the 20th century, futurist Faith Popcorn introduced the concept of cocooning as a way to label the shift away from the post–World War II emphasis on public life, toward the new trend of staying at home. As this trend in family life continued to develop, three specific categories of cocooning were identified: armored cocoons, wandering cocoons, and socialized cocoons. Armored cocoons are families concerned with safety and security, and are likely to own firearms, home security systems, and firewall protection for home computers. Wandering cocoons are those who attempt to simultaneously stay a part of the larger society while being intentionally withdrawn. For example, a DVD player in a car or van allows a family to remain mobile while eliminating the need to interact with or even notice the world around them.
Wandering cocoons are also illustrated by those who run in public parks but listen to music on their mobile devices; they are part of the wider society, but they are intentionally creating a barrier to reduce human interaction. Socialized cocoons are illustrated by the increase in events such as Super Bowl parties in which many people are invited into the host’s home, instead of gathering in public at a local bar or restaurant to socialize with others. Many cultural milestones and events contributed to the rise of cocooning. However, the two most commonly cited are the emergence of new technologies

and major public catastrophes such as the events of September 11, 2001, and mass shootings such as that which occurred in an Aurora, Colorado, movie theater in July 2012. Technological inventions that facilitated the private cocoon included the Sony Walkman in the 1980s, the personal computer in the 1990s, the Internet, large-screen high-definition televisions, mobile electronic devices, and e-commerce. Each of these advancements allowed individuals to withdraw from the wider society to various degrees. For example, a brand-new 70-inch high-definition television with surround sound provides a very similar experience to going to a movie theater. With so many options on cable television, satellite television, and online digital media services, a consumer can almost always create a cozy, satisfying experience without leaving home. Movie theater–style microwave popcorn and theater-style seating can complete the experience. In addition to these technological advances, the rise of the Internet, laptop computers, and smartphones has allowed many parents to work at home. The increase in time spent at one’s residence contributes to increased feelings of safety and security. Technological advances impact the workday and leisure time. For example, prior to the development of e-readers with Internet capability, individuals were required to go to a public library or bookstore to acquire a new item to read. Now, one can purchase or borrow a book from the privacy of his or her home and never enter into this type of public space. Moreover, with the rise of online stores, most forms of shopping (and banking and bill paying) can be performed online without the need to leave one’s house. The rise of social networking sites such as Facebook allows for more interconnectivity and decreases the need for face-to-face interactions. The events of September 11, 2001, provided an opportunity for American families to reconsider how they spent their time and money.
One outcome of this re-evaluation was an increase in the time that families stayed at home as a single unit and entertained other families. This new environment for bonding helped advance the trend for larger kitchens and open floor plans in new home design. Home theater systems were incorporated into existing homes as a way to increase comfort and retreat from public places. Additionally, the seemingly increasing frequency of public mass shootings




has also contributed to the feeling that one’s home is the best place to experience security and comfort.

Brent C. Sleasman
Gannon University

See Also: Adoption, International; Assimilation; Technology; Television.

Further Readings
Popcorn, Faith. The Popcorn Report: Faith Popcorn on the Future of Your Company, Your World, Your Life. New York: HarperCollins, 1991.
Snider, Mike. “Cocooning: It’s Back and Thanks to Tech, It’s Bigger.” USA Today, February 18, 2013. http://www.usatoday.com/story/tech/personal/2013/02/15/internet-tv-super-cocoons/1880473 (Accessed December 2013).
Zordich, Patti M. “Cocooning With Your Newly Adopted Child” (2012). http://www.lianalowenstein.com/articles/ParentAdoptedChild.pdf (Accessed December 2013).

Cohabitation

Cohabitation, or living together with a romantic partner outside of marriage, is an increasingly common living arrangement in the United States. In the 1940s, less than 1 percent of first marriages were preceded by cohabitation, and the practice was not undertaken in any significant numbers until the late 1960s and 1970s. By 2010, over two-thirds of women marrying for the first time lived with their husbands before marriage, and almost 60 percent of women aged 19 to 44 had at least one prior cohabitation experience. Around 60 percent of children born outside of marriage are born to cohabiting parents, and around half of all children spend some of their life living with a parent who is cohabiting with a partner outside of marriage. There are many reasons that cohabitation rates have risen since the 1960s, when reports about a couple living together before marriage made national news. These include the availability of birth control, the sexual revolution, and the rise of women’s education and labor-force participation, which increased women’s independence and


reduced the economic benefits of marriage. Other factors include an increase in the average age at marriage, the rising cost of achieving a middle-class lifestyle, the increase in divorce rates that made young adults more cautious about entering marriage without prior cohabitation, and a general shift in attitudes about the social acceptability of living with a partner and having children outside of a married relationship.

Trends in Cohabitation

Rates of cohabitation have been rising since the 1960s. In the late 1970s, surveys showed that around 3 percent of women and 5 percent of men were then living in a cohabiting relationship; by the late 1990s, these rates had risen to 9 percent of men and 12 percent of women. By 2011, approximately 7.6 million couples were cohabiting, which is double the number of couples that were living together in 2000, five times the number of cohabiting couples in 1980, and 17 times the number in 1960. These numbers only represent the number of people in a cohabiting relationship in a given year. The number of women who had ever cohabited rose from around one-third in the late 1980s to around 45 percent in the mid-1990s, and by the mid-2000s around 58 percent of women aged 19 to 44 had cohabited at least once. During the 1940s and 1950s, less than 3 percent of first marriages followed premarital cohabitation. These rates began to rise in the 1960s, and in the late 1960s, around 7 percent of first marriages began with premarital cohabitation. By the late 1970s, these rates had risen to around 30 percent; by the 1980s, between 40 and 50 percent of first marriages began in premarital cohabitation; and rates rose to between 50 and 60 percent in the 1990s, and to between 60 and 70 percent between 2000 and 2010. Rates of cohabitation were higher among those who had been previously married, compared with those who had never been married.
During this period, the length of time that couples lived together before marriage has also grown, from around 10 months in the late 1960s to almost three years by the late 2000s. Around 90 percent of cohabitations end by five years’ duration, whereas the remaining 10 percent are long-term cohabitations, and these rates have remained stable over time. Although the majority of those living together in the 1960s and 1970s were young, cohabitation rates have increased among those aged 35 and



over, who now make up almost half of cohabiters. Among cohabiters in the early 1970s, up to one-fourth were enrolled in college at the time, and marriages that had started in cohabitation were more likely than marriages that had not to include a wife with a college degree. Since then, the rates of cohabitation among the college educated have declined, whereas rates among those without a college education have increased. By the late 2000s, around 45 percent of women who married their first husband without premarital cohabitation had a college degree, compared to 30 percent of women who lived with their first husband before marriage. Rates of cohabitation have been consistently higher among African American women than white women, higher among the less than the more religious, and higher among those who were raised by single or divorced parents than those raised with married parents. Cohabitation has frequently been associated with higher rates of divorce after marriage, and until recently, that was statistically true. That is not to say that cohabitation causes divorce; the association is more likely related to the type of people who cohabit before marriage. They tend to be less religious, have lower levels of education, are less likely to come from an intact family, and in some cases are more uncertain about their partner before entering marriage. All these factors are related to an increased risk of divorce. Furthermore, among couples that married in the 2000s, the divorce rate of premarital cohabiters was not substantially different from couples that did not cohabit prior to marriage, most likely because cohabitation has become so widespread and socially acceptable.

Childbearing and Cohabitation

In the late 1970s, less than one-third of cohabiting families included children under the age of 18; by the late 1990s and 2000s, that rate had increased to around two-fifths, and almost half of those children were the biological children of both partners. This increase reflects both an increase in nonmarital childbearing among mothers living with their partner and the popularity of cohabitation among divorced parents who may have children from a prior marriage. In the mid-1980s, around one-fifth of children were born to unmarried parents, and less than one-third of those were born to cohabiting parents. By the mid-1990s, around one-third of children

were born outside of marriage, and of those, around 40 percent were born to cohabiting couples. By the late 2000s, around 40 percent of children were born to unmarried parents, and nearly 60 percent of those were born to cohabiting parents. Rates from the 1990s suggest that between 40 and 50 percent of all children will spend some part of their life living with a cohabiting couple before they turn 18, regardless of the circumstances of their birth.

Prior to Cohabitation: Common Law Marriages

In the 19th century, common law marriages were recognized in 37 states in the United States. These were relationships that were never formalized in front of an officiant, but were considered marital relationships because the couple was living together as romantic partners. In many cases, these relationships were not brought to the attention of the state unless one partner died or left the relationship, and the other partner came before the state courts requesting benefits that would have been given to them had their relationship been a formal marriage. However, by the mid-20th century, following efforts by religious leaders and social reformers, the majority of states had abolished common law marriage, and therefore couples romantically living together were no longer recognized as married.

The LeClair Affair

In March 1968, the New York Times reported on several unmarried couples living together in New York City; one of these couples was Linda LeClair, a Barnard College student, and her boyfriend, Columbia University student Peter Behr. New York State had abolished common law marriages several decades earlier, in 1933. After the article was published, Barnard College found LeClair guilty of violating university rules by living with a partner outside of marriage, and censured her by preventing her from using the university cafeteria and participating in social events. Eventually, she was expelled from the college.
The New York Times article sparked a widespread debate, and many national news articles reported on the situation, which came to be known as the LeClair Affair. The LeClair Affair first spread public awareness of the phenomenon of young couples living together outside of marriage.



Factors Leading to Increasing Rates of Cohabitation

The increase in control over childbearing was one major factor leading to the increased rates of cohabitation. The simple and effective birth control pill was first approved in 1960 by the Food and Drug Administration (FDA) for use in preventing pregnancy, but was not widely available to single women until the late 1960s and early 1970s. The 1973 Supreme Court decision Roe v. Wade, which legalized abortion in the United States, further increased the ability of women to control when they would have a child. This increasing control over childbearing contributed to the rise of premarital cohabitation in a number of ways. First, women could engage in premarital sex without fear of unwanted pregnancy, which led to an increase in rates of premarital sex. This, along with the general liberalization of attitudes and norms regarding sexuality outside of marriage (known as the sexual revolution), led to an increase in cohabitation because women no longer feared becoming unwed mothers. Increased cohabitation was also prompted by women who were invested in their education and careers. Until the 1964 Civil Rights Act was passed, it was legal in the United States to discriminate against women in employment, and until the 1978 Pregnancy Discrimination Act was passed, it was also legal to fire women for becoming pregnant. As a result, before that time, many women did not invest in careers that required high levels of education and a long training period because they could be (and many were) fired if they became pregnant. As women gained more control over childbearing decisions, they increasingly invested in their careers, and in the 1960s and 1970s, women’s education and labor-force participation rates skyrocketed. Women spent an extended amount of time obtaining higher education and establishing their careers, thus delaying marriage.
However, they were not willing to give up romantic relationships, and thus increasingly formed marriage-like partnerships in the form of cohabitation. At the same time that women were increasingly entering well-paying careers, men’s wages stagnated; the average wage for working men has not substantially risen since the 1960s, after adjusting for inflation. This led to a shift in the relative earnings power of men and women. With men’s wages stagnating and women’s career options and wages



increasing, the former role of marriage as a form of economic security for women declined. As the economic function of marriage was diminished, the impetus to marry also diminished. However, as marriage became less of an economic necessity, its symbolic function increased. Marriage became a symbol of achieving a certain status in society, and was increasingly delayed by young adults until they obtained other markers of a middle-class lifestyle, including financial stability, homeownership, and enough savings to pay for a nice wedding. Since the turn of the 21st century, however, as the costs of homeownership, a college degree, and health insurance have risen, it has been increasingly difficult to obtain these markers of a middle-class lifestyle. For young adults, especially those with lower levels of education, cohabitation allowed couples to save money by living together while building savings that would allow them to marry and achieve economic security. This led naturally to an increased age at marriage. Because of these factors, it is not surprising that according to the U.S. Census Bureau, the median age at marriage (the age at which half of all young adults are married) rose from 20 years old among women and 23 years old among men in 1960, to 26 years old among women and 28 years old among men by 2010. Although young adults have put off marriage, they increasingly form marriage-like relationships at younger ages in the form of cohabitation. Finally, the rise in divorce rates has contributed to the rise in cohabitation rates. Prior to the 1970s, one partner had to be proven “at fault” for a number of transgressions, such as adultery, abuse, or abandonment, in order for a couple to divorce. Throughout the 1970s, states legalized the no-fault divorce, which allowed couples to divorce for irreconcilable differences without one partner being proven guilty of a misdeed.
At the same time, women’s increasing labor-force participation rates led to a decrease in the economic necessity of marriage. As a result, divorce rates skyrocketed in the 1970s and 1980s, and have remained high since then, hovering near 50 percent. The high divorce rate has led to increasing uncertainty about marriage and a reluctance among young adults to enter into it without first experiencing it in trial form through cohabitation. The high divorce rate has also increased the rate of cohabitation by



increasing the number of individuals who enter into cohabiting relationships with new partners after a divorce.

Attitudes Toward Cohabitation and Other Family Issues Over Time

An important factor leading to the rise in cohabitation has been the shift in societal attitudes toward it and toward a number of other aspects of family life. Shifting norms, along with the increase in secularization of society and the corresponding weakening of religious constraints on behavior, have lessened the disapproval of premarital sex, divorce, and living together without being married, all of which contributed to a rise in cohabitation. An article by Arland Thornton and Linda Young-DeMarco in the Journal of Marriage and Family examined changes in attitudes from the 1960s through the 1990s among those in the United States. They found an increase in the number of negative attitudes toward marriage, and increasing concern about marriage being restrictive. They also found an increase in the ideal age at marriage and increasing acceptance of divorce. At the same time, the percentage who believed that premarital sex was wrong significantly declined, while the percentage who thought marriage was necessary for childbearing also decreased. Thornton and Young-DeMarco also found that in the 1970s, attitudes toward cohabitation were already very accepting, and these attitudes became even more accepting later on. During the 1980s and 1990s, the number of high school seniors agreeing with the statement that “it is usually a good idea for a couple to live together before getting married in order to find out whether they really get along” showed a dramatic increase, from around 40 percent of women and 53 percent of men in the mid-1990s to almost 60 percent of women and 70 percent of men in the late 1990s. This question suggests that not only is cohabitation increasingly acceptable among young adults, but it is now seen by the majority of them as a preferable arrangement prior to marriage.
These rates were likely driven by, and helped to drive, the increasingly visible number of young adults living in cohabiting relationships.

Arielle Kuperberg
University of North Carolina at Greensboro

See Also: Abortion; Birth Control Pills; Childhood in America; Civil Rights Act (1964); Common Law Marriage; Contraception and the Sexual Revolution; Demographic Changes: Age at First Marriage; Demographic Changes: Cohabitation Rates; Demographic Changes: Divorce Rates; Divorce and Separation; No Fault Divorce; Weddings.

Further Readings
Bumpass, Larry and Hsien-Hen Lu. “Trends in Cohabitation and Implications for Children’s Family Contexts in the United States.” Population Studies, v.54 (2000).
Kroeger, Rhiannon and Pamela J. Smock. “Cohabitation: An Assessment of Recent Research, Findings, and Implications.” In The Wiley-Blackwell Companion to the Sociology of Families, Judith K. Treas, Jacqueline Scott, and Martin Richards, eds. Hoboken, NJ: Wiley, 2014.
Smock, Pamela. “The Wax and Wane of Marriage: Prospects for Marriage in the 21st Century.” Journal of Marriage and Family, v.66 (2004).
Thornton, Arland and Linda Young-DeMarco. “Four Decades of Trends in Attitudes Toward Family Issues in the United States: The 1960s Through the 1990s.” Journal of Marriage and Family, v.63 (2001).

Collectivism

Collectivism is a cultural value of prioritizing group interests over individual interests and valuing cohesion within social groups. Native American families and immigrant families from collectivist societies (e.g., China, India, Japan, Korea, Vietnam, and Latin America) often have collectivist beliefs and practices that conflict with the individualistic beliefs and practices of U.S. families who descended from western Europe. External threats to survival foster collectivist beliefs, which societal institutions reproduce. These beliefs encourage collectivist families to engage in collectivist processes (i.e., group goals, concerns about public image, consensus-seeking, and valued feedback from family) and create collectivist structures (such as concentric circles of family members and hierarchical societies) that differ from those of individualistic U.S. families.



Sources of Collectivism

Faced with external threats to survival (e.g., war, famine, or disease), individuals experience uncertainty, insecurity, and fear. Seeking economic and/or physical security, they try to maximize predictability; they draw closer to family and friends likely to help them, and distance themselves from strangers and outsiders who might harm them. By doing so, they sharpen the boundaries between insiders (the in-group) and outsiders (the out-group), facilitating scapegoating and xenophobia. Moreover, they tend to defer to an in-group leader’s authority, which offers security, absolute rules, or familiar norms. Collectivist societies reproduce collectivist values through family interactions and national institutions. Families with collectivist values teach their children to prioritize family needs over those of acquaintances and strangers. When necessary, parents act in the interest of their family, rather than their individual interests (e.g., inviting a cousin they dislike to a birthday party), thereby exemplifying collectivism for their children. Explanations of its importance for family harmony and cohesion further reinforce collectivism’s value. Collectivist nations also reproduce collectivist values through economic, religious, educational, and cultural institutions. Collectivist societies often reward family members for an individual’s achievement. For 1,300 years, China’s civil-service exam system not only selected government officials, but also gave money, prestige, power, and fame to officials’ extended families, thereby supporting collectivist beliefs, values, and norms. Similarly, religious institutions such as churches value the congregation’s interests over individual interests (e.g., by suggesting an annual donation of 10 percent of income [tithe] without allowing members to provide input on budget decisions). Schools can also reinforce collectivist values through their daily practices (e.g., South Korean high school students stay in a classroom with the same classmates for the entire school year, thereby intensifying their relationships). Last, museums, films, television shows, newspapers, and other media in these nations often reflect collectivist values and shower them upon their citizens daily. Thus, belief in collectivism is more closely tied to the nation than to employer organization or individual personality.
Schools can also reinforce collectivist values through their daily practices (e.g., South Korean high school students stay in a classroom with the same classmates for the entire school year, thereby intensifying their relationships). Last, museums, films, television shows, newspapers, and other media in these nations often reflect collectivist values and shower them upon their citizens daily. Thus, belief in collectivism is more closely tied to the nation than to employer organization or individual personality.



Collectivist Processes

Unlike individualists, collectivists often recognize group goals, attend to public self-image (face), seek consensus, and seek group-member feedback. Socialized to attend to in-group members, a collectivist often recognizes both their needs and his or her personal needs. Considering both sets of needs helps them express shared group goals. Likewise, sensitivity to others is often accompanied by sensitivity to both one’s face and others’ faces. Hence, collectivists try to promote the self-esteem (positive face) of both themselves and others, sometimes at the expense of their freedom or negative face (e.g., eating a homemade dish they dislike). Attending to the goals and face of all group members, collectivists prefer consensus over individual choice (e.g., sharing entrees over separate entrees). If consensus seems unlikely, collectivists often change topics to avoid conflict, which could harm someone’s face, especially in public. Hence, collectivists value stability and social conformity over change and diversity. Collectivists often seek in-group member feedback, rather than relying on their own judgment. Hence, they often evaluate their behaviors with an external standard (“shame” society), rather than an internal one (“guilt” society).

Collectivist Structures in the Family

Because external threats foster concentric circles of trust, collectivist societies often include concentric circular structures (immediate family and extended family). The immediate family, composed of parents and children, is typically the innermost circle. Then, rings extend outward to include aunts and uncles, first cousins, and second cousins. At the societal level, the overall structure is typically hierarchical (vertical collectivism) rather than egalitarian (horizontal collectivism). Unlike most individualistic U.S. families, collectivist families are often large, pursue family interests, make major decisions together, and accept consequences together.
Facing threats to survival, collectivist parents typically have many children so that some will survive to support them after they retire. Compared to smaller families, larger families have many family members with a wider variety of skills (human capital) and a broader network of connections (social capital) with more resources to help them survive.



To protect and enhance family resources and reputation, collectivist family members pursue family interests, unlike most individualistic U.S. families that prioritize individual interests. When major questions affect the family’s overall resources (Whom to marry? Which university to attend? Where to work?), all family members participate in the decision; the individual does not decide alone. Moreover, family members advocate decisions and actions that benefit the family as a whole, rather than the individual. As a result, family members often act in their family’s interest, rather than their own. For example, an older sister with a profitable business pays for her brother’s university education, rather than expanding her company. Such extensive family support and family reminders of it both motivate and pressure the younger brother to study hard, graduate, and find a suitable job. His success or failure affects both the family’s resources and its reputation. Unlike most individualistic U.S. families, extended family members in collectivist families often live nearby, participate in major decisions, and share resources. Living nearby, extended family members often regularly participate in family gatherings, know family members well, and have a greater stake in their welfare. As a result, extended family members often voice their views regarding major decisions, such as whether to hire a relative. Furthermore, if immediate family members do not have sufficient money for a child to attend a prestigious university, extended family members will often contribute if they believe that a cousin’s success will enhance their family resources or reputation. Because extended family resources can compensate for inadequate nuclear family resources, the resources of the extended family are often decisive when making major decisions. While collectivist groups can be horizontal or vertical, most collectivist clans are vertical.
In small, horizontal collectivist groups, individuals are treated equally and make decentralized decisions democratically and locally (e.g., U.S. bowling teams). As group size increases, shared decision making becomes difficult. Thus, large clans are often hierarchical and centralized (vertical collectivist), relying on a leader to act on behalf of the group’s interests.

Ming Ming Chiu
State University of New York, Buffalo
Gaowei Chen
University of Hong Kong

See Also: Asian American Families; Chinese Immigrant Families; Immigrant Families; Indian (Asian) Immigrant Families; Individualism; Japanese Immigrant Families; Korean Immigrant Families; Native American Families; Vietnamese Immigrant Families.

Further Readings
Chiu, Ming Ming. “Families, Economies, Cultures and Science Achievement in 41 Countries.” Journal of Family Psychology, v.21 (2007).
House, Robert J., Paul J. Hanges, Mansour Javidan, Peter W. Dorfman, and Vipin Gupta. Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, CA: Sage, 2004.
Inglehart, Ronald, and Wayne E. Baker. “Modernization, Cultural Change, and the Persistence of Traditional Values.” American Sociological Review, v.65 (2000).

Comic Strips

Sequences of pictures have been used to tell stories for as long as humans have been drawing pictures. With the advent of modern printing techniques and the rise of newspaper syndicates, the comic strip quickly became a popular feature in the newly emerging mass media of the late 19th century. Since then, the comics pages of American newspapers have told stories in a dozen genres, ranging from single-panel gag comics (such as The Yellow Kid, the first American comic, premiering in 1895) to long-running soap opera and adventure strips. The history of the comic strip has been intertwined with those of animation, comic books, and the pulps, and its popularity similarly dimmed along with those media when television became ubiquitous.

One of the first significant comics was The Katzenjammer Kids, created by German immigrant Rudolph Dirks, which began running in the Sunday supplement of the New York Journal in 1897. Dirks invented many of the tropes common to sequential art: the use of speech balloons for dialogue and thought balloons for interior monologue, representing pain with little stars drawn around a body part, and drawing a saw through a log to represent the sound of snoring. The New York Journal was a Hearst paper, run by media magnate William Randolph Hearst, and the comic strips that were first



popular came from either Hearst or Pulitzer papers because of their wide distribution networks. The comics page offered entertainment in small doses on a daily basis, and included both serial and episodic strips across a number of genres. In the early days, adventure strips were among the most popular.

Sunday strips have traditionally been larger. While today that often means only double-sized, in the early 20th century, a single Sunday strip could occupy an entire page, and the comics section was as thick as the sports section. Over time, comic strips have been reduced in size, both in terms of their number of panels and the overall size of the art. No full-page comic strip has been published in a major U.S. newspaper since 1971.

Little Nemo in Slumberland by Winsor McCay ran from 1905 to 1926, and combined humor, surrealism, and adventure in full-page strips that detailed the elaborate dreams of a young boy who would wake up at the end of each page. The full page gave McCay room to experiment with the layout and pacing of the strip to a degree that few cartoonists have since enjoyed. Though the strip remains a cult classic nearly a century after its cancellation, attempts to revive or adapt it have largely been unsuccessful.

Many of the early classic strips were adventure strips that focused heavily on family issues. Little Orphan Annie, which ran from 1924 to 2010, followed the adventures of the ageless young orphan and her makeshift family, consisting of her puppy Sandy, her benefactor Oliver Warbucks (whose wife takes a dislike to Annie, and prevents him from adopting her), Warbucks’s right-hand man Punjab, and his friend the mystical Mister Am. Thimble Theater was created by E. C. Segar and launched in 1919 but eventually changed its name to Popeye, after the character was introduced in 1929 and became the strip’s protagonist. Like Annie, Popeye assembled a family for himself, consisting of his girlfriend Olive Oyl and foundling Swee’Pea.
The comic strip was more complex than the television and film cartoons based on it, and featured only one appearance by Bluto, the main antagonist of the cartoons; later arcs in the strip by subsequent writers more closely resembled the cartoon and television versions of the character. After the Katzenjammer Kids, the second longest-running strip in the United States is Gasoline Alley, which introduced characters who aged in real


time to American comics. Originally created by Frank King in 1918, and still running in 2014, the strip has been authored by a succession of writers as the characters have grown up and generations have passed. Bachelor Walt Wallet was originally the focus of the strip, and found a newborn baby boy, Skeezix, on his doorstep less than three years into the strip. Over the course of the strip, Skeezix grew up, married, had children and a midlife crisis, and Walt is now over a century old. Real-time aging was key to For Better or For Worse, which Lynn Johnston launched in 1979 and which has been in reruns since 2008. Centered around the marriage of Elly and John Patterson and their children Michael and Elizabeth, the strip followed the characters through the children growing up, the birth of an unexpected third child, April, the death of family dog Farley, and the marriages of Michael and Elizabeth. When the strip ended, April was leaving for college, and Michael was the father to two children roughly the same ages that he and Elizabeth had been when the strip began. The supporting cast had repeatedly changed over the course of the strip, in response to the lives of the main characters. The Doonesbury strip by Garry Trudeau, which launched in 1970 and is still in print, did not originally transpire in real time. Beginning with the main characters as students at progressive Walden College, it remained in the college setting for 12 years, until Trudeau took a 22-month hiatus from 1983 to 1984, during which he worked on a Broadway musical based on the strip. The musical concerned their graduation from college; when the strip resumed, the characters were graduates, and as they entered the “real world” they began to age in real time. Protagonist Mike Doonesbury has since married twice, the second time to Kim, originally introduced as a Vietnam War orphan after the fall of Saigon. Mike’s daughter Alex has grown up, graduated college, earned a Ph.D., and given birth to twins. 
Trudeau began another hiatus in 2013 to work on an original film series for Amazon.com. Over its history of more than 40 years, Doonesbury has addressed not only changes within the family, but also social and political matters, with recurring “meta” touches, such as when Mike officially handed the reins of the protagonist over to Alex.

Other strips have been ageless, or nearly so. Peanuts, created by Charles Schulz, ran from 1950


to 2000, and the characters never aged, with the exception of Linus van Pelt. Introduced as a baby, the younger brother of Lucy, he ages more or less in real time in the 1950s strips, and seems to catch up to his sister and protagonist Charlie Brown. Nevertheless, despite the apparent lack of aging, the Peanuts kids experience a changing world as new technologies are introduced; after the 1960s or so, topical references became less frequent but never entirely disappeared. One of the defining characteristics of Peanuts was its focus on the world of children. Adults, including parents, are frequently talked about—and even spoken to—but they are never seen. In the popular television specials based on the strip (on which Schulz closely worked), the adults never appear on screen, and when they are heard talking, their voices are conveyed as an incomprehensible mumble. If Peanuts has a rival for most loved comic strip of all time, it is Walt Kelly’s Pogo, which premiered in 1948, shortly before Charlie Brown first graced the funny pages, and ended in 1975. Where Peanuts has been hailed for its emotional realism (despite the presence of the very sophisticated and anthropomorphic beagle Snoopy) and Little Nemo for its surrealistic flights of fancy, Pogo was the best of the satires, and—in a genre where words are as important as pictures, but in which there is far less room for them—the best-written strip, incorporating puns, wordplay, malapropisms, and poetry in a dense mishmash that could be overwhelming to the new reader. Pogo starred anthropomorphic animals of the same sort seen in cartoons of the day like Looney Tunes and Mickey Mouse, living in the Okefenokee Swamp along the Georgia–Florida border. It was distinct in its ability to entertain children with its funny animals and goofy characters, while reaching adults with its sharp-eyed political satire. Equally multigenerational in its appeal was Bill Watterson’s Calvin and Hobbes, which ran from 1985 to 1995. 
Six-year-old Calvin became one of the most popular characters in comic strips, along with his stuffed tiger and imaginary friend, Hobbes. Set in the American suburbs, Calvin and Hobbes included familiar sitcom tropes like the father who is often a stern disciplinarian, but also explains the mysteries of the world (with completely fictitious explanations), the young girl antagonist, and the long-suffering mother who serves suspicious-looking nutritious food. More philosophical than

political in nature, it was influenced by Doonesbury, Pogo, Little Nemo, and Peanuts, and like Peanuts and Little Nemo, much of the story took place in Calvin’s imagination.

Since Watterson’s retirement, the closest thing to a successor has been Cul de Sac, a strip by Richard Thompson, which ran from 2004 until 2012, when Thompson retired. The strip revolved around 4-year-old Alice Otterloop, her adventures in preschool, and her life at home. Though Alice is as imaginative and brash as Calvin, a key difference is that she is not an only child, and her relationship with her older brother Petey is important. Petey suffers from numerous neuroses, including obsessively picky eating and the long-held belief that one of his friends is imaginary, and Thompson has not denied the speculation that Petey is autistic. In a nod to comics history, Petey is a fan of a comic strip called Little Neuro, about a boy who never gets out of bed.

Comic Books and Webcomics
Comic strips led to the formation of two related media, which for the most part are identical to strips, except in the format of their presentation. Comic books, generally published as short magazines, but today often collected in paperbacks or hardcovers, and sometimes conceived in a novel-length form called graphic novels, originated in the 1920s as collections of comic strips. Original material soon followed, and comic books rose in popularity on the strength of the genre that they created: the superhero story, beginning with that of Superman in 1938, followed by Batman and Captain America. As with comic strips, comic books spanned a wide variety of genres in their heyday, but eventually the genre that prevailed with comic strips was humor, and in comic books it was the superhero. By the late 1980s, comic books from the Golden Age (the late 1930s to the late 1940s) were reaching record prices at auctions and with dealers.
Eventually, the market bubble burst, but in the 21st century, many highly sought-after issues can still fetch thousands or tens of thousands of dollars. Original comic books have enjoyed a resurgence in recent decades, thanks to the popularity of several adaptations (including many superhero movies and The Walking Dead TV show), and the availability of graphic novels in traditional bookstores and online.




Webcomics take the same form as comic strips, but they appear exclusively online, rather than also in newspapers or magazines. As with blogging, most webcomics creators are hobbyists, but a small number creating professional-quality work are able to do so for a living, either through donations and ad revenue, or by selling reprints and merchandise.

Bill Kte’pi
Independent Scholar

See Also: Books, Children’s; Games and Play; Television for Children.

Further Readings
Blackbeard, Bill, ed. The Smithsonian Collection of Newspaper Comics. Washington, DC: Smithsonian Institution Press, 1977.
Martell, Nevin. Looking for Calvin and Hobbes. New York: Bloomsbury Academic, 2010.
Robinson, Jerry. The Comics: An Illustrated History of Comic Strip Art. Portland, OR: Dark Horse, 2011.

Commercialization and Advertising Aimed at Children

There was a time in American history when children did not figure into the discussion of commercialization, marketing, or advertising. That is no longer the case. Today, the youth market is large, totaling more than $1 trillion in the United States, according to the Advertising Educational Foundation. Advertisers have many opportunities to reach youngsters, because the average child spends more than twice as much time in front of televisions and computers as he or she does in school. Each year, U.S. children are bombarded with more than 20,000 commercial messages for soft drinks, snack foods, toys, fast food, and clothes. Unwittingly, children learn about brands and labels early in their lives. The commercialization of and advertising aimed at children are controversial, debated by lawmakers, industry leaders, physicians, educators, and families.

The growth of advertising aimed at children is a reflection of the shifting role that minors have played


within the family economy in the United States. In the 18th and 19th centuries, children did not make economic decisions in the family, and few products specifically for them even existed. Thus, what little commercialization and advertising seeped into everyday life was solely geared toward adults. The children’s magazine niche grew during the 19th century, but the advertising in these periodicals was usually aimed at parents. For example, The Youth’s Companion, a popular weekly in the post–Civil War period, included promotions for everything from corsets to cleaning supplies, patent medicines, and wheelchairs. Much changed in the 20th century as a result of changes in communication technology, the marketing/advertising industry, and the rise of mass media.

Broadcasting, Advertising, and Children
In the 1930s and 1940s, radio became the primary medium for entertainment within the home. It also became an important conduit for advertising to families. The earliest advertising focused on adults because they were thought to make all the family’s purchasing decisions. In the 1930s, however, a small group of advertisers began to realize the role that children could—or did—play in influencing family purchasing decisions. At this time, the commercialization of children began.

According to Mark I. West, the maker of Ovaltine, a chocolate milk powder, was the first advertiser on radio to identify the potential of children to influence their parents’ buying decisions. The company developed a children’s radio program in 1931, based on Harold Gray’s newspaper comic strip Little Orphan Annie. The program, which was broadcast in the late afternoon on NBC, became popular with young listeners, and Ovaltine sales soon increased, triggered in part by the promised badges, pins, and secret codes promoted on the radio program. Seeing the success of the Ovaltine program, cereal maker Kellogg’s developed a children’s program based on the comic strip Buck Rogers in 1932.
Broadcast in the early evening, it was popular with juveniles, who no doubt encouraged their parents to purchase Kellogg’s Corn Flakes, the program’s sponsor. The trend toward advertising aimed at children accelerated quickly in the 1950s, when television became a central part of many families’ homes. Children and television proved the ideal pairing.

260

Commercialization and Advertising Aimed at Children

There were no controls over television advertising aimed at children in the 1950s. Host selling was a common practice. Buffalo Bob and Clarabell, characters on the popular children’s program Howdy Doody, urged their viewers in “Doodyville” to eat Hostess Twinkies; Jimmie Dodd, lead Mouseketeer on The Mickey Mouse Club, sang the Ipana toothpaste song; and Miss Frances, everyone’s favorite teacher on Ding Dong School, encouraged her “pupils” to help their mothers find Wheaties at the grocery store. Advertisements on televised children’s programs also asserted special powers for their products: breakfast cereal often provided super-human strength, and sneakers endowed children with unequaled athletic expertise.

In the 1960s, the three major networks—ABC, CBS, and NBC—competed for the burgeoning youth market. The Saturday morning time slot was especially important in reaching youngsters, and the networks filled the time with cartoons like Bullwinkle, Tom & Jerry, Superman, and The Jetsons, and live-action Westerns such as Sky King, Roy Rogers, and The Lone Ranger. The caliber of Saturday morning children’s programming, not to mention all commercial television programming, caused the new Federal Communications Commission chairman Newton Minow to brand the medium a “vast wasteland” in 1961.

“Vast wasteland” or not, children’s programming had become a lucrative advertising oasis. Advertisers identified children as a large, profitable market for such products as breakfast cereal, candy, drinks, snacks, and toys. These advertisements featured popular cartoon characters such as Bullwinkle (for Cheerios breakfast cereal), Bugs Bunny (for Tang, an orange-flavored drink), and Top Cat (for Kellogg’s Corn Flakes cereal). Popular real-life characters from children’s programs also appeared in commercials—the Lone Ranger for General Mills breakfast cereals, and Roy Rogers for Post Toasties.
Some advertisers created animated characters: Rootin’ Tootin’ Raspberry, Injun Orange, Chinese Cherry, and Loud Mouth Lime for Funny Face Kool-Aid; Bucky Beaver for Ipana toothpaste; and Yipes, the Fruit Stripe zebra, for Beech-Nut chewing gum. Advertisers also offered premiums as an added incentive for children to buy the products or to pester their parents to make the purchase. For example, Trix included tiddlywink sets in cereal boxes, and General Mills cereals offered

Lone Ranger stories on the backs of certain cereal boxes during the 1960s. The time that children spent in front of the television beginning in the 1960s, and the number of commercials to which they were exposed, caused researchers to examine the influence that television and advertising had on America’s youngsters.

Research on Children, Television, and Advertising
Research on children and television in the 1960s took many different directions, but one that proved especially fruitful vis-à-vis the commercialization of youth dealt with how youngsters process televised advertising messages. According to Kunkel and Roberts, only a small number of studies examined the subject early on, but they served as an important baseline for research that followed. These early studies, which were cited in an addendum to the Surgeon General’s report on children and televised violence in 1972, became important information as federal agencies considered regulating television advertising aimed at children.

These early studies indicated that children under the age of 5 could not differentiate between television programs and commercials. Moreover, early research indicated that children under the age of 7 or 8 did not recognize the persuasive content of advertising. Subsequent research supported these preliminary findings. As Dale Kunkel and Donald Roberts point out, young children who did not or could not comprehend the persuasive nature of television commercials were likely to believe the advertising and ask their parents to buy the product. Subsequent research in the communication, psychology, and education disciplines supported the findings that children have cognitive limitations that prevent them from discerning the persuasive nature of television commercials. In recent years, researchers have extended their focus beyond television to the Internet and video games.
Researchers in a range of disciplines in the United States and other countries are studying online advertising, how juveniles interact with that advertising, and how minors are using the Internet and social networking sites. Researchers are also examining how youngsters interact with video and online games. In the 21st century, physicians have taken the lead in examining the role that food advertising plays in the obesity epidemic among U.S. children.




In its 2006 report on food marketing to children, the Institute of Medicine of the National Academies found evidence that TV advertising influences food and beverage preferences among children; their purchase requests; and their short-term consumption of products that are high in fat, sugar, and salt. Because advertising of cereals, snacks, candy, and fast food dominates much commercial children’s programming on television, this not only represents an important consideration in the commercialization of youngsters, but also poses important health questions.

During the past five decades, research on children and the media has played an important part in the debate over advertising targeting children. Lawmakers, media critics, and parents have often cited academic research to bolster their efforts to strengthen regulations. Advertising industry organizations have also used research in their self-regulation activities.

Regulations Designed to Protect Young Consumers
Although advertisers have reached out to children for decades, it was not until the 1970s that the government stepped in to control commercials aimed at children. The target of this regulation was television, specifically during programs aimed at children. Under legislative mandates, two federal agencies have assumed the authority to regulate commercial messages targeting children. Under the Communications Act of 1934, the Federal Communications Commission (FCC) was given the broad mandate of creating rules and regulations that would ensure that broadcasting was carried out in the “public convenience, interest, or necessity.” The Wheeler–Lea Act of 1938, which amended the Federal Trade Commission Act, gave the Federal Trade Commission (FTC) the power to regulate “unfair or deceptive acts or practices in or affecting commerce,” and to prohibit “any false advertisement.” Both laws extended well beyond advertising aimed at children.
Nonetheless, these two agencies would assume the job of regulating advertising in children’s programming on television. According to Kunkel and Roberts, three developments triggered the 1970s push toward regulation: the shift of children’s television programming to the Saturday morning slot; a new recognition by advertisers of children as a distinct, lucrative market; and


the Surgeon General’s report on the influence of televised violence on children. In 1974, the FCC established two principles to cover TV advertising aimed at children. The first limited the amount of advertising during children’s programs to 9.5 minutes per hour on weekends, and 12 minutes per hour on weekdays. The second policy addressed the cognitive limitations of young children to differentiate between commercials and programming, which was pointed out in the Surgeon General’s report. To that end, the FCC established what has been called the “separation principle,” and developed three policies designed to differentiate advertisements from the children’s programming. Under this separation principle, children’s programs had to include “bumpers” that separated commercials from the programming. These bumpers, which were approximately five seconds long, preceded and followed all advertising, making it clear that the commercial was not part of the program. The second part of the separation principle prohibited program characters from promoting products in commercials during or immediately before or after the program. Thus, Star Trek action figures could not be advertised during Star Trek: The Animated Series, a popular Saturday morning cartoon program during the 1974–75 season. Finally, products could not be promoted within a children’s program. All advertising had to be confined to the commercial segments that were surrounded by the mandated bumpers. Working in tandem with the FCC in the 1970s, the FTC sued a number of companies that marketed and advertised their products to children in a misleading or deceptive manner. Many of the most important cases dealt with toy manufacturers. In 1971, for example, the FTC won a consent order against giant toy manufacturer Mattel. The commission argued that the photography employed to show the speed of its popular Hot Wheels racer during a televised commercial was deceptive to children. 
In addition, the FTC has investigated the nutritional claims made by food manufacturers in their commercials. In these actions, the trade commission has generally been successful. As J. Howard Beales admits, when the FTC attempted to establish regulations to protect children from deceptive advertising, the agency was less successful. In 1978, the FTC proposed rules to limit advertising to children on television. With


the support of advocacy groups and the Food and Drug Administration, the FTC recommended the KidVid rules that would limit advertising of high-sugar foods to children. According to Beales, there were three parts of the FTC’s KidVid proposal: (1) banning all advertising to children who were too young to understand the purpose of the message, (2) banning television advertising of products that posed a significant health risk to older children, and (3) requiring that the advertising of products with high sugar content be balanced with nutritional and health disclosures funded by advertisers. Utilizing research on the cognitive ability of children, an investigation of the content of commercials aimed at children, and data on the rise of sugar consumption and problems associated with it, the FTC used its statutory power to control “unfair” and “deceptive” advertising to argue for the new rules.

Not surprisingly, the proposal brought a firestorm of reaction. While advocacy groups like Action for Children’s Television, the Consumers Union, and the Center for Science in the Public Interest supported the measure, broadcasters and advertisers did not, arguing that they had First Amendment protections that allowed them to inform young viewers about their products. In the face of such strong corporate opposition, the FTC proposal failed. Indeed, the measure almost spelled disaster for the commission itself. According to Beales, Congress allowed FTC funding to lapse, commission enforcement rules were limited, and Congress passed a law severely limiting the agency’s power.

That action was in keeping with a philosophical shift in Washington, D.C. In the 1980s, both the Reagan and Bush administrations were committed to an open-market/laissez-faire policy of business. That meant that federal agencies were committed to preserving an unfettered business marketplace, one free from the intrusion of regulation. As Angela J.
Campbell points out, this meant that the FCC abandoned its “public trustee” mission and dismantled much broadcast regulation. Accordingly, the FCC rescinded all limits on television commercials for adult and children’s programs in 1984. Stations and networks could schedule as many commercials as they wished. Under this free-market policy, if the number of commercials exceeded viewer tolerance, ratings would suffer, advertising revenue would

decline, and broadcasters would be forced to cut back on the number of commercials.

The regulatory pendulum began to swing in the other direction in 1990, when Congress adopted the Children’s Television Act. Designed to protect children from excessive commercialization, the law did not ban advertising from children’s programming on television, but it limited advertising on programming targeting children 12 and under. The limits were 12 minutes per hour during the week, and 10.5 minutes per hour on the weekend, a greater amount of advertising than appeared on prime-time commercial television. By the late 1990s, as more children went online to explore the Internet and play games (including advertising-sponsored video games, or advergames), Congress again acted to protect the new generation of Americans with the Children’s Online Privacy Protection Act of 1998 (COPPA). The law articulated what Web sites could and could not do with regard to collecting personal information from minors. The Federal Trade Commission was given oversight, but an advertising industry self-regulation group has taken the lead in developing guidelines on what advertisers can do while still protecting the privacy of juveniles.

Industry Self-Regulation of Advertising Aimed at Children
Both the advertising and broadcasting industries have developed self-regulation to address concerns about advertising aimed at children. These self-regulatory codes and organizations have often developed in the face of government plans to regulate the industry. Self-regulation in broadcasting dates back to the National Association of Broadcasters (NAB) Code of 1951, which was developed to head off proposed legislation to create a citizen’s advisory board for radio and television. The NAB code established ethical standards for radio and television that prohibited profanity, negative portrayals of family, irreverence to God and religion, and illicit sex and drunkenness, among other things.
It also limited the number of commercial minutes per hour in broadcasting. In 1974, facing FCC regulations on advertising on children’s programming, the NAB amended its code to reduce the amount of advertising on children’s programming. The NAB code remained in effect until 1983, when




the Department of Justice—in response to the Reagan administration’s push to deregulate broadcasting—sued the NAB, asserting that limiting commercials manipulated the supply of commercial TV time, and thereby did not allow the free market to function.

In 1973, as the FCC moved to regulate the commercial content of children’s programming on television, the National Advertising Review Council—a collaboration of the American Association of Advertising Agencies, the American Advertising Federation, the Association of National Advertisers, and the Council of Better Business Bureaus—created a self-regulatory organization to address issues associated with advertising directed at children. This organization, the Children’s Advertising Review Unit (CARU), continues to operate today. When it started in 1974, CARU had a broad mission, which has since grown. Its original mandate called for monitoring and reviewing advertising directed at children to ensure that commercial messages were not deceptive, unfair, or inappropriate. In addition, the group investigates complaints regarding advertising practices. CARU also established guidelines to help advertisers craft acceptable commercial messages for the juvenile market. These include crafting advertisements appropriate to the children targeted, not establishing unreasonable expectations of the performance of the advertised product, not using stereotypes or appealing to prejudices, and presenting a positive child–parent relationship. As advertisers ventured onto the Internet, CARU extended its guidelines to this area as well, advising advertisers to craft messages appropriate to the age of the targeted juvenile audience, and not link to Web sites inappropriate for them. Once the Children’s Online Privacy Protection Act (COPPA) was passed, CARU again expanded its mandate.
While admitting that online data collection offers unique opportunities for marketing, CARU advised advertisers that they also had responsibilities to children who may not understand the nature of information solicited, or its intended purposes. CARU guidelines follow COPPA and FTC rules. According to CARU, advertisers who collect data from minors must disclose their information collection and tracking practices, the uses of personal data, and ways to correct or remove material. In addition, CARU guidelines advise advertisers to


disclose in language understandable to the child why the information is requested, and whether it will be shared. Those guidelines also recommend that advertisers obtain parental consent before a child’s personal information is publicly posted.

There are limits to CARU, however. The organization has no enforcement power. All compliance is voluntary. CARU relies on cooperation from the advertiser to ensure changes. If an advertiser refuses to comply, CARU can issue a reprimand, which is publicized on the organization’s Web site and in press releases. In rare instances, CARU has reported the advertiser to the FTC.

CARU’s most recent guidelines address the relationship between advertising and childhood obesity, and offer guidance to advertisers of products high in fat, sugar, and salt. The organization’s guidelines, adopted in 2006, emphasize that food advertising should not depict overconsumption or disparage healthy lifestyles and dietary choices.

The advertising industry has also launched another voluntary self-regulatory program to foster healthy diets and lifestyles for children. The Children’s Food and Beverage Advertising Initiative was founded by 14 of the largest advertisers of food and beverages to children. The giant corporations behind this initiative include Cadbury Schweppes USA, Campbell Soup, Coca-Cola, General Mills, Hershey, Kellogg, Kraft Foods, McDonald’s, PepsiCo, and Unilever. Advertisers who choose to participate in this program agree to devote at least half of their advertising directed to children to promoting and encouraging healthier dietary choices, good nutrition, and healthy lifestyles; limit products shown in interactive games to healthier dietary choices and lifestyles; not advertise food or beverages in elementary schools; and not engage in food or beverage product placement in entertainment content. 
In 2011, the initiative announced that its members had agreed to adopt uniform nutrition criteria with lower calories, sugar, sodium, and fat for all products advertised to children by 2014.

Conclusion
The commercialization of children is a complex problem with few simple solutions. Lawmakers, advertising and media leaders, physicians, educators, and families have debated the subject for decades without any satisfactory resolution. However, American youngsters represent a lucrative



market, one too large for advertisers in a capitalist society to ignore. Commercialization and advertising aimed at children have increased over time. The most recent estimates indicate that children are exposed to more than 20,000 commercial messages every year. Advertising aimed at children is now in every medium, from magazines to television, the Internet, and video games. Neither U.S. regulations nor industry self-regulation has been enough to stem the tide of the commercialization of children. The best that might be expected may be educating youngsters to be better consumers, and encouraging parents to more closely monitor their children’s use of the media and their interaction with advertisers.

Kathleen L. Endres
University of Akron

See Also: Children’s Online Privacy Protection Act; Children’s Television Act; Internet; Magazines, Children’s; McDonald’s; Obesity; Primary Documents 1990; Radio: 1920 to 1930; Radio: 1931 to 1950; Television for Children; Television, 1950s; Television, 1960s; Television, 1970s; Television, 1980s; Television, 1990s; Television, 2000s; Television, 2010; Toys; Video Games.

Further Readings
Advertising Educational Foundation. “Advertising to Children.” http://www.aef.com/on_campus/classroom/speaker_pres/data/3005 (Accessed August 2013).
Beales, J. Howard, III. “Advertising to Kids and the FTC: A Regulatory Retrospective That Advises the Present.” George Mason Law Review 2004 Symposium on Antitrust and Consumer Protection. http://www.ftc.gov/speeches/beales/040802adstokids.pdf (Accessed August 2013).
Campbell, Angela J. “Self-Regulation and the Media.” Federal Communications Law Journal, v.51/3 (1999).
Children’s Advertising Review Unit. “Self-Regulatory Program for Children’s Advertising.” http://www.caru.org/guidelines/guidelines.pdf (Accessed August 2013).
Darwin, David. “Advertising Obesity: Can the U.S. Follow the Lead of the UK in Limiting Television Marketing of Unhealthy Foods to Children?” Vanderbilt Journal of Transnational Law, v.42/1 (2009).

Federal Trade Commission. FTC Staff Report on Television Advertising to Children. Washington, DC: U.S. Government Printing Office, 1978.
Institute of Medicine of the National Academies. Food Marketing to Children and Youth. Washington, DC: National Academies Press, 2006.
Kunkel, Dale and Donald Roberts. “Young Minds and Marketplace Values: Issues in Children’s Television Advertising.” Journal of Social Issues, v.47/1 (1991).
Mello, Michelle M. “Federal Trade Commission Regulation of Food Advertising to Children: Possibilities for a Reinvigorated Role.” Journal of Health Politics, Policy and Law, v.35/2 (April 2010).
Schor, Juliet B. Born to Buy: The Commercialized Child and the New Consumer Culture. New York: Scribner, 2004.
West, Mark I. “Children’s Radio Programs and Their Impact on the Economics of Children’s Popular Culture.” Lion and the Unicorn, v.11/2 (1987).

Common Law Marriage

Common law marriage (CLM) has existed in various forms since the Roman Empire, and is not a phenomenon unique to the United States. For centuries, CLMs have been an option for couples who lived in communities that did not have easy access to legal/religious recognition of marriage, equal rights/status with majority groups, or sufficient financial resources. Since the mid-20th century, CLM has lost some prominence as other choices have gained greater social acceptance.

Historical Trends
Following the arrival of white European immigrants on the North American continent, there was a persistent pattern of expansionism in claiming land and building communities (later codified in the concept of Manifest Destiny). Some newly established communities were so far from governmental agencies that couples who wished to marry could not find legal representatives to verify their marital commitments. In other communities, religious leaders performed ceremonies that resulted in legally recognized marriages, but couples who did not subscribe to the religious belief system were often unwilling to participate in the ceremonies, or were restricted from doing so.



In the context of these environments (e.g., lack of legal authorities and religious exclusion), couples entered into CLMs. Two principles from European law that supported these marriages were per verba de praesenti (“present vows”) and lex loci celebrationis (“the law of the land where the marriage is celebrated”). According to these principles, marriages were valid if partners (1) made verbal/written commitments stating that they viewed their relational status as married and/or (2) participated in celebrations of this commitment. After this validation, partners had to engage in daily activities that were consistent with marital life. Couples built homes, shared income and expenses, and raised children. They had a physical relationship, and publicly presented themselves as a couple. All household members often shared the same last name to signify their collective identity as one family. Thus, a couple earned their social legitimacy via the process of living together as husband and wife.

Some couples had clandestine marriages when they were living in distressed or inhumane conditions. Up to the time of the Civil War, slaves had no legal rights to personhood, marriage, or family. However, couples performed commitment ceremonies (often reflective of their African, South American, and/or Caribbean heritage). After the Civil War, the emancipation of slaves opened opportunities for recognition of their couplings as CLMs.

During the Industrial Period (late 19th to the early 20th century), immigration to the United States soared. By this time, a formalized immigration process had been established, and there were more specific laws about family status on the books. In the years prior to immigration, millions of couples had become married via the cultural, religious/spiritual, or legal requirements of their home countries. However, many immigrant families did not have written verification (e.g., licenses) as proof of their marital status. 
In the absence of this verification, couples were often granted CLM status in an effort to build more consistency in relationship standards. This also gave proper moral standing to couples who already had children.

CLM status was more ambiguous than traditional marriage. This ambiguity made it more difficult to determine how CLMs ended. Many legal precedents in defining CLM (and divorce) emerged from single case judgments or regional legislation.



For example, there was no “seven-year rule” under which shared residence automatically qualified a couple as a CLM, as is sometimes mistakenly believed. The most overarching legal decision was handed down by the Supreme Court (Meister v. Moore, 1877). The court ruled that CLMs (sanctioned in any one state) are entitled to marital rights in all states. This decision empowered couples to make relational choices that best fit their circumstances.

During the first half of the 20th century, CLM was also prevalent among low-income couples. Such couples often could not afford the costs (e.g., license fees and medical test/report fees) associated with traditional marriage. Likewise, these families could not afford traditional divorce. In this context, CLM provided an important means for partners to signify their relationships as marriages. In addition, this signification ensured that their children would be protected from being labeled illegitimate. Evidence of CLM status (e.g., verbal commitment, shared residence, and shared daily routines) could also make it possible for one spouse to collect monetary aid (e.g., Social Security benefits and employer payments) if the partner died.

In the second half of the 20th century, the status of CLMs declined. This decline might not reflect a disparagement of this marital type, but rather the increasing acceptance of other relationship forms. Lifestyle choices that were previously marginalized as immoral or narcissistic (e.g., cohabitation, childfree couplehood, and single parenthood) were now more socially valid and legally protected. Thus, individuals and couples had a wider range of relational options. They were no longer restricted to only traditional or common law marriage as a means to achieve some social respectability.

In sum, CLMs have existed throughout the history of the United States. Couples utilized this marital form for a variety of reasons. It has been argued that CLM served a social justice purpose. 
Vulnerable couples who were disenfranchised (by geography, racial/ethnic/religious discrimination, or poverty) did not have access to traditional marriage (and its legal protections). Thus, CLM allowed such couples to gain social legitimacy of their committed relationships. The legal acceptance of CLM also reinforced the primacy of marital interactions. If couples shared the benefits and strains of daily life, then they earned the right to be recognized as married. The societal impact of CLM has not been



limited to heterosexual couples. It has been postulated that CLM set a precedent for other relational forms (e.g., single parenthood or gay/lesbian couplehood) to gain legitimacy. Yet, some individuals and groups do not see these changes as helpful to families. They have argued that CLM is part of a slippery slope toward societal decay. Thus, the social value of CLMs remains unresolved.

Jacki Fitzpatrick
Erin Kostina-Ritchey
Texas Tech University

See Also: Cohabitation; Frontier Families; Immigrant Families; Poverty and Poor Families.

Further Readings
Lind, G. Common Law Marriage: A Legal Institution for Cohabitation. New York: Oxford University Press, 2008.
Lucas, Peter. “Common Law Marriage.” Cambridge Law Journal, v.49/1 (1990).
Thomas, J. “Common Law Marriage.” Journal of the American Academy of Matrimonial Lawyers, v.22 (2009).

Communes

According to the Oxford English Dictionary, the word commune first appeared in 12th-century Europe as a term for a municipal community. In the modern era, the word is often used to describe a community, most often rural, of shared values, mission, economy, and close living and working spaces. People live together as a community and evenly divide responsibilities, offering a more harmonious approach to daily tasks and chores. Communes are sometimes referred to as intentional or utopian communities.

Communes are often started by those with a particular religious philosophy; when these individuals find that they no longer share the same theology as their original church, they may leave to start a new church with an attached community. Others have the goal of living and working in a community of similar believers, away from the distractions of modern urban life.

Early Communes
The Shaker community, formally known as the United Society of Believers in Christ’s Second Appearing, was one of the most successful communes in the United States. The largest Shaker community (there were several in New England and the Midwest) lasted from 1787 to 1947 in the state of New York.

In mid-18th-century England, a young woman named Ann Lee joined a group then known as the Shaking Quakers (so called because they danced and sang with frenzied movements), borrowing from the Quakers their pacifism, use of the term meeting instead of worship service, and belief in the importance of God’s direct revelations. They believed that sex was a sin of lust and preached celibacy, even for husbands and wives. Ann Lee herself married and had four children, each of whom died in infancy. She believed that those deaths were a direct message from God, telling her that sex was a sin. In 1774, Ann Lee, her husband, and other family members moved to New York State, eventually acquiring land for communal living and meetings together. By 1796, there were 10 Shaker communities in the northeast, each with anywhere from 30 to 100 members. They adhered to many strict rules, including mandatory meeting attendance and the consistent separation of men and women. The Shaker principles of utility and simplicity were expressed in their crafts, and especially in their furniture. Shaker furniture is well known for its simple, unadorned design, which reflected the belief that unnecessary decoration promoted the sin of pride. In 2013, the only active Shaker community left in the United States was located in Maine.

Another long-lasting commune, the Ephrata Cloister, developed during the early 18th century in Pennsylvania. Conrad Beissel emigrated from Germany to Pennsylvania in 1720, eventually ending up in Conestoga, east of Lancaster, Pennsylvania. He joined an Anabaptist group and was appointed the leader of a new congregation. 
However, Beissel strongly believed in Saturday instead of Sunday worship, and the practice of celibacy, views that soon caused a split in the congregation. He left that congregation, and a few years later, he settled on nearby Cocalico Creek. Other similar believers followed him, and soon they formed a thriving community of small dwellings,



workshops, and meetinghouses that they named Ephrata Cloister. Celibacy was an important part of the Ephrata Cloister, though it was not required. Therefore, members were divided into three congregations: the celibate men known as the Brotherhood, the celibate women known as the Sisterhood, and the married couples and families known as the Householders. The Householders were considered the lower-ranked order, beneath the Brotherhood and Sisterhood. They lived near the Ephrata settlement, and lived and worked on their own land and farms. They worshiped with and supported the Ephrata congregation, but were not subject to the dress code of the celibates, who wore hooded white robes.

Beissel died in 1768, and membership decreased, particularly because it was becoming more difficult to attract those who were willing to practice celibacy. The Cloister was legally dissolved in 1814, and eventually, as numbers decreased, the remaining members formed a German Seventh Day Baptist Church congregation. The community lasted until 1934, when legal disagreements between members led to the revocation of their church’s incorporation charter. At that time, the property was donated to the Pennsylvania Historical and Museum Commission, and the remaining buildings have been preserved and are open to visitors.

John Humphrey Noyes was born in 1811 in Vermont, and as a child he gravitated to his deeply religious mother. He graduated from Dartmouth in 1830, with plans to practice law with his uncle, but the Second Great Awakening, a religious revival moving through New England at the time, captivated him more. Noyes attended Yale Divinity School, where he embraced Christian perfectionism, which some believed enabled people to attain holiness so completely that they could lead a life safe from temptation and free of sin. 
He also embraced the idea that selfish tendencies were to be set aside, and no one should claim strict individual ownership to anything. When Noyes preached that all condemnation in his life was gone, word spread that he was crazy. His license to preach was revoked, and he moved to Putney, Vermont, for rest and contemplation of what to do next. He began a community focused on

Communes

267

perfectionism. The Putney community consisted of Noyes, his wife, several of his brothers and sisters, and a few converts from the neighborhood. They lived as a group, sharing chores and possessions. After several years of leading this community, Noyes became concerned that his wife had experienced five difficult pregnancies in six years, four of them ending in stillbirths. He was determined to find an acceptable way, other than celibacy, to prevent conception. After considering a few pregnancy-prevention methods that were studied during the early 19th century, he decided that male continence was the best solution.

During this time in Putney, as the community grew with new members, Noyes took the philosophy of “no strict individual ownership” a step further to include the idea of romantic and erotic love. He believed that love should overflow in all directions within a perfect community, and that sexual intercourse should no more be ruled by law than what one eats and drinks. This freedom would lead away from the “jealousy of exclusiveness” that can prove harmful in monogamy. He believed that only in a disciplined community could such an arrangement work, and he began a practice that he called “complex marriage,” as opposed to the “simple marriage” of only two. Noyes and his wife, Harriet, led the way in 1846, as they began their complex marriage with George and Mary Cragin. Noyes was not only the leader, but was also considered the father of the whole community. Therefore, when news of the complex marriage behavior eventually made its way to the community outside, Noyes was arrested and charged with adultery.

After the legal difficulties, a group of perfectionists in Oneida Creek in New York offered those from Putney a sawmill and 40 acres of land. Noyes had been thinking of taking his perfectionist community in a more money-making direction, and this move to a new home enabled this plan. 
They began by constructing small homes for themselves and a large community building, and then ventured into marketing their canned vegetables and handmade items such as straw hats, travel bags, and silver tableware. The Oneida community prospered and continued their lifestyle of sharing with each other land, finances, and chores, with no chore assigned according to gender. There were no personal possessions, and competition did not exist. The community

268

Communes

lived as one family, with Noyes as the father. Love was considered holy, and people were allowed to love numerous others at any time. Compared to the jealousy, divorce, and adultery that occurred among couples living on the outside, this arrangement seemed more peaceful to the members of the Oneida community. As children were born, they stayed with their mother until weaned; they then left their mother’s home and lived together in one dwelling. Here, they attended school, played, and slept, and were able to visit with their mothers once or twice a week.

The people living outside the community grew angrier as more information about family life in Oneida became known. In 1879, John Noyes fled to Canada to escape the raging antagonism. With his departure, the community living system collapsed. The children were returned to their individual mothers, and the Oneida industries were converted into a joint-stock company, with shares distributed to the remaining members of the community.

Further west, in the mid-19th century, Ernest Valeton de Boissière, a former army engineer from France, settled in Franklin County, Kansas, with plans to start a utopian community. He visualized a community where people would live together and share in the labor, responsibilities, and profits. He eventually named the town Silkville because one of the community’s sources of income was silk. They planted mulberry trees for the silkworms, and built a silk factory (where they wove silk velvet ribbons), a dairy barn, a winery, and one three-story residence building. Unlike the Shakers and the members of the Ephrata Cloister, Boissière strongly believed that people living and working in Silkville should not be required to share a set of beliefs. In a brochure he published to spread the news of his utopian community, he stressed that there would be respect for the freedom of thought and action of all, just as each person living in Silkville would want that same respect in return for his ideas. 
Because of competition within the silk and dairy industries, members of Silkville gradually left to work elsewhere. Boissière sold the land in 1892 and soon after returned to France. Silkville is now a ghost town.

The 1960s and 1970s
Many equate the term commune with 1960s-style community living as practiced by the so-called

hippies. During the mid- to late 1960s, groups of young people ignored the conventions of society by living together in communal situations on farms or in urban locations, sharing their religious tenets, philosophies, nontraditional lifestyles, and sometimes drugs and/or sexual experimentation. Religion was less of a defining factor for these communities than it was for the utopias of previous generations. Instead, these communes were divided into two groups: the alternate-culture group, hoping to change the world by setting an example, and the counterculture group, who wanted to change the world through politics, or sometimes even revolution.

Many coming of age in the 1950s and 1960s were discouraged by the lifestyle of Western society. They disapproved of the greed of materialism, the violence of the Vietnam War, the competitive behaviors required to maintain the capitalist system, and the growing emphasis on individualism over the connectedness of people. From this discouragement grew an increase in communal societies, where they could form a simpler and more uncluttered life, return to the basics, and focus on a shared philosophy of life and survival. The reduced standard of living most often found in communes also freed them from what they saw as an oppressive social system.

The 1960s also brought changes to families, such as higher divorce rates and increased mobility, in which families often moved because of a parent’s job. Moving meant leaving friends behind and facing the challenge of making new ones. Some young adults found communal living a very inviting way to find the sense of family and community that they believed was missing in their lives.

Like rural communes during earlier eras, chores were mostly shared and meals were taken communally. Buildings were built and shared by the members. In urban communal settings, people often began by sharing an apartment with one or two people, but soon would form a group with others they knew, and each apartment would be shared by all. 
Communes during this decade were at first written about primarily in specialized magazines, meant for the smaller audiences particularly interested in topics of simple living. Eventually, mainstream media such as Life and the New York Times began reporting on this community lifestyle trend. Once the publicity began, the 1960s became the decade of greatest growth of communes in U.S. history. There



were a few communes that did not allow children, but others welcomed them, believing that they played a major role in the commune’s existence and future. Even though parents usually retained responsibility for the well-being and behavior of their children, the community provided multiple caregivers for children, and an extended family of many communal brothers and sisters. Studies have shown that most of the children reared in such communes grow up to be less self-centered and more relaxed around strangers and adults. There were exceptions, and a small percentage grew up without much supervision and took care of themselves at an early age. Occasionally a few had to find their own food, and some drank alcohol or experimented with illegal drugs. There were some reports of child abuse, though no more frequently than found per capita in society as a whole.

Education was most often a major part of a child’s communal life, especially in rural communities. Rural areas offered an abundance of wildlife, fishing, farming, animals that needed daily care, and a variety of landscapes and plants. Some children left the communes during the day to attend public school, some were homeschooled, and at times a school would be built within the confines of the community where the children could attend classes together. A number of communal schools accepted students from the surrounding area, with the parents of those children paying a fee.

Intentional communities may not be as prevalent as they were in the 1960s and 1970s, but a few have continued into the 21st century. In Louisa County, Virginia, just a few miles east of Charlottesville, lies a commune of about 100 people on over 400 acres of land, known as Twin Oaks. Started in 1967, its members earn most of their income by making and selling hammocks. Other income-producing endeavors include making tofu, growing heirloom seeds, and indexing books for academic presses. 
As a sustainable community, they grow food in a garden stretching over three acres, which they farm together. They also share equally in building and land maintenance, child care, and all the other household duties. The commune invests funds in socially responsible entities, and does not eschew capitalism. Twin Oaks is financially healthy and has a waiting list of those who want to live and work there. As of 2013, Twin Oaks was one of six member communities of the Federation of Egalitarian Communities.



According to the Fellowship for Intentional Communities Web site in early 2014, there were over 1,600 established intentional communities located in the United States. People have formed these communities to live according to their religious, environmental, or social ideals.

Antoinette W. Satterfield
U.S. Naval Academy

See Also: Community Property; Shakers; Utopian Experiments and Communities.

Further Readings
Miller, Timothy. The 60s Communes: Hippies and Beyond. Syracuse, NY: Syracuse University Press, 1999.
Smith, William L. Families and Communes: An Examination of Nontraditional Lifestyles. Thousand Oaks, CA: Sage, 1999.
Sutton, Robert P. Communal Utopias and the American Experience, Religious Communities: 1732–2000. Westport, CT: Praeger, 2003.

Community Property

Community property is a marital property regime that originated in civil law traditions. Its fundamental idea is that marriage is intended to be an equal partnership between spouses. This entails that wages earned during the marriage and any relevant assets, interests, profits, and revenues are equally owned by both parties. However, under community property, property owned by each of the spouses before the marriage, as well as property acquired by each of them during their union either by inheritance or gift, is considered separate property. In most community property states (Idaho, Louisiana, Texas, and Wisconsin are the exceptions), any income, including interests, profits, and revenues from separate property, remains separately owned by the parties.

In community property states, wages and property acquired during marriage are presumed to be community property. If separate property is commingled with community property, such separate property may be considered separately owned by one of the spouses if the spouse contending for

270

Community Property

separate property is able to provide reliable tracing evidence. For example, if wages earned by one of the spouses prior to the marriage are commingled in a joint bank account or used to contribute to the purchase of a common asset during the marriage, they may remain separate if the court can trace the source of the funds. Similarly, separate property exchanged for community property during marriage can be considered separate if the funds are traceable. Thus, proceeds, profits, and revenues from separate property used to buy a common asset during the marriage may retain their separate nature if the party can prove the original separate source of funding. In such an event, the court may determine that the common asset is separately owned by that party.

Different rules apply when separate property is mixed with community property, such as in the case of a separate asset partly funded with community property during marriage. Under the inception of title rule, which applies in Texas, the asset retains its separate nature, and thus belongs to the party who acquired it before the marriage. Only payments and interest made during the marriage are considered community property, and need to be shared with the other spouse. Under the time of vesting rule, the asset is considered community property because title to the property vests only when all installment payments have been made. Finally, under the pro rata rule, which applies in California, the payments made with community funds purchase a pro rata share of the title to the property.

Since the 1960s, in most states, spouses have been entitled to manage community property in the best interest of the community without the consent of the other party. Different rules apply on whether community property can be used to satisfy pecuniary obligations of the individual spouses. 
Generally, most community property states provide that liabilities incurred by one of the spouses prior to the marriage can only be satisfied by the separate property of that spouse, and thus the relevant creditors cannot advance claims to the community property of the couple. However, most community property states also provide that pecuniary obligations contracted by one of the parties prior to the marriage may be satisfied with the share of the community property belonging to that party. The only exception to this position is the state of California, which provides

that community property cannot be used to satisfy premarital liabilities. Different rules also apply in the case of pecuniary obligations contracted by the spouses during their union. Some community property states provide that community property can only be used to satisfy creditors' claims if both parties consented to the transaction. Other community property states, such as California and Louisiana, instead provide that community property can be used to settle pecuniary obligations contracted by individual spouses during the marriage. Finally, another set of states, including New Mexico, provides that creditors can advance claims only against a specific portion of the community property.

In the case of divorce or separation, most community property states adopt the equitable distribution method, which is generally used by separate property states, providing that the community property shall be equitably divided between the parties. Only a minority of community property states adopt a different approach, allocating an equal share of the community property to each spouse, in addition to the respective separate property.

In the event that the couple relocates from a community property state to a separate property state, upon divorce the relevant property is governed by the law of equitable distribution of that jurisdiction. On the other hand, in the case of relocation from a separate property state to a community property state, the tracing rules of the property apply. This means that the separate property belonging to each of the spouses in the original separate property state remains separate, in accordance with the rule that the original jurisdiction should determine the ultimate regime governing the property. 
The majority of states entitle spouses to freely convert, or transmute, community property into separate property and vice versa by signing a prenuptial agreement prior to the marriage or a written agreement after it; in a minority of states, even an oral agreement made after the marriage suffices.

Benedetta Faedi Duramy
Golden Gate University School of Law

See Also: Common Law Marriage; Divorce and Separation; Egalitarian Marriages; Inheritance; Prenuptial Agreements.

Further Readings
Bassett, William W., ed. California Community Property Law. San Francisco: Thomson/West Group, 2003.
Blumberg, Grace Ganz. Community Property in California. 6th ed. New York: Wolters Kluwer Law & Business, 2012.
Boele-Woelki, Katharina, Jo Miles, and Jens M. Scherpe. The Future of Family Property in Europe. Cambridge, UK: Intersentia, 2011.
Reppy, William A., ed. American Community Property Regimes. Durham, NC: School of Law, Duke University, 1993.
Salter, David, Charlotte Butruille-Cardew, and Stephen Grant. International Pre-Nuptial and Post-Nuptial Agreements. Bristol, UK: Jordan Publishing, 2011.
Singer, Joseph William. Property Law: Rules, Policies, and Practices. 5th ed. New York: Wolters Kluwer, 2010.

Companionate Marriage

The notion of companionate marriage was spearheaded by Judge Ben Lindsey and Wainwright Evans in 1927, when their book The Companionate Marriage sparked widespread controversy. The authors suggested that men and women ought to live together for a year prior to marriage without having children to see if they could get along. If not, the relationship would be easy to dissolve. If the trial period went well, the authors recommended that they marry and have children (if they desired). Proceeding in this cautionary manner would benefit the relationship's growth and development. Political, religious, and social outrage ensued as the authors were accused of promoting immorality, promiscuity, and the breakdown of marriage and the American family.

Some scholars believe that by the mid-20th century, marriage as a social institution began to become a more private institution. What was once a formal status regulated by social norms, public opinion, law, and religion became a private status governed by the heart. As a result, people began to view marriage not as a duty governed by society and patriarchal authority, but as a way to achieve a lifelong friendship characterized by equality and respect.

Judge Ben Lindsey was an American judge, social reformer, and coauthor of the controversial book The Companionate Marriage. The content of the book prompted a number of priests and civic leaders to accuse him of promoting immorality, promiscuity, and free love.

The paradigm shift of marriage perceived as more of a private, rather than a public, institution brought with it newfound expectations of what a healthy companionate marital friendship should look like. Couples began to believe and expect that they could find a "soul mate" with whom they could achieve intimacy through connecting on similar social, emotional, mental, spiritual, and physical levels. The research on marital quality appears to have grown out of this quest for intimacy in companionate marriages. Levels of positive bonds, positive and negative interactions, commitment, feeling trapped, divorce proneness, and marital satisfaction are a few of the measurable marital quality constructs that have been studied. Findings from this research indicate that high levels of commitment, positive interaction, and positive bonds are some of the more salient predictors of stable and satisfying marriages.

In sum, it is clear that people expect more from their marriage relationships than they did in the past. Contemporary marriage relationships are expected to meet multiple intimacy needs and to provide a context for personal growth and development. As a result, some scholars believe that the ideal of companionate marriage is gradually being replaced with a new cultural norm driven by individualism that could be termed the individualistic or disposable marriage.

Victor W. Harris
University of Florida

See Also: Arranged Marriage; Common Law Marriage; Cult of Domesticity; Divorce and Separation; Domestic Masculinity; Egalitarian Marriages.

Further Readings
Amato, P. R. "Tension Between Institutional and Individual Views of Marriage." Journal of Marriage and the Family, v.66/4 (2004).
Finch, Janet and Penny Summerfield. "Social Reconstruction and the Emergence of Companionate Marriage, 1945–59." In Marriage, Domestic Life, and Social Change, David Clark, ed. New York: Routledge, 2004.
Harris, V. W. Marriage Tips and Traps: 10 Secrets for Nurturing Your Marital Friendship. Plymouth, MI: Hayden-McNeil, 2010.
Hirsch, Jennifer S. and Holly Wardlow, eds. Modern Loves: The Anthropology of Romantic Courtship and Companionate Marriage. New York: Macmillan, 2006.

Conflict Theory

All societies and social groups experience various levels of conflict. Conflict can occur between individuals, between social groups, or within social groups. Conflict theory is one useful perspective for understanding how and why these disagreements occur.

Assumptions of Conflict Theory
Conflict theory focuses on differences in power between individuals or social groups. This theory emerges from philosophical perspectives developed by Niccolò Machiavelli, Thomas Hobbes, and

Karl Marx, and emphasizes the idea that people are inherently contentious when competing for resources and power. This theory gained popularity during the civil rights movement as people challenged the fact that white men in U.S. society had long held disproportionate power.

Conflict theory assumes that social interaction leads to conflict, and that conflict is an inevitable part of family relationships. Conflict, however, can be beneficial, especially when it spurs useful changes and the resolution of issues. The goal is not to completely prevent conflict, but to prevent conflicts from escalating to the point where members of the group are permanently harmed or feel that remaining in the group is against their self-interest. In the context of the family unit, unresolved conflict can lead to divorce, splitting of extended families into factions, and breaking off of communication between family members.

Conflict emerges over struggles for power, influence, and resources. According to conflict theory, limited resources lead to conflict. These resources may be tangible items such as money, food, television, or use of the family car. Resources, however, may also include abstract constructs such as love, affection, or attention. Members of a family may compete for access to these resources. In this view, there is always a scarcity of resources, and therefore there will be conflict because not all people can secure the resources they want. For example, in the traditional family framework, a husband and wife may argue over ways to spend the husband's salary (e.g., buying a set of golf clubs or an expensive purse), and children may compete for time and attention from their parents. The person or people with the ability to control resources are typically thought of as having the most power. In this context, power can be thought of as the ability to control one's circumstances or future life outcomes and/or the circumstances or life outcomes of others. 
Conflict theory also asserts that structural inequality may be a common source of conflict. Structural inequality may be defined as a difference in power, dependent upon the social role or status that one is assigned. In the traditional family structure, the husband or father is prescribed the highest degree of power and control, and has the ultimate authority over resources (e.g., finances). Second in the familial hierarchy is the wife/mother. This inequality can lead to conflict between men




and women. Last, the children are afforded a degree of power and control, and older children are often given power over younger children. This can lead to conflict because people have differing abilities to secure their desired resources.

Gender Roles and Conflict Theory
Gender roles are ways in which individuals are expected to act according to their gender, and these prescribed gender roles can lead to conflict because they shape inequality within the family unit. Traditional family structures were composed of a head of household (man/husband/father) and a homemaker (woman/wife/mother). Men were assigned the highest degree of power because they controlled the resources. However, the traditional family unit has morphed to include families where women may have equal control over resources because of their jobs, or sole control over the resources (as in female-headed households). Despite this change, there still remains inequality within the traditional family framework. Women often work outside of the home for additional income, but are often expected not to pursue a career. Women who are highly ambitious may be met with contention by their husbands. Husbands may feel that they are competing for their wives' time, and that a wife's primary job should be to take care of the family and not to pursue a career, leading to conflict within the relationship.

In her influential book The Second Shift, Arlie Hochschild describes how most working mothers continue to be responsible for the majority of housework, thus assuming a "second shift" after working outside the home during the day. These women become overworked and exhausted. In addition, working mothers still tend to earn less income than their husbands, despite the hours and labor they provide. The inequality in housework contribution and income may create tension and conflict between working parents, and in some cases leads to divorce. 
Shari Paige
David Frederick
Chapman University

See Also: Breadwinner-Homemaker Families; Breadwinners; Cohabitation; Coparenting; Custody


and Guardianship; Divorce and Separation; Domestic Masculinity; Dual-Income Couples/Dual-Earner Families; Egalitarian Marriages; Gender Roles; Hochschild, Arlie; Suburban Families.

Further Readings
Afifi, Tamara D. and Paul Schrodt. "Uncertainty and the Avoidance of the State of One's Family in Stepfamilies, Postdivorce Single-Parent Families, and First-Marriage Families." Human Communication Research, v.29 (2003).
Hochschild, Arlie Russell. The Second Shift: Working Families and the Revolution at Home. New York: Penguin, 2003.
Kaufman, Gayle. "Do Gender Role Attitudes Matter? Family Formation and Dissolution Among Traditional and Egalitarian Men and Women." Journal of Family Issues, v.21 (2000).
Witt, Judith LaBorde. "The Gendered Division of Labor in Parental Caretaking: Biology or Socialization." Journal of Women and Aging, v.6 (1994).

Constitution, U.S.

In U.S. society, ideas of politics and family are often inseparable. Politicians use the idea of the family to argue for policy or reform, to make laws or change laws. This has been true from the colonial era through the present day. When viewed through the lens of family, the founding era can be seen as a time when Americans defined republican principles in a way that rejected monarchical and aristocratic notions of family. These principles were embedded in the U.S. Constitution in two separate places. The first appears in Article I, Section 9, which reads, in part: "No Title of Nobility shall be granted by the United States." The second appears in Article III, Section 3: "The Congress shall have Power to declare the Punishment of Treason, but no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted." In order to understand these two sections of the Constitution, it is important to have a basic understanding of the rejection of hereditary power that was widely circulating in the era of the American Revolution.

In April 1775, soldiers fired the first shots of what was to become the American Revolutionary War.



The Second Continental Congress then took steps to engage in war with Great Britain. However, when justifying the use of military tactics in 1775, the Continental Congress clearly stated that the colonies did not seek independence. Instead, Congress cast the war in terms of self-defense, and stated its belief that the colonies could continue to exist under the traditional rule of Great Britain, including under the rule of a hereditary monarch. For the first year of the American Revolution, George Washington toasted the health of king and country.

Despite the fact that the 1775 Declaration of the Causes and Necessity of Taking up Arms stated that independence was not the goal, some of the colonists disagreed, beginning a year-long debate on the nature of monarchy and hereditary succession. The most important of the arguments against the English form of government came from Thomas Paine, in his widely published pamphlet Common Sense. In Common Sense, Paine rejected the English system, and asked fellow colonists to do the same. Using the language of natural rights inherent in the Enlightenment, Paine argued that hereditary succession was ludicrous because it elevated men to office not on merit, but solely on birth. He asked American colonists to redefine their conceptions about government and to reject older notions of permanent, family-based power. By July 1776, enough people agreed with Paine's assessment to support independence as a goal of the war.

While the Declaration of Independence does not list aristocracy or hereditary succession as reasons for independence, it is telling that the list of grievances contained within it is aimed at King George III, rather than Parliament. In the 1760s and early 1770s, colonists blamed Parliament for enacting laws that took away their rights and privileges as British subjects. By 1774, that changed as colonists began to posit that the king was at the center of a plot to strip colonists of their rights. 
When it came time to present their reasons for independence to the colonists and the world, the Continental Congress placed the blame squarely on the king. During this era of tumult, Americans used family metaphors to explain their actions. The 13 colonies were referred to as the “children” of “mother Britain” and the king was their “father.” Now the colony “children” had matured and were leaving home, rejecting continued parental rule. The images and language produced in the first years of

the war repeated this message. By the end of the war, American families in all their forms experienced similar rejection of patriarchal authority. Not only had Britain's colonists disobeyed their "father" or "mother" in forming the United States of America, but, when making decisions about their futures, biological children also disobeyed their fathers and mothers.

Many of the state governments that emerged during and immediately after the American Revolution rejected hereditary aristocracy in a number of different ways. Following the Revolution, states passed laws to abolish entails that kept property in a particular branch of a family for generations to come, and primogeniture that gave the exclusive right of inheritance to the first-born son. Virginia went as far as to enact laws dividing a father's estate equally among male and female children. In his later observations of the United States, Alexis de Tocqueville argued that this practice helped shift governmental principles in the United States to democratic principles, making aristocracy impossible.

Some state constitutions directly rejected titles of nobility and hereditary succession. Virginia's revolutionary constitution made this clear, stating that no man would be set above any others except through his work, and that none of the offices held by men in the Commonwealth of Virginia could be passed on to male heirs. Similarly, the 1780 Massachusetts Constitution stated that men would gain positions of power within the commonwealth solely through merit. Their earned titles could not be inherited nor passed down because the idea of an office passed on through the male line was absurd.

In society as in government, some Americans were watchful against any hint of the creation of an American nobility. One of the contentious debates in the early republic was over the Society of the Cincinnati, founded in 1783. The group was formed by officers of the Continental Army. 
Membership in the society passed to the oldest male descendants of the founders, but people throughout the United States denounced inheritable membership. The very notion was un-American because it held up some citizens over others solely because of their ancestry. Even a hint of power passed through the male family line was too much for some, who protested vehemently that the society opened the door to other forms of nobility, and could potentially lead



to the ruin of a republic based on civic virtue, and increasingly on individualism.

Although much of the language of the American Revolution became antinobility and antiaristocracy, not all Americans entirely rejected aristocracy. Instead, Americans like Thomas Jefferson redefined the position held by leading men as one of a natural, rather than hereditary, aristocracy. A natural aristocracy, they argued, allowed men to rise to positions of power and privilege through merit and virtue. These Americans argued that government should preserve the benefits of aristocracy, namely stability, without the dangers inherent in democracy. Without an aristocracy, some Americans feared that government would descend into anarchy, allowing a new tyrant or a new form of despotic government to arise.

By 1786 and 1787, some governmental leaders worried that their new country was experiencing an excess of democracy that needed to be contained. This was one of the impulses that led to the Constitutional Convention and the ratification of a new form of government by 1788. Whatever the motivation that brought men to Philadelphia, the government they created rejected the idea of family control and inherited power. In Article I of the Constitution, after describing the branches of government, the means of election to office, and the powers of Congress, among other things, the authors reach Section 9, which is a strange conglomeration. In part, Section 9 allows the eventual prohibition of the international slave trade, asserts the principle of habeas corpus, addresses trade among states, and then declares that no titles of nobility shall be granted by the United States, rejecting the handing down of privileges passed through male lines of succession. Privilege could not be inherited, nor could punishment. 
Article III, Section 3 emphatically states that children could not be held accountable for the treasonous behavior of their ancestors: “no Attainder of Treason shall work Corruption of Blood, or Forfeiture except during the Life of the Person attainted.” Despite these provisions, much of the debate over the Constitution centered on whether or not the government created was too aristocratic, and what types of men would hold office if the new government went into effect. Although government was not inherited, would not government fall into the hands of those with the most money and most



power? Would this not lead to the oppression of those of the middling sort, replacing a hereditary aristocracy with an aristocracy based on wealth and power? Alexander Hamilton tried to calm the critics' fears, pointing out that Article I, Section 9 prohibited nobility, and remarking that this very fact would keep the government in the hands of the people. Even without term limits, and with a limit to the number of representatives elected to the House, the proposed government was a republic, rejecting monarchy and rejecting hereditary succession. In the end, the state conventions ratified the Constitution, putting in place a new government that rejected the old European notions that power was inherited, rather than earned.

Although historians often separate out ideas about family from formal politics, these two sections of the Constitution show that ideas of family shape politics in profound ways. This new republic was a voluntary association of individuals led by representatives, voted on by full citizens. In theory, rank and birth did not matter. Ability and virtue marked the new American citizens. It would take long and sustained efforts to do away with other markers of rank and status like race and gender, but Americans rejected the idea that power passed through family lines. Particularly for African Americans and Native Americans, their birth as members of a disenfranchised race marked them as noncitizens, nonmembers of the new republic.

Sarah Swedberg
Colorado Mesa University

See Also: Inheritance; Social History of American Families: Colonial Era to 1776; Social History of American Families: 1777 to 1789; Social History of American Families: 1790 to 1850.

Further Readings
Avalon Project. Declaration of the Causes and Necessity of Taking Up Arms. http://avalon.law.yale.edu/18th_century/arms.asp (Accessed December 2013).
Center for Constitutional Studies. Source Documents. The Constitution of the United States. 
http://www.nhinet.org/ccs/docs.htm (Accessed December 2013).
Countryman, Edward, ed. What Did the Constitution Mean to Early Americans? Boston: Bedford/St. Martin's, 1999.



Davies, Wallace Evan. "The Society of the Cincinnati in New England, 1783–1800." William and Mary Quarterly, v.3–v.5 (1948).
de Tocqueville, Alexis. Democracy in America. Henry Reeve, trans. New York: Colonial Press, 1899.
Library of Congress. "The Federalist Papers." http://thomas.loc.gov/home/histdox/fedpapers.html (Accessed December 2013).
Maier, Pauline. From Resistance to Revolution: Colonial Radicals and the Development of American Opposition to Britain, 1765–1776. New York: Knopf, 1972.
Maier, Pauline. Ratification: The People Debate the Constitution, 1787–1788. New York: Simon & Schuster, 2010.
National Archives and Records Administration. "Declaration of Independence." http://www.archives.gov/exhibits/charters/declaration_transcript.html (Accessed December 2013).
Paine, Thomas. Common Sense. http://www.gutenberg.org/ebooks/147 (Accessed December 2013).
Wood, Gordon S. The Creation of the American Republic, 1776–1787. Chapel Hill, NC: University of North Carolina Press, 1969.

Constructionist and Poststructuralist Theories

Constructionist and poststructuralist theories are a set of ideas about language, meaning, individuals, and power that interrogate essentialized meanings, final representations, and fixed identity categories. Applied to the institution of the family, these theoretical approaches seek to make apparent its socially constructed nature by situating it within different historical, social, political, cultural, and economic contexts, and explicating the discursive fields within which some formulations of the family appear as more normal and dominant over others.

Constructionist Theories
Constructionism emerged as a challenge to the prevalent positivist assumptions about knowledge. Positivism holds that direct sensory perceptions are the primary source of data for the production

of knowledge, and that scientific and mathematical logic can discover the final truth. The publication of The Structure of Scientific Revolutions by Thomas Kuhn in 1962 inaugurated a critique of the objectivity and universality of scientific knowledge. Kuhn proposed the notion of a "paradigm," which he defined as a set of rules, methods, practices, and gatekeepers within which scientific knowledge was produced and validated. He argued that different scientific paradigms were incommensurable with each other (i.e., scientists from one paradigm did not have access to the methods of other paradigms). From this insight, it followed that scientific knowledge, as well as the objects that emerged as the focus of scientific study, were contextually bound. Kuhn's work thus sought to highlight the social contexts within which scientific studies were produced.

Constructionism is grounded in such wider postmodern critiques of the notion of absolute truths and objective scientific knowledge. Constructionism sees reality and knowledge as socially constituted; that is, contingent on historical, cultural, political, and economic contexts. Instead of seeking essential meanings, constructionist theorists maintain that meanings are culturally and historically specific, and emphasize the role of interpretation in the construction of truths. Constructionist theories therefore have an antiessentialist and antirepresentational character.

One of the key areas where the constructionist approach has proven most fruitful is in the understanding of identity categories. Constructionist theorists argue that identity categories based on race, gender, sexuality, nationality, ethnicity, and class are historically contingent, locally specific, and change over time. Accordingly, what it means to be a woman in one context or historical time period will not be the same in another context. There is therefore no pre-existent quality that these identities are mapped onto. 
This method of understanding identity categories has been a particularly useful mode of inquiry in feminist and gender studies because it shows the historical contingency of men's and women's roles in societies. Similarly, social constructionist theorists also argue that ethical standards or notions of justice that some groups assume to be universally applicable to all human beings should be interrogated because they are also time-bound and not




inalienable. Accordingly, assumptions about the universal desirability of specific conceptions and formulations of the family, with its attendant divisions of labor, can be challenged. However, constructionist perspectives have been criticized for declining to take any ethical or political stances. Their antiessentialist character poses problems for political movements, such as the feminist or human rights movements, that rely on the categories of woman, liberation, and rights, and prioritize specific moral and ethical claims over others. Constructionist theorists have responded to these criticisms by noting that they are more concerned with explicating how normalized positions and categories are produced and maintained than with taking ethical stances around them.

Poststructuralist Theories
Poststructuralism is the name given to theories advanced since the 1960s by a range of French and continental philosophers and theorists, including Jacques Derrida, Jacques Lacan, Michel Foucault, Gilles Deleuze, and Judith Butler, although these scholars have often rejected such a labeling of their work. Together, they have proposed theories of language, discourse, subjectivity, power, and performativity, which interrupt the desire to seek final and essential meanings and disrupt the normative ways of understanding the world through binary oppositions or structures.

The emergence of poststructuralist thought is often seen as a response to the structuralist movement. Structuralism assumed that individuals occupied distinct and structured positions, and that the relationships between these positions could be understood either in the form of a series (resemblances) or as structures (patterns). Accordingly, structuralists believed that human cultures followed specific pre-established underlying patterns. Poststructuralist theorists critiqued this assumption by drawing attention to the ways in which such patterns created hierarchies. 
Drawing on Ferdinand de Saussure’s conceptualization of language, they sought to discover innovative relations between terms, and argued against fixed meanings. This method has allowed poststructuralist theorists to produce new meanings and destabilize naturalized ones. In addition, poststructuralist theorists have built upon and expanded constructionist theories


by focusing on the ways in which power circulates within language communities and through language practices to constitute meanings and subjects.

Language, Discourse, and Subjectivity
Poststructuralist theorists view language as a communal practice, not as a reflection of a pre-existing reality. Drawing on Saussure's work, poststructuralist theorists understand language (or langue) as a system of classification of thoughts that is a self-contained whole. According to Saussure, there are no ideas prior to language, and it is language that brings ideas into being. To explicate how that is so, Saussure introduced the concept of the linguistic "sign," a two-sided entity that unites an idea (the signified) with a sound image (the signifier). The relationship between the signifier and that which is signified by it, however, is arbitrary. The meaning of a sign is established by its difference from other signs. Accordingly, Saussure argued that language produces conceptions of self, identities, and meanings. For Saussure, however, once established, these meanings tend to become fixed patterns.

Poststructuralist theorists take up these Saussurean insights but, following Jacques Derrida, see meaning as always deferred. Like Saussure, Derrida noted that the meaning of specific words depended on linguistic contexts, and was established through difference. This difference created a binary, oppositional set of meanings, which helped establish the meaning of signifiers through reciprocity. This meant that the assumption that language represented something real or natural, or that scientific language reveals pre-existent phenomena, was mistaken. Derrida goes on to note that the meanings that appear natural or fixed are only so within specific historical, social, and linguistic contexts. 
In a similar vein, poststructuralist theorists also break away from humanist conceptualizations of the individual as a rational and unique entity, and instead see individuals as "subjects," that is, products of different language practices or discursive fields. They note that individuals occupy different subject positions based on their context, and in doing so, also constitute these positions. For instance, a woman may occupy the subject position of mother in one context and scholar in another, making it difficult for the observer to exclusively identify her through either discursive field. In addition, her actions and behaviors in those subject positions constitute the


Constructionist and Poststructuralist Theories

discursive field itself; that is, they come to define what “motherhood” or “scholarship” entails. The decentering of the unified human subject then opens up space for the emergence of different kinds of “subjectivities,” which are historically contingent and change with context. Judith Butler has extended the analysis of subjects and identity by proposing the concept of “performativity.” Butler sees subjects as products of discourses, and argues that what one assumes to be essential identity categories are in effect tropes or figures produced by discourses or speech acts. Speech acts produce these identities, which are then taken up by individuals as they perform them. Performativity is thus the production of these discursive positions, which come to be viewed as ontological realities through repetition and recitation.

Power and Resistance
A consideration of power and resistance is critical to poststructuralist approaches, and Michel Foucault’s work has been foundational here. In Discipline and Punish (1977), Foucault argues for a conceptualization of power that is consensual rather than coercive. He notes that, while in premodern societies power and social authority were centered in the sovereign and force was exercised to elicit compliance, in modern societies power operates through disciplinary mechanisms; that is, individuals consent to regulate themselves according to the internalized norms and rules of society’s dominant discourses. Self-surveillance and self-regulation, rather than force, are therefore the mechanisms of social control in modern societies; discipline becomes the “technique” of power, and the body becomes the object and target of disciplinary power. Power is conceived not as something to be possessed but as something fluid, shifting, and circuitous. Foucault’s conceptualization of power has been taken up by scholars to examine the disciplinary effects of global policies, knowledge, and ideologies.
Several studies employ it as a lens through which to examine the effect of macrolevel discourses at the microlevel, or to understand the reproduction of particular kinds of interests and imaginations. For instance, poststructuralist scholars have inquired into the ways in which disciplinary power consolidates patriarchy, colonialism, and neoliberalism. Specifically, feminist poststructuralists have theorized the family as the institution that maintains

the oppression of women by sanctioning men’s control of female sexuality, procreation, and economic power through everyday practices, the division of labor, and appeals to other societal institutions, such as religious, educational, and legal institutions. Foucault notes that resistance to power should be seen as another form of the exercise of power. Since power is diffuse, he notes that resistance to power must also be diffused across the social system and incorporated into the everyday. In The History of Sexuality (1978), Foucault acknowledges that ideologies that exercise disciplinary power can also be taken up, for alternate ends, by the very individuals they seem to dominate. Citing the discourse of homosexuality in the 19th century, Foucault explicates how a discourse that might seek to dominate also provides the materials with which that domination can be resisted, leading to the formation of a “reverse” discourse. Disciplinary power thus operates in more complicated ways than simple domination and control. The notion of reverse discourse has been taken up by poststructuralist theorists to critique studies that exclusively emphasize the determining power of societal institutions. They note that resistance is entwined with power. Together, constructionist and poststructuralist theories provide a perspective on knowledge that makes space for multiple, even contradictory, positions to be held as truths. These approaches emphasize the situatedness and constructedness of knowledge, and view it as an enterprise entangled with the exercise of power and resistance. Applied to the theme of the “American family,” these theories seek to destabilize any attempt to define it as a stable ontological reality. They point to the many different conceptions of what it means to be an “American” and to the complex and complicated history of what constitutes a family.

Shenila S.
Khoja-Moolji
Teachers College, Columbia University

See Also: Feminism; Feminist Theory; Third Wave Feminism.

Further Readings
Butler, J., P. Osborne, and L. Segal. “Gender as Performance: An Interview With Judith Butler.” Radical Philosophy, v.67 (1994).

de Saussure, F. Course in General Linguistics [1916]. Wade Baskin, trans. New York: McGraw-Hill, 1966.
Foucault, Michel. Discipline and Punish: The Birth of the Prison. New York: Vintage Books, 1977.
Foucault, Michel. The History of Sexuality. Vol. 1. New York: Vintage Books, 1978.
Kuhn, Thomas. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.

Contraception: IUDs

Contraception has played a large role in the American family. The ability to plan pregnancies and space children apart has been a major concern for women and their partners because of issues surrounding pregnancy and birth, not to mention the financial and social issues of child-rearing. According to the Guttmacher Institute, 49 percent of all pregnancies in the United States each year are unplanned, either because they are mistimed or because they are unwanted, even among women who eventually plan to have children. Women therefore need access to long-term reversible contraceptive methods so that they can make decisions with their partners about family size and birth timing. In the 1920s, the intrauterine device (IUD) became the most effective tool by which women could gain control of their reproductive health. The device prevents conception while it is in place, and allows conception once it is removed.

Definitions
IUDs are contraceptive devices that are inserted into the uterus and prevent pregnancy by releasing copper or hormones that interfere with the ability of a sperm to join with an egg. They are 99.5 percent effective and are not subject to user error like other contraceptive methods; a woman does not have to do anything once the IUD is in place. IUDs must be put in place and removed by a qualified medical provider. These methods are long acting (lasting from five to 12 years without needing replacement) and reversible (studies indicate that it takes about three months after removal of an IUD for a woman to become pregnant). IUDs can be used by all women who wish to have an effective method of contraception, including women who are breastfeeding,


are HIV/AIDS positive, have just had an abortion, or have never given birth. IUDs are also the most effective form of emergency contraception available, even more so than the “morning after” pill. When used within five days of unprotected sexual intercourse, the pregnancy rate is 0.09 percent. One caveat with the IUD is the importance of determining that a woman is free from sexually transmitted infection (STI) before insertion; inserting an IUD without treating an STI can lead to infertility. In the United States, the most common types of IUDs are Paragard and Mirena. Paragard, the gold standard of copper IUDs, is a T-shaped device that encases a plastic center with copper. It is a nonhormonal contraceptive and works by increasing copper ions in the uterus, which act as a natural spermicide. Mirena is also a T-shaped device, but it releases a low dose of progestogen and is considered an intrauterine system because it combines a device with hormones. The common side effects of using Mirena are amenorrhea and spontaneous device expulsion. Both types are extremely effective as contraception, although neither protects against STIs.

Timeline of IUD Development and Use
IUDs were first developed in the early 1900s, and have been produced in various shapes and with various chemicals. Many early devices used copper, and copper IUDs have become the standard because copper interferes with sperm mobility and/or fertilization of the egg. In 1929, the first IUD, made of silk sutures, was created and used in Germany. Because this IUD used copper-contaminated silver wiring, the device was quite effective. From the 1930s to the 1960s, other IUDs were developed and refined. By the 1970s, the current T-shaped copper IUD had been developed and marketed. In addition, Mirena was developed in 1976. These IUDs are still available and used worldwide.
Usage of the IUD in the United States
IUDs account for only about 5.5 percent of all contraceptives used by women in the United States. Part of this low acceptance rate is due to the upfront cost of inserting the IUD, and part of it is that health care providers and patients hold a variety of misconceptions about the method’s effectiveness and impact on a woman’s health. In terms of cost, the IUD can be purchased for about $350 plus the


cost of an office visit. Once the IUD is inserted and a woman is comfortable with it, there is no further expense, except to have it removed and/or replaced. In the 1980s, one study showed that the Dalkon Shield IUD increased a woman’s risk of pelvic inflammatory disease. Approximately seven women died from the disease, and IUDs suffered a blow to their reputation from which they have never recovered. As a consequence, most women are unaware of the availability and effectiveness of these devices, and approximately 24 percent of women’s health care providers have never inserted one. In addition, many health care providers report not feeling knowledgeable enough to discuss these options with patients. However, studies indicate that when a woman knows about the efficacy of the IUD, she is likely to select that method of contraception; about 25 percent of health care providers note that women who are especially literate about reproductive health options will ask for an IUD. Finally, health care providers who are knowledgeable about the IUD are more likely to offer that option to female patients.

Caren J. Frost
University of Utah
Rachel L. Wright
Eastern Washington University

See Also: Contraception and the Sexual Revolution; Family Planning; Prenatal Care and Pregnancy.

Further Readings
American College of Obstetricians and Gynecologists. “Long-Acting Reversible Contraception.” Obstetrics & Gynecology, v.118 (2011).
Casey, P. M. and S. Pruthi. “The Latest Contraceptive Options: What You Must Know.” Journal of Family Practice, v.57 (2008).
Guttmacher Institute. “Unintended Pregnancy in the United States” (October 2013). http://www.guttmacher.org/pubs/FB-Unintended-Pregnancy-US.html (Accessed December 2013).
Rubin, S. E. and I. Winrob. “Urban Female Family Medicine Patients’ Perceptions About Intrauterine Contraception.” Journal of Women’s Health, v.19/4 (2010).
Sivin, I. and I. Batar. “State-of-the-Art of Nonhormonal Methods of Contraception: III.
Intrauterine Devices.” European Journal of Contraception and Reproductive Health Care, v.15 (2010).

Stoddard, A., C. McNicholas, and J. F. Peipert. “Efficacy and Safety of Long-Acting Reversible Contraception.” Drugs, v.71/8 (2011).

Contraception: Morning-After Pills

While most sexually active individuals rely on a method of contraception at some point, consistent contraceptive use is affected by factors such as dislike of a method’s side effects, lack of information, and cost. Estimates suggest that adolescents and young adults are the most likely to be sporadic contraceptive users. The United States also has a high rate of unintended pregnancy, with nearly half of all pregnancies classified as unintended. Emergency contraception (EC), commonly referred to as the “morning-after pill,” is an option available to reduce the risk of pregnancy following unprotected sexual intercourse or contraceptive failure. Several forms of it are available, and numerous factors influence its use. The morning-after pill has been shown to be highly cost effective because it significantly reduces financial and social costs by preventing unintended pregnancy. The use of morning-after pills is considered safe for nearly all women, and is shown to significantly decrease the risk of pregnancy. For instance, normally 80 out of 1,000 women having unprotected sex in the middle of their cycles will become pregnant. If all 80 of those women take EC, only about 20 will actually become pregnant. This 75 percent effectiveness is different from saying that women who take EC have only a 25 percent chance of becoming pregnant. Morning-after pills delay or prevent ovulation, and they may additionally inhibit fertilization; however, they do not prevent implantation or affect an established pregnancy. Despite what its name suggests, the morning-after pill can be taken up to 120 hours following unprotected intercourse or method failure. However, it is most effective when taken within the first 24 hours.

Types of Morning-After Pills
The Yuzpe method, developed in the 1970s, was the earliest form of the morning-after pill. This method consisted of two doses of ethinylestradiol and




levonorgestrel (LNG). The first dose was given within 72 hours of unprotected intercourse or method failure, and the second was administered 12 hours later. Research showed that a single dose of LNG was more effective at preventing pregnancy, with fewer side effects, than the two-dose method. In 2010, the Food and Drug Administration (FDA) approved the use of ulipristal acetate (UPA), a single-dose antiprogesterone. UPA is more effective than the LNG option, especially if taken within 72 hours of unprotected intercourse or method failure. Misperceptions and lack of knowledge may continue to hinder use of the morning-after pill. Accurate knowledge of morning-after pills remains low for both men and women. Studies of knowledge, attitudes, and behaviors around the morning-after pill among numerous populations revealed that over 80 percent of people believed that EC pills acted as an abortifacient, that is, that they ended an established pregnancy. An examination of college students’ knowledge about and perceptions of the morning-after pill reported that while 94 percent had heard of EC before, only 5 percent of respondents could identify the correct time period for using it.

Impacts on and Barriers to Pill Usage
Many factors affect the use of morning-after pills. The information that a woman receives about EC, and her attitudes toward it, are influenced by her health care provider’s attitudes and perceptions. The majority of female college students surveyed about their EC knowledge and use reported that they would be more likely to use the morning-after pill if they had heard about it from their provider. However, health care providers may be hesitant to discuss the morning-after pill with patients. Although the belief is not substantiated, some providers think that providing an advance supply of the morning-after pill may encourage risky sexual behavior, result in a decrease in contraceptive use, and promote reliance on the morning-after pill. Currently, little attention is given to males’ experiences accessing the morning-after pill. A study investigating the perceptions of and barriers to male access to EC reports that over three-fourths of both male and female respondents believe that men should always be able to purchase EC. However, half of male respondents did not know where to obtain morning-after pills, and approximately 20 percent of males were unaware of EC. Because studies report that EC discussions between patients and health care providers are limited to conversations with female patients, the morning-after pill continues to be viewed as a female issue.

Systemic barriers to the morning-after pill persist. The constitutional right to contraception was established with the 1965 Supreme Court decision in Griswold v. Connecticut, which recognized access to contraception as a fundamental component of individual privacy. Nonetheless, hospitals are not required to offer the morning-after pill or other forms of EC to victims of sexual assault, and individual states continue to allow insurance plans, health care providers, and pharmacists to refuse the coverage, prescribing, or dispensing of EC due to moral or religious objection.


Recent Developments
Increased access to EC followed the 2006 Food and Drug Administration (FDA) decision to make one brand of morning-after pills available without a prescription. Interestingly, this increased access to the morning-after pill failed to decrease either unintended pregnancy or abortion rates to any significant degree in the United States. In 2011, the FDA was prepared to approve unrestricted access to the morning-after pill, but Department of Health and Human Services Secretary Kathleen Sebelius overruled this decision. Concerns about increased access to the morning-after pill included its unsupervised use among young girls and a decrease in parental rights. Following a federal court order in June 2013, unrestricted sale of one brand of the morning-after pill went into effect, providing over-the-counter access to all individuals.

Rachel L. Wright
Eastern Washington University
Caren J. Frost
University of Utah

See Also: Abortion; Adolescent Pregnancy; Contraception: IUDs; Family Planning; Gender Roles.

Further Readings
Committee on Adolescence. “Emergency Contraception.” Pediatrics, v.130 (2012).
Dalby, Jessica, Ronni Hayon, Elizabeth Paddock, and Sarina Shrager. “Emergency Contraception: An Underutilized Resource.” Journal of Family Practice, v.61 (2012).


Gemzell-Danielsson, Kristina, Cecilia Berger, and P. G. L. Lalitkumar. “Emergency Contraception: Mechanisms of Action.” Contraception, v.87 (2013).
Raymond, Elizabeth G., James Trussell, and C. Polis. “Population Effect of Increased Access to Emergency Contraceptive Pills.” Obstetrics & Gynecology, v.109 (2007).
Trussell, James, Charlotte Ellertson, Felicia Stewart, Elizabeth G. Raymond, and Tara Shochet. “The Role of Emergency Contraception.” American Journal of Obstetrics & Gynecology, v.190 (2004).

Contraception and the Sexual Revolution

The sexual revolution is linked in the popular imagination to the invention of oral contraception, or “the pill.” The birth control pill was the first form of contraceptive that could be used separately from the act of sex, making it unobtrusive and liberating. It was also the first form of contraception that lay completely within the control of women. A woman could use the pill without the cooperation, or even the knowledge, of her sexual partner. From the start, therefore, the pill created anxiety, particularly among conservatives, who argued that if women (and men) were freed from the consequences of sex, namely pregnancy, they would engage in it more often, including outside the bonds of marriage. Because the pill appeared on the market in 1960, around the period associated with the beginning of the sexual revolution, it was often understood as a principal cause of the loosening of societal restrictions and attitudes about sexual activity. A causal relationship between the introduction of the birth control pill and the sexual revolution of the 1960s is not, however, supported by scholarship.

The Sexual Revolution
While the term sexual revolution has existed since at least 1910, it came into popular use during the 1960s to identify what were perceived as new and dangerous changes in sexual attitudes that were becoming mainstream. In the early 1960s, these changes were associated with developments such as the popularity of Playboy magazine (founded in

1953 by Hugh Hefner) and the approval of oral contraception (in 1960). In just over a decade, the term came to encompass the 1967 Summer of Love; a new cultural openness about sexual pleasure, exemplified by the 1972 publication of The Joy of Sex, which spent much of the next two years on the New York Times bestseller list; people marrying later; an increased acceptance of premarital sex; and cultural movements such as free love, gay liberation, and, to a certain extent, women’s liberation. The term sexual revolution linked all of these factors (and more) together as a cohesive social shift, one that received sweeping cultural attention. The idea of a sexual revolution captured the popular imagination, but historians of sexuality tend to dispute the idea that this revolution was truly a phenomenon of the 1960s. In the groundbreaking book Intimate Matters: A History of Sexuality in America, John D’Emilio and Estelle B. Freedman point out that many of the behaviors associated with the so-called sexual revolution had long existed in American life. More recently, Beth Bailey’s Sex in the Heartland points out that the term sexual revolution linked many social trends that, in fact, had little to do with each other. She states that the term revolution played up both the danger and the coherence of these changes. For instance, she observes that the concept of free love shares little in common with the relationships of long-term monogamous couples cohabitating outside of marriage. Bailey also suggests that linking minor events, such as a fashion for long hair on men, with long-term trends, such as an increased acceptance of and openness about premarital sex, increased the importance of those factors that might otherwise have been regarded as insignificant. It is not that these events did not happen, or that the middle of the 20th century did not see a shift in how Americans understood and lived their sexual lives.
It is simply that the changes were often the result of larger and longer trends (greater equality and independence for women), or were isolated incidents that had limited impact on the lives of most Americans (the Summer of Love). Emphasizing those longer trends underscores the extent to which the changes in sexual mores impacted all Americans, not simply those on the radical edges of society. While scholars dispute whether a period of increased relaxation of sexual standards and changing sexual mores is best described as a truly




revolutionary social movement, contemporary commentators saw the sexual revolution as real and dangerous. While conservative commentators linked the sexual revolution to a number of social factors, they focused their attention in part on the new mode of contraception. This form of contraception was the first to completely separate the act of sexual intercourse from the prevention of pregnancy. As a result of “the pill,” there was no need to interrupt sexual activity in order to employ birth control, as was the case with the most popular previous methods of contraception. It was also the first form of contraception that was left completely up to women to use as they saw fit.

A Brief History of the Pill
The Food and Drug Administration approved oral contraception, or the birth control pill, in 1960. This contraceptive was nearly 100 percent effective when correctly used. Women immediately began to avail themselves of the advantages it offered. By 1964, the pill was the most widely used method of birth control in the United States, with 6.5 million married women taking it. While the pill promised (and provided) a transformative means of liberation for women, most of the public debate around oral contraception did not focus on what it would mean for women. Advocates of oral contraception spoke less about its impact on the American woman, and more about its potential for limiting the global population explosion. Its advocates, including doctors who were working to make it available to women, did not necessarily believe in premarital sex, and were quick to say so. They argued that women who were going to have sex outside of marriage would do so, reliable contraception or not. The pill promised to decrease out-of-wedlock births without increasing the amount of sex engaged in outside of marriage. Rather, advocates hoped that the pill would bolster the American family.
Because it would free married women from the constant threat of pregnancy, it would allow couples to control their household economies, both by limiting the number of children they had and by allowing women to take advantage of economic opportunities. Limiting family size would allow more families to enter the middle class, with all of the possibilities for consumption and education that would provide. The planned family was thus


presented as the happy family. In addition, couples could have planned families without forgoing the sexual pleasures inherent in companionate marriage. In other words, sex was no longer linked to reproduction, so couples could have both an active sex life and the size of family they wanted. If the pill promised liberation, its early prescribers and advocates saw it as providing that liberation within the bonds of marriage. Indeed, oral contraception made an immense difference within marriage. With an easy and effective method of family planning, married women were able to pursue educational and professional opportunities that would previously have been precluded by childbearing and child-rearing. Not only did such changes increase the upward mobility of American families, but they also allowed women to enter the workforce and professions without sacrificing marriage and family. This gave them more economic autonomy within families. The pill empowered women to control their fertility in situations where their husbands or other sexual partners were unwilling to take responsibility for birth control. The impact of birth control pills on marriage was reflected in popular culture, particularly in Loretta Lynn’s controversial 1975 hit “The Pill,” a song that told of the disappointment that a woman met when her marriage tied her (but not her husband) to endless child-rearing. She dreams of her sexual liberation, reflecting that “feeling good comes easy now I’ve got the pill.” Feeling good does not, however, liberate her from her marriage. Rather, it liberates her within her marriage, as the closing couplet, “Daddy don’t you worry none, cause mama’s got the pill,” promises that it is sex with her husband that she desires. Nevertheless, conservative commentators worried that severing sex from reproduction (and putting that control in women’s hands) would lead to increased promiscuity. In 1966, the magazine U.S.
News and World Report suggested that the pill would bring about not just promiscuity, but also “sexual anarchy.” The article drew examples of this anarchy from the use of contraception by Roman Catholic couples, its presence on college campuses, and the suggestion that cities were exploring the distribution of the pill to their welfare recipients. Without the fear of pregnancy, the magazine expressed concern, “mating” would become “casual and random, as among animals.” In a 1968 article in Reader’s Digest, Pearl S. Buck suggested


that the impact of the pill could easily be as great and as devastating as that of the nuclear bomb. This fear, linking a reliable method of birth control to sexual immorality, reflected a belief that if sex lacked consequences, it would no longer be able to be controlled by institutions such as marriage. Specifically, if women could control their fertility and did not need to fear pregnancy, they would become sexually free and without restraint. Conversely, some worried that women would trick men into believing that they were on the pill, and trap them into marriage by becoming pregnant. In fact, although the pill became immediately popular, it was not necessarily immediately and widely available. Many physicians would not prescribe oral contraception to unmarried women. As a result, unmarried women would access the pill by borrowing engagement rings and telling their doctors that they were “preparing for their marriages.” In many states, it was illegal to prescribe contraception to unmarried women, and in Massachusetts and Connecticut, it was illegal for all women. In 1965, the Supreme Court struck down birth control bans for married individuals in the case Griswold v. Connecticut. In 1972, the Supreme Court decision in Eisenstadt v. Baird gave unmarried people the right to contraception. Throughout the 1960s, however, the pill was not necessarily legally available to unmarried women. In places where it was illegal, some unmarried women gained access to the pill, but in its early years, most users of the pill were married. Indeed, sexual revolution aside, many single women in the 1960s did not express revolutionary attitudes toward sex. While the marriage age was rising and sex outside of marriage was becoming incrementally more common, even young people did not necessarily approve of sex outside of marriage or explicitly premarital relationships.
According to historian Elaine Tyler May, a 1964 poll of 1,900 female students at the University of Kansas revealed that 91 percent of the women believed that it was wrong to have sex with a man to whom one was not engaged. May points out that it is likely that more than 9 percent of the college women were sexually active, and that many probably felt guilty about their behavior. Even as that guilt eased, not everyone agreed that the pill caused the increasing prevalence of premarital sex. Rather, in 1968, Science News reported that the increase in out-of-wedlock sex was not due to the

pill, pointing out that reliable contraception had been available long before the pill and arguing that contraception was not a major factor in young people’s decisions about sex. Similarly, Ira Reiss, a sociologist of sexual behavior, argued that cultural and religious changes caused more of the increase in premarital sex than the pill did, and also pointed out that the increase in premarital sex was much smaller than popularly imagined. According to Reiss, in 1968, 60 percent of female college graduates had never had sex, only a slight decrease from before 1960. In addition, a number of studies in the late 1960s and early 1970s suggested that the majority of sexually active teenagers did not have access to birth control, and used it erratically, if at all. In America and the Pill, Elaine Tyler May states that while the pill and the sexual revolution were related, the pill did not cause the sexual revolution. Certainly, the availability of a reliable form of birth control made it more possible for women to engage in sex outside of marriage. Women, however, were slow to change their behavior, and it is likely that the pill only enabled behavioral changes that other social and cultural shifts had already begun to make acceptable.

Samira Mehta
Emory University

See Also: Birth Control Pills; Contraception: IUDs; Family Planning; Feminism; Prenatal Care and Pregnancy.

Further Readings
D’Emilio, John and Estelle B. Freedman. Intimate Matters: A History of Sexuality in America. 2nd ed. Chicago: University of Chicago Press, 1998.
Marks, Lara. Sexual Chemistry: A History of the Contraceptive Pill. New Haven, CT: Yale University Press, 2010.
May, Elaine Tyler. America and the Pill: A History of Promise, Peril, and Liberation. New York: Basic Books, 2010.
May, Elaine Tyler. Homeward Bound: American Families in the Cold War Era. Rev. ed. New York: Basic Books, 2008.
Watkins, Elizabeth Siegel. On the Pill: A Social History of Oral Contraceptives, 1950–1970.
Baltimore, MD: Johns Hopkins University Press, 1998.



Cooperative Extension Service

The Cooperative Extension Service (CES) is the unit of the Department of Agriculture responsible for educational outreach in collaboration with the U.S. land-grant university system. With cooperative funding from federal, state, and county governments, the CES was created to share practical, research-based information with the general population. The initial audience was farm families, many of whom were disadvantaged by geographic isolation, poverty, and a lack of formal education. Later, as rural populations shrank, CES programming expanded to address the needs of urban and suburban people. Today, offerings vary by region, but most states offer programming in the areas of home and family, food, 4-H and

Members of 4-H engage in hands-on learning activities in the areas of science, healthy living, and food security. Here some 4-H boys learn about irrigation methods.


youth, community development, the environment, plant sciences, and agriculture. The CES's work is planned with local input to meet locally identified needs. Because strong families contribute to the strength and economic stability of the country, the CES's goal is to help families help themselves, using the resources of science, the government, and the community to solve problems. To that end, its family life education programming focuses on the whole family.

Federal Mandate
An early American democratic ideal was to have educated citizens. The Morrill Land-Grant College Act of 1862 provided each state with land for universities that were to include instruction in agriculture, home economics, and engineering, making practical higher education available to the general public. In 1887, the Hatch Act allocated federal funding for agricultural experiment stations at every land-grant college, and the Morrill Act of 1890 provided funding for states in the south to establish segregated land-grant colleges. The Smith-Lever Act of 1914 created a national cooperative extension service, affiliated with the land-grant colleges, to share research findings about agriculture, home economics, and related subjects with people not enrolled in the colleges. The Civil Rights Act of 1964 led to increased enrollment of African Americans in the original land-grant universities, eventually ended segregation of extension services in the southern states, and helped rectify racially based salary discrimination within the service. In 1994, the Elementary and Secondary Reauthorization Act made the 29 Native American Tribal Colleges part of the land-grant system, with a directive to the original land-grant universities to collaborate in developing extension programs addressing the needs of Native Americans.

Home Economics
In the early 20th century, few American farms had electricity or indoor plumbing.
Many farm women endured hard physical labor, often miles from medical facilities, schools, and stores. Unlike urban women, they did not have convenient access to bakeries, commercial laundries, or ready-made clothing. Recognizing the importance of farm families in contributing to the nation’s welfare, Congressman Asbury Lever from South Carolina


presented his vision for an extension service, and stressed the importance of home economics for improving rural living conditions. Subsequent federal funding for extension home demonstration agents helped launch home economics as a career for women. The National Extension Homemakers Council was formed in 1936, with 30,000 home demonstration clubs and 500,000 members. During World Wars I and II and the Great Depression, home demonstration agents taught gardening, food preservation, and financial management. The new field of nutrition was part of the Parent Education Movement from 1920 to 1945, through which rural families learned how to overcome child malnutrition with improved diets. CES nutrition guidelines helped keep some families off public relief during the Great Depression. The Expanded Food and Nutrition Education Program of the 1970s through the 1980s provided nutrition education to low-income families.

4-H
Practical agricultural clubs for rural children predated the CES. A 1902 program organized by A. B. Graham in Ohio is considered the beginning of the 4-H program, although the 4-H name (which stands for head, heart, hands, and health) was not used until 1918. By 1924, the CES had formally nationalized the 4-H clubs. The goal of 4-H programming is to promote the development of self-reliant, responsible citizens and future community leaders. Its hands-on projects provide young people with opportunities to contribute to their communities. During World War I, the Boys' Working Reserve eased the farm labor shortage created when able-bodied men left to fight in Europe. Rural and urban children were taught to create home gardens and how to preserve excess food. 4-H projects during World War II included sewing, victory gardens, and recycling scrap material. 4-H launched its International Farm Youth Exchange program in 1948.
By 2007, over 80 countries had developed programs similar to 4-H, and 4-H had over 6 million participants throughout the United States, making it the country’s largest youth development organization. Contemporary 4-H programming focuses on the sciences, preparing a new generation to cope with the challenges of the 21st century.

Community Development
Community development projects benefit entire towns, and beginning in the 1920s, a major CES initiative was to bring electricity to farms. Other projects included grading roads, installing radio receivers, and extending telephone lines. County agents negotiated with railroad companies to build loading platforms at rural stations and to set lower freight rates for hay shipments to central receiving destinations. They also negotiated lower prices for cooperative purchases of cement, feed corn, and nursery stock. In the south, the CES promoted mosquito eradication to lower the risk of malaria. Community development projects in the 1950s and 1960s included paving roads, installing water systems, and establishing public libraries. The Rural Area Development program facilitated loans for building hospitals, small factories, and recreation areas.

Contemporary Extension
In 1994, the CES and the Cooperative State Research Service merged into the Cooperative State Research, Education, and Extension Service (CSREES). A 2008 Farm Bill amendment to the 1994 Department of Agriculture Reorganization Act led to the creation of the National Institute of Food and Agriculture (NIFA) in 2009, which replaced CSREES. As of 2014, NIFA's Families, Youth and Communities Institute cooperates with the land-grant university system and other organizations to provide research-based programs that help develop strong families, young leaders, and resilient communities to ensure the prosperity of the country.

Betty J. Glass
University of Nevada, Reno

See Also: Family Farms; Family Life Education; Home Economics; Rural Families.

Further Readings
Christy, Ralph D. and Lionel Williamson, eds. A Century of Service: Land-Grant Colleges and Universities, 1890–1990. New Brunswick, NJ: Transaction Publishers, 1992.
Kelsey, Lincoln D. and Cannon C. Hearne. Cooperative Extension Work. 3rd ed. Ithaca, NY: Comstock Publishing Associates, 1963.
National Institute of Food and Agriculture. http://www.csrees.usda.gov/index.html (Accessed May 2013).

Rasmussen, Wayne D. Taking the University to the People. Ames: Iowa State University Press, 1989.

Coparenting

Parents have long been considered essential to their children's success through their roles as caregivers. Scholars recognize that successful caregiving relationships are not limited to the traditional arrangement of mother and father. Caregiving relationships can be established with homosexual parents, grandparents, and stepparents, for example. This relationship between two or more primary caregivers highlights the importance of coparenting processes and how they influence both adults and children.

Studies of coparenting are relatively new. In the early 1970s, anthropologists and family therapists began using the term to describe the relationship between caregivers. It was soon adopted by those studying parenting after divorce. In fact, family therapist Constance Ahrons was the first to develop several measures to assess the nature of the coparental relationship among divorced families. Today, scholars use the term to identify all forms of caregiving relationships in which more than one adult has parental responsibility for a child or children, requiring all caregivers to work together for the good of the children.

Coparenting has often been identified as a complex set of interactions that address the ways in which caregivers work together for the common benefit of the children, resolving differences and making joint decisions for their well-being. Models of coparenting typically include elements of conflict management, cooperative decision making, and the parent–child relationship. Conflict management reflects the degree to which disagreements emerge about caregiving, and how such disagreements are resolved. Effective coparents are able to manage emotional reactivity and maintain a constructive approach to resolving disagreement.
Cooperative decision making goes beyond conflict management and entails the ability of the coparenting dyad to find common ground and agree on basic decisions affecting the children, such as which school to attend, what religious training to engage in, and


who assumes the primary responsibility for medical care. Finally, the parent–child relationship refers to the ways in which both caregivers engage with the child(ren). Strong coparents tend not to interfere with or restrict each other's relationship with the child, and do not involve the child as a conduit for communication with the other parent. Although these elements of coparenting are important, scholars have typically focused on an overall assessment of successful coparenting, rather than specifically addressing the various elements.

Scholars who examine behaviors that best represent successful coparenting also often assess supportive and undermining behaviors. Support reflects the degree to which parents are able to back each other up with regard to the children, remain positive about the other parent, and provide the other parent with important information and feedback about the child(ren). On the other hand, undermining reflects the degree to which parents hide things from each other, ask the children to keep secrets, and suggest to the other parent that his or her involvement or behavior is unwanted or undesirable. Parents often show varying degrees of support and undermining. Effective coparental interactions are more supportive and less undermining; ineffective coparental interactions are the opposite. Managing levels of support and undermining tends to enhance the parents' or caregivers' relationship, and this in turn is linked with better child outcomes.

Research since the 1990s shows a strong link between coparenting and child outcomes. The foundation of coparental relationships begins when patterns are established that affect future parenting. Moreover, research finds that coparenting is a primary source of support for the caregivers, and such support is important for romantic relationships, job success, and child outcomes. Still other research shows that collaborative coparenting is linked with positive outcomes for children.
In fact, some scholars believe that such collaboration may be more important to successful child outcomes than parenting alone. Specifically, studies show that when parents are more cooperative and supportive of each other, children and adolescents are reported to have fewer externalizing and internalizing behaviors. When coparents are less collaborative and more conflictual, children tend to fare more poorly.


Although coparenting is an important dynamic of all caregiver relationships, it may be particularly relevant in certain family structures. For example, in divorced families, coparenting occurs between former spouses; in stepfamilies, coparenting occurs between the resident parent and stepparent, as well as between stepparent and nonresident parent; and in multiple-generation households, coparenting may occur between grandparents and their adult children who are raising children. Such structural variables challenge caregivers to develop and maintain cooperative coparental relationships. These structures provide fertile ground for conflict, and for the involvement of children as conduits of communication, because of the ambiguity of roles and family boundaries. If adult members in these coparental relationships cannot successfully make collaborative decisions on behalf of the child(ren), interactions can be clouded with strong negative feelings and growing animosity.

Because coparenting relationship research remains in its infancy, the literature contains many gaps in understanding. Scholars commonly measure coparenting relationships by asking one or both parents about their coparenting experiences, and then treating the couple's responses as independent reports. Newer methodologies allow scholars to handle data in which two people (e.g., two parents) are reporting on the same phenomena. As of 2014, these methods have been used in only a handful of studies of the coparental relationship.

Understanding the coparental relationship is essential to designing effective parenting interventions. Parent education programs have a long history, and some programs show improved parenting skills, diminished child behavior problems, and more positive outcomes for children. More recent programs targeting divorcing parents incorporate content addressing coparenting, and some show favorable results in improving this relationship.
In general, parenting programs address the relationship between caregivers as essential to effective parenting. Skills taught to enhance caregivers' ability to work together include conflict management, communication, and general parenting skills. Coparental relationships and associated behaviors are important in family life, especially in a context of increasingly involved fathers, working

parents, and diverse family structures. As the understanding of coparenting grows, so do the implications for future research, parenting interventions, and clinical practice.

Daniel J. Puhlman
Florida State University

See Also: Custody and Guardianship; Parent Education; Parenting; Parenting Styles; Stepparenting.

Further Readings
Feinberg, Mark E. "The Internal Structure and Ecological Context of Coparenting: A Framework for Research and Intervention." Parenting: Science and Practice, v.3 (2003).
McHale, James P. and Kristin M. Lindahl, eds. Coparenting: A Conceptual and Clinical Examination of Family Systems. Washington, DC: American Psychological Association, 2011.
McHale, James P., Maureen R. Waller, and Jessica Pearson. "Coparenting Interventions for Fragile Families: What Do We Know and Where Do We Need to Go Next?" Family Process, v.51 (2012).
Teubert, Daniela and Martin Pinquart. "The Association Between Coparenting and Child Adjustment: A Meta-Analysis." Parenting: Science and Practice, v.10 (2010).

Council on Contemporary Families

The Council on Contemporary Families (CCF) is a national nonprofit organization with the mission to bring new research and clinical expertise to public conversation about family issues. The council was founded in 1996, and is based at the University of Miami.

The Early Years: 1995 to 2000
The years leading up to the founding of the CCF were turbulent for American families and communities. Wives and mothers were entering the workforce in unprecedented numbers, partly in response to new aspirations, partly in response to the economic hardships of the 1980s, when real wages



and job security dramatically fell. Deindustrialization left blighted urban centers in its wake. Rates of unwed motherhood were rising, and although divorce rates had peaked by 1981, growing numbers of commentators were blaming urban decay and rising crime rates on absent fathers or working mothers. In 1992, Vice President Dan Quayle created a media storm by attributing urban disorder to the “bad example” set by television character Murphy Brown, who was portrayed on the show as a single mother. The media were bombarded with press releases from organizations claiming that divorce, single motherhood, working mothers, and the use of child care centers were the cause of everything from child abuse to crime, poverty, and urban decay. As the culture wars continued, ideology on both sides often masqueraded as social science. No easily accessible and legitimate sources of information were available to reporters to help them evaluate the claims thrown at them. This situation frustrated social scientists and therapists alike, stimulating a small group of them to come together in 1995 and 1996 to discuss how to help the press and public find more accurate information about America’s changing families. The prime movers behind what was to become the CCF were Constance Ahrons, a professor of sociology at the University of Southern California, and Marianne Walters, a feminist family therapist and director of the Washington, D.C., based Family Therapy Practice Center. Walters provided startup funds to establish the organization. In 1996, the small group of founders held meetings in living rooms in the San Francisco Bay area. 
Founding members included sociology professors Barrie Thorne, Arlene Skolnick, and Evelyn Nakano Glenn from the University of California, Berkeley; sociology professors Judith Stacey and Carole Joffe, who were both at the University of California, Davis; psychology professors Phil Cowan and Carolyn Cowan from the University of California, Berkeley, and Robert-Jay Green from the California School of Professional Psychology; social work professor Donna Franklin from the University of Chicago; and Lillian Rubin, a sociologist and psychologist with a private practice in San Francisco. After a few meetings, they were joined by Stephanie Coontz, a professor of history at Evergreen State College. The group's initial goal was to develop a small


membership cadre of senior family scholars from the fields of sociology, psychology, history, and law, as well as clinicians with expertise in understanding issues facing contemporary families. The hope was that these people could develop links with journalists and provide them with more accurate information on family issues than reporters were receiving from ideologically oriented think tanks. When the group began to discuss its name, however, a disagreement developed between those who wanted to advocate on behalf of nontraditional families by answering the claims of conservative commentators and proposing policies to support alternative family arrangements, and those who believed that the CCF should provide evidence-based information about new research and best-practice findings for all families, countering distortions of evidence but remaining nonpolitical and nonpartisan. The debate was resolved when the group voted to become the Council on Contemporary Families, rather than the Council for Contemporary Families. Participants in these early meetings decided to create a national organization, inviting prominent family scholars and practitioners across the country to join. The organization made its public debut with a conference titled Reframing the Politics of Family Values. Held at the Washington Jewish Community Center, November 11–14, 1997, and attended by 70 participants, the program pioneered a format that continues to the present day. Instead of long formal presentations by invited speakers and panels that leave little time for discussion, most panels feature three or four speakers, each limited to 12-minute presentations. These sessions last for approximately two hours, leaving plenty of time for discussion among panelists and with the audience. Subsequent conferences in the early years were held at Fordham and New York universities. 
Each conference since 1997 has included workshops to help CCF members translate their academic findings and therapeutic experience into clear language and to craft their messages in a format that is more accessible to public discussion. Occasionally, speakers from the media discuss what they need from academics and researchers in order to cover family-related issues.

In the early years, the CCF functioned as a national organization through "sweat equity." There was a post office box, but neither a paid staff nor an office. Between the yearly conferences, board meetings


were held in the Cowans’ living room in Berkeley, and occasionally in other board members’ homes. Cochairs in the early years included Marianne Walters, Constance Ahrons, Stephen Mintz, John Gillis, and Ellen Pulleyblank Coffey. Other CCF activities of the CCF in the early years consisted of establishing contacts with reporters and demonstrating that the CCF could be counted on to provide a balanced and fair summary of debates and emerging research topics among social scientists and family practitioners. At first, the CCF focused on trying to counter inaccurate information or the misuse of social science in media reports. However, the organization soon found that this was not an effective technique for informing the public about research or clinical evidence. Once a story was out, few reporters seemed interested in focusing on the errors it contained. Gradually, the CCF began to appear in front of the news cycle by issuing periodic briefing papers or fact sheets on new research or emerging topics of interest to the press. The CCF does not have a monolithic position on family issues, and continues to remain nonpartisan. Its working mantra has become, in the words of CCF’s Director of Research and Public Education Stephanie Coontz, that “the right research question in today’s world is not what kind of family do we wish people lived in but what do we know about how to help every family draw on its potential strengths and counter its distinctive vulnerabilities.” Twenty-First-Century Organizational Growth Through University Affiliation The CCF’s transition to an organization with more permanency was accomplished under the leadership of Barbara Risman, who served as cochair with Coontz from 2000 until 2006. Risman negotiated space and modest support from the sociology department at North Carolina State University, where she was on the faculty. With a graduate student worker and a university address, the CCF began to grow. 
In 2006, the CCF moved to the University of Illinois at Chicago (UIC) when Risman joined that faculty as head of the Sociology Department. She was appointed CCF’s executive director, and the UIC provided generous administrative support and events management. Most board meetings and conferences were held on the UIC campus between 2006 and 2012. Under the leadership of historian Coontz and sociology professor Virginia Rutter, of Framingham

State University, the CCF’s public media and outreach program moved beyond answering media queries to regularly producing and disseminating numerous publications. At the same time, the CCF embarked on a more extensive membership campaign, recruiting junior family researchers and graduate students, as well as younger family practitioners. Soon, the CCF’s membership increased to more than 100, and it continued to grow into the 21st century. A student internship program was launched to help train both undergraduate and graduate students as family scholars. The CCF continues to hold annual conferences, covering topics such as work-family balance, changing gender relations in families, father involvement, families and the justice system, and families and youth. Each year, psychologist Joshua Coleman collaborates with Coontz to create an annual round-up of new or underreported research findings and clinical experiences, which are issued before each conference under the title Unconventional Wisdom. The organization’s conference on work-family balance was featured as a special section titled “Mother Load: Why Can’t America Have a Family-Friendly Workplace?” in American Prospect magazine in March 2007. In 2012, at the suggestion of CCF board member and psychology professor Etiony Aldorando (University of Miami), that school offered generous administrative and technological support, as well as a graduate assistant. The CCF moved to the University of Miami, and Aldorando was appointed executive director. The CCF has continued to grow and flourish there under his leadership. The first conference at the University of Miami in 2013 focused on immigration and families. The second annual conference explored how technologies are changing relationships and family life. Going Forward: Making a Real Difference While the CCF’s organizational capacity and annual conferences continued to grow, so did the media program. 
By 2012 and 2013, the CCF released a briefing paper or symposium on new research nearly every month. These were extensively covered with 104 media hits, 82 mentions of the CCF, and hundreds of mentions in wire service stories in that year. CCF members also published 19 columns and op-ed pieces in this same period. The major outlets for the CCF’s research and activities included CNN.com, the New York Times, National Public




Radio (NPR), Time magazine, USA Today, and the Washington Times. In March 2014, a briefing paper on cohabitation, authored by sociology professor Arielle Kuperberg (University of North Carolina, Greensboro), with commentaries by sociologists Sharon Sassler and Kristi Williams, economist Evelyn Lehrer, and historian Stephanie Coontz, was covered by NPR, the LiveScience Web site, Time, the Christian Science Monitor, and the Today show.

In 2014, the CCF joined forces with the online magazine The Society Pages to publish briefing papers and symposia there, as well as disseminate them to the press. The organization also began to work with nonprofits and private companies that wanted to raise awareness about the diversity of American families. In addition, the CCF was asked to consult with the influential youth media outlet MTV on a new antibias campaign the channel was developing.

Barbara J. Risman
University of Illinois, Chicago
Carolyn Cowan
Philip Cowan
University of California, Berkeley

See Also: American Association for Marriage and Family Therapy; American Family Association; American Family Therapy Academy; Family Research Council; National Council on Family Relations.

Further Readings
Council on Contemporary Families. http://www.contemporaryfamilies.org (Accessed December 2013).
Coontz, Stephanie. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books, 1993.
Risman, Barbara J., ed. Families as They Really Are. New York: W. W. Norton, 2010.

Courtship

Courtship and romantic dating in the United States are intertwined with the concepts of sexual intimacy, gender, race, class, and family life. Across the centuries, cultural changes, economic transitions from agrarian to wage labor, and geographic


transitions from rural to urban have all been catalysts for significant changes in courtship practices.

Early Settlement
Native American tribes engaged in a wealth of diversity in sexual expression and intimate relationships. Many tribes accepted polygamy, homosexuality, transsexuals, and the equality of women. Dissolving a pair bond was not stigmatized, and was easily accomplished. European settlers, in contrast, imposed a restricted view of sexuality with a strong association between sex and procreation.

Early on, courtship for European settlers in the colonies was informal and precipitated by the couple. Young adults chose their mates, emphasizing candor, sincerity, and the ability to provide an economically stable home life. Many future partners grew up together; long courtships and chaperones were rare. Home and commerce were intertwined, and gender roles were interdependent, with relative equality between the sexes. Sexual passion was supposed to be contained; still, the premarital conception rate was as high as 30 percent, and was not negatively sanctioned as long as the couple married before the child was born. Parents supervised courting activities in the home and community.

There is almost no documentation of same-sex romantic relationships during this period. Same-sex desire, seen as unnatural and sinful, was punished by the community; the idea of a homosexual identity had not yet emerged. Because communities were so tightly interwoven, it was difficult to even imagine a separate life as a same-sex couple.

Courtship practices involved many race and class variations. African slaves retained practices of polygamy and premarital intercourse, although as the ratio of men to women evened out, slaves shifted to stable monogamous relationships. Ideas about who was kin were expansive, and community ties were strong. Interracial relationships were harshly sanctioned, and antimiscegenation laws were passed.
Europeans portrayed the sexuality and relationships of both Native and African peoples as aggressive and savage, which justified their enslavement and the eradication of their cultures.

Nineteenth Century
Major social changes had an enormous effect on courtship during this era. The connection between sex and procreation began to erode just slightly,


as some heterosexual couples learned to control their fertility, and an emphasis on passionate love developed. African American couples were allowed to marry with the advent of emancipation, yet the marriage of Chinese immigrants was sharply curtailed. Heterosexual gender roles became more rigid, and there was an increasing emphasis on separate spheres for men and women, especially in urban areas. As urban areas grew, the expression of same-sex desire became a possibility for the first time, although it remained hidden from mainstream society.

Economics fueled many of these changes. Earlier reliance on tightly knit communities shifted to reliance on the nuclear family. Men worked in the market economy, and women were relegated to the home. For middle-class whites, the ideology of separate spheres emphasized differences between men and women while romantic love flourished. Women looked for a man who could be a good provider, and men looked for women of virtue and an angelic nature. Ultimately, this ideology of separate spheres was reserved for wealthier families who were supported by the domestic and factory labor of working-class and poor women, slaves, indentured servants, and immigrants.

Although romantic love emphasized passion, sexuality was something to be controlled. Women became the keepers of virtue, who had to tame men's passions. Intercourse was reserved for marriage, and the premarital conception rate declined. Poor white women, slaves, and nonwhite women, in contrast, were viewed as loose and morally inferior.

The mid-19th century marked early references to same-sex romance. Women's passionate friendships were encouraged, because of the value placed on the female domestic sphere. Scholars believe that these friendships included sexual interaction, although there are few historical records of such activity.
There is evidence that as men and women moved into cities and into the West, the expression of same-sex desire became a possibility, as the close scrutiny of the family was absent. Same-sex relationships were documented among soldiers, cowboys, and working-class individuals in cities.

On the East Coast, European immigrants brought with them a history of tightly knit communities and heterosexual marriage customs that included parental oversight and marriage within one's ethnic group. In the West, once exclusion laws eased,

some Chinese men employed matchmakers who sent back home for a bride to create an auspicious match. Native American men gave gifts to a romantic partner's parents, and partners exchanged tokens symbolizing their ability to care for each other. Spanish customs included public courting at fiestas, open expression of sensuality, and women's rights of inheritance and property. Courtship customs among African slaves included displays of men's verbal prowess and wit, acceptance of sexual interaction, jumping the broom, and the importance of love and attraction. Mothers emphasized to daughters the importance of delaying childbearing, reflecting the horrible reality of having one's children owned by a white master.

Emancipation brought many changes to these customs. Former slaves eagerly married, chastity before marriage was prized, and parental consent and supervision were emphasized. Middle-class blacks emulated white customs as a way to gain respectability and status. Black couples rejected patriarchy in favor of a strong emphasis on women's equality, intelligence, and education. Still, white backlash emphasized the control of black sexuality; intermarriage was prohibited, social segregation abounded, fear of black male sexuality burgeoned, and exploitation of black female sexuality continued with impunity.

Middle-class whites were relatively free to choose a mate, and although parents kept a watchful eye, couples were allowed private time together in the home. New customs included the exchange of rings, formal engagement and wedding announcements, and white wedding dresses as symbols of purity. In contrast, courting among white urban working-class couples was conducted in public. As young people moved to cities for work, they were freed from the scrutiny of family. In urban areas, they spent time together in parks, dance halls, and cabarets.

Twentieth Century
This era was marked by deep shifts in ideas about marriage and sexuality.
The link between sex and procreation completely eroded in the late 20th century with the ability to control fertility through effective and affordable birth control. Earlier, to varying degrees, women continued to demand their rights, including education, property, divorce, and suffrage; and their entrance into the paid labor force in great



numbers shifted the balance of male-female power. Gender roles still emphasized male and female differences, although they were more egalitarian than in the past. By the mid-20th century, there was more acceptance of female sexuality outside of marriage, and sexual intercourse during engagement was accepted by many. Still, women were supposed to remain pure for marriage, and "loose" women were blamed for men's sexual transgressions. Profound changes were also occurring in same-sex relationships. The opportunity for same-sex relationships arose as cities and the market economy grew. However, same-sex desire was still cast as deviant, and these relationships remained largely hidden. Spaces for men to develop relationships in secret arose in bathhouses, drag bars, clubs, and resorts. Rooming houses provided a private space for women to develop same-sex relationships. White heterosexual courtship became more formal for the wealthy, with elaborate rituals of formal introduction, gentlemen callers, chaperones, and supervised interaction. These customs were not available for working-class couples, who courted in public places and developed the system of dating and going out to dance halls and amusement parks as a way to get to know one another better. As courtship increasingly took place in the public sphere, wealthier youth also began engaging in the practice of dating. Dating arose in concert with the creation of adolescence and mass culture, increases in high school and college attendance, population shifts from rural to urban, and the advent of automobiles and movies. Beth Bailey characterizes this shift as going "from front porch to back seat." Dating was linked to a girl's or boy's popularity, while courtship was a search for a mature future mate. Eventually, the stage of "going steady" became one of the gradual steps between casual dating and engagement, in which both partners agreed to see each other exclusively.
Dating activities were not, however, equally available to all because they required money and access to entertainment. Poor black and white rural youth instead often met at church and community functions. World War II interrupted these patterns. College enrollment declined, dating became less formal, and courtship was put on hold as men entered the military and women entered factories. These shifts also opened space for gay and lesbian romantic partnerships as large groups of men and women


were brought together. This continued after the war with the development of gay/lesbian communities, bars, clubs, and friendship networks. The sexuality of blacks was still demonized, and the African American family was viewed as pathological. Miscegenation laws were enforced, and lynchings occurred, when black men dared to interact with white women. At the end of the war, the first successful challenge to laws banning interracial marriage came in California, in part as a response to Japanese war brides. The postwar period was a demographic anomaly, with a reduction in divorce, younger age at first marriage, and high fertility. During the 1950s, the cult of feminine domesticity reemerged for white middle-class women. Courtship patterns reflected separate gender roles, with an emphasis on the importance of dating during junior and senior high school, and new rituals for going steady (e.g., a boy entrusting his girlfriend with his class ring). For college women, being engaged by their senior year was a commonly stated goal. This emphasis on traditional heterosexual marriage belied two realities: that these relationships were not as happy as purported, and these trends failed to capture the realities of poor, working-class, and racial/ethnic families. Further advances in reproductive technologies and the women's movement shifted patterns once again in the 1960s. The sexual revolution paved the way for significant increases in female sexual expression; by the turn of the century, over 80 percent of women had engaged in intercourse by age 20. Egalitarian relationships were prized, rates of female college attendance soared, and emerging adults delayed marriage until their mid-20s or later. Dating was informal, and cohabitation emerged as a new component of the courtship system, with more than half of young adults cohabiting before age 30. Internet technologies and online matchmaking services expanded the pool of potential partners, and changed interaction in unforeseen ways.
Same-sex romantic relationships underwent significant changes as well. In the 1950s, scholars documented the widespread extent of homosexual behavior, and stable gay communities and subcultures began developing. Gays and lesbians were still stigmatized, and they became scapegoats in the backlash of the Cold War. With the advent of the gay rights movement of the 1970s, same-sex relationships came out in the open, and partners were able to express their love and affection in


ways unimaginable in earlier times. By the end of the century, the debate over same-sex marriage was in full swing, and in the early 21st century, several states began to grant marriage rights to gays and lesbians. Ultimately, courtship and romantic partnering across U.S. history are marked by both continuity and change. Heterosexual courtship changed as parental control loosened; sites changed from the home to the public sphere; dating required access to money; women gained rights; and sex, love, and procreation became disconnected. Same-sex couples gained social respect and the ability to form legal relationships. Sexual expression became a critical component of adult happiness. Courtship and marriage across racial and ethnic lines became both legally and socially accepted. Miscegenation laws were overturned, and interracial dating and marriage became common. However, there are clear continuities. American youth have always enjoyed relative autonomy in their mate choice, and love and sexual affection have been consistently important. This emphasis on passion and romance will likely remain the centerpiece of courtship and romantic partnering well into the 21st century.

Sally Lloyd
Miami University

See Also: African American Families; Birth Control; Cohabitation; Common Law Marriage; Dating; Feminism; Gender Roles; Hooking Up; Interracial Marriage; Miscegenation; Multiracial Families; Same-Sex Marriage; Weddings.

Further Readings
Bailey, Beth L. From Front Porch to Back Seat: Courtship in Twentieth-Century America. Baltimore, MD: Johns Hopkins University Press, 1988.
Coontz, Stephanie. Marriage, a History: From Obedience to Intimacy or How Love Conquered Marriage. New York: Viking, 2005.
D'Emilio, John, and Estelle Freedman. Intimate Matters: A History of Sexuality in America. New York: Harper & Row, 1988.
Ogolsky, Brian, Sally Lloyd, and Rodney Cate. "The History of Romantic Partnering." In The Developmental Course of Romantic Relationships.
Thousand Oaks, CA: Sage, 2013.

Covenant Marriage
In some states today, engaged and already-married couples can select a covenant marriage, a state-recognized form of marriage that mandates premarital counseling and limits grounds for divorce. In 1997, Louisiana became the first state to adopt covenant marriage, and Arizona and Arkansas also provide the option. As of 2014, only a few hundred couples per year in these states elect covenant marriage.

What Is a Covenant Marriage?
While state laws vary, a covenant marriage is marked by a number of features that distinguish it from other forms of marriage. Prior to the marriage, a couple must receive counseling from a member of the clergy; a marriage educator approved by the officiant; or a licensed counselor, therapist, or psychologist. Premarital counseling emphasizes the lifelong nature of the commitment, and may require couples to agree to seek counseling should their marriage falter. After completing counseling and demonstrating that they understand the nature and legal burden of a covenant marriage, the couple signs a notarized declaration of intent to enter into a covenant marriage. A declaration of intent to contract a covenant marriage must be made on the marriage license and appear on the marriage certificate. An already-married couple can convert their marriage into a covenant marriage by undergoing a similar process, though not all states require them to receive counseling about the nature of covenant marriage. Covenant marriage laws severely limit the grounds for separation and divorce.
While state laws vary, legal separation and divorce are generally permissible only if a spouse was impotent at the time of marriage, committed adultery prior to or during the marriage, committed a felony, has been sentenced to death or imprisonment, has physically or sexually abused the other or a child, has endangered the life of the other person or a dependent, has made life intolerable, has a long-term drinking problem, or if the partners have been living apart for a lengthy period of time without reconciliation. Additionally, divorce may be granted if one spouse is living in an institution because of incurable insanity, though the divorcing partner must provide for the other's care. Covenant marriage laws also shape the details of divorce, often limiting the issues about which




spouses can sue each other to contracts; property; nullification, separation, and divorce; and child and spousal support.

Justification for Covenant Marriage
Covenant marriage is part of a larger marriage promotion movement that stresses the social good of lifelong intact marriages. More broadly, this movement encourages those who have had children together to marry, and discourages single parenthood. Advocates often cite research that indicates that, when compared to children reared in stable intact two-parent families, children in both single-parent and stepfamilies are more likely to face poor educational, behavioral, emotional, mental health, and economic outcomes. Additionally, the marriage promotion movement notes the high economic cost of divorce and nonmarriage, especially for women and children. Finally, advocates point to the high economic cost that society pays for divorce and nonmarriage, including the costs of enforcing child custody and increased use of social welfare services such as state health insurance for children. Indirect costs, such as lost productivity at work due to stress, also contribute to the social cost of divorce. The solution, they propose, is strengthening the social contract of marriage. Advocates of covenant marriage argue that it will lower these social costs by encouraging healthy, stable, lifelong marriages, first by coaching people about marriage before they enter it, and second by disallowing no-fault divorce, which they argue makes divorce too easy for couples and disincentivizes efforts to strengthen a weak marriage. Covenant marriage proponents thus do not simply advocate that any marriage is better than nonmarriage or divorce, but argue that strong marriages are preferable to weak ones, cohabitation, and divorce. For this reason, covenant marriage laws still allow for divorce in situations where marital conflict is likely to be so damaging to a family that divorce remains preferable, such as in cases of abuse.
Outcomes for Covenant Marriages
In states where covenant marriage is now an option, few couples (no more than 1 percent) select it. As outlined in the law, covenant marriages are not explicitly religious, yet most of those who enter into them have high levels of religiosity, give attention to preparation for marriage, and report a high


commitment to the ideal of marriage. Because the first covenant marriage laws took effect in 1997, and relatively few people have taken advantage of them, data on the long-term success of the law is inconclusive. However, early indications suggest that covenant-married couples are likelier to reach their early anniversaries, though findings about marriages beyond the first four years are contradictory. For social scientists, the question is whether this is a result of covenant marriage laws, or if couples who select covenant marriages are more likely than others to be strongly committed to marriage from the start.

Rebecca Barrett-Fox
Arkansas State University

See Also: "Family Values"; Divorce and Religion; Divorce and Separation; Evangelicals; Family Development Theory; Family Mediation/Divorce Mediation; Healthy Marriage Initiative; No-Fault Divorce; Prenuptial Agreements; Single-Parent Families.

Further Readings
Baker, Elizabeth H., Laura A. Sanchez, Steven L. Nock, and James D. Wright. "Covenant Marriage and the Sanctification of Gendered Marital Roles." Family Journal, v.18 (2010).
Brotherson, Sean E. and William C. Duncan. "Rebinding the Ties That Bind: Government Efforts to Preserve and Promote Marriage." Family Relations, v.53 (2004).
Drewianka, Scott. "How Will Reforms of Marital Institutions Influence Marital Commitment? A Theoretical Analysis." Review of Economics of the Household, v.2 (2004).
Folberg, Jay, Ann Milne, and Peter Salem. Divorce and Family Mediation: Models, Techniques, and Applications. New York: Guilford Press, 2004.

Credit Cards
Credit has become an economic reality for a vast majority of American families. As the costs of maintaining a household continue to increase, families are turning to consumer credit to obtain what they need and want when it is necessary or convenient. Credit


means obtaining goods or services ahead of paying for them. Typically, this is accomplished by means of a credit card, usually issued by a bank or another financial institution, which immediately pays the creditor and allows the debtor to pay the bank or financial institution for the purchase at a later date, often with interest added and penalties for late payment. Though it is common in today's economy, credit did not become popular for the average consumer until the middle of the 20th century.

A Brief History
American families have not always used consumer credit as it is known today. In the early years of the United States, purchasing items on credit or borrowing were not seen as desirable financial strategies. Additionally, the legal policy of the time did not allow institutions to lend money to individuals. Borrowing on credit was reserved for businesses that had assets, making them better able to repay any loans that had been negotiated. It was not until later that a series of events prompted significant political and social change in the approach to consumer lending. Since the dawn of civilization, humans have bartered or borrowed to obtain what they need to survive. Before currency, bartering was the common method of obtaining goods. For example, at harvest time, a farmer would acquire household goods, such as fabric for clothing, by trading the portion of crops that he grew in excess of what he needed to feed himself and his family. Even after countries developed common currencies, bartering was still used as a method for conducting business between groups where money was scarce or there was no common currency. Early settlers to North America bartered with the Native Americans using supplies brought from England. As local communities grew, they began to establish local economies. In the 19th century, America's national economy was booming, and cities like New York were rapidly expanding.
Immigrants were moving into urban and rural areas, seeking opportunities in a new country. In cities, residents turned to neighborhood stores to provide the goods that they needed, and these stores became the foundation for the burgeoning retail economy. To meet consumer demand, small businesses required financial capital to obtain sufficient stock, and thus commercial lending through banking institutions became a common practice.

However, lending was exclusively for business use; individual lending was still not allowed. In the retail trade, though, informal lending structures arose to help consumers. Store owners would occasionally act as a form of lending institution by selling goods to consumers on credit. As part of the transaction, consumers agreed to pay the store via installments, or by making a full payment of the balance at a later time. This practice was more common in rural areas, where farming was the main source of income. Farmers, unable to pay for goods during growing seasons, would buy supplies on credit, and pay the balance after the harvest when they were paid. Though helpful to the customer, sales on credit were difficult for store owners because they still needed money to operate their businesses. Additionally, the responsibility to recover the outstanding accounts belonged to the store owners, which made more work for them. Therefore, store owners were hesitant to allow purchases on credit, unless they had a trusting relationship with the customer and were confident in the customer's ability to pay the "tab." By the 20th century, the American Industrial Revolution had ignited urban growth by drawing families from rural areas looking for work that was more consistent than agriculture. With the United States entering World War I, working in a factory mass-producing goods provided a higher standard of living than many Americans had previously known. With a steady paycheck, households were now better able to afford more of the goods and services produced. The demand for big-ticket items, such as automobiles and houses, was growing. However, to buy these items, families needed access to larger amounts of cash. Increased production, along with increased demand for durable goods, led to a whole new political and commercial view of consumer lending. Lending money to consumers could now be a potential business.
In the years during and following World War II, banks and government officials saw the potential for profit in lending money to families for automobiles and homes, especially as soldiers were returning home and were ready to return to civilian life. With factories producing goods at a rate never seen before in peacetime, families were becoming active consumers. The manufacturing capacity of factories that helped the United States and its allies win World War II was converted back



to consumer goods by the 1950s, and households across America were ready to buy. Home ownership became the American dream, and fueled the rapid growth of suburbia. Houses, although affordable given rising incomes, were still too expensive for the typical family to buy with their savings. However, with easily attainable credit, families could secure a mortgage to buy a house with a belief that they were building financial security. In exchange for a home, families made a down payment and monthly mortgage payments composed of principal and interest to the bank, which technically owned the house until the family paid off the balance. Banks and government agencies became willing to lend money to families and individuals because the interest on the loans meant more operating capital for them. With the lure of easy credit, borrowing became the way for families to make large purchases without having to save up the total cost beforehand. Consumers could buy what they wanted in exchange for a down payment, business owners received full payment for the item, and the lending institution made a profit from the interest.

Becoming a Cashless Society
In the 21st century, consumers still take advantage of credit to pay for goods and services, much like they did in the 1950s. However, the lending landscape has changed. Credit that was once the responsibility of shop owners has been taken over by third-party financial lending institutions. Cash used to be a concrete form of payment, and was the cornerstone of face-to-face sales transactions. Nowadays, money is just as likely to be represented by numbers on a digital screen as it is in the form of cash. Purchases of goods or services no longer need to be conducted face to face. Rather, transactions can be made instantly from anywhere in the world using charge cards, or as they are better known, debit or credit cards.
Early in the 20th century, department stores started giving customers access to lines of credit if they could demonstrate the ability to pay for purchases by an agreed-upon time in the near future. This process was streamlined by offering eligible customers store charge cards. Items could be purchased without payment by presenting the charge card (initially a small metal plate with the customer’s signature on one side, and a number stamped on the other), and a bill would be issued at a later date.


The charge plate could only be used at the store that issued it, and it spared individuals the inconvenience of having to carry money while shopping. This development changed how families shopped. As store charge cards became more popular, bankers discovered the potential market for providing charge cards that could be used at more than one business. Thus, the Diners Club card was one of the first cards to be issued by a third-party lender. Members were charged an annual membership fee for the convenience of having the card, and businesses were charged a separate rate to accept the card at their establishments. This arrangement signaled a trend in consumerism where a lender, other than a business owner or a bank, could make a profit from consumer purchases. Banks received not only the annual fee from cardholders, but also the interest on all purchases made using the card. Interest on credit card purchases tended to be higher than interest on traditional bank loans. By the 1970s, credit card use in consumer purchases had significantly increased. A key Supreme Court ruling, referred to as the Marquette ruling (1978), influenced how banks approached the issuance of credit cards. First, banks could charge interest on loans at the maximum rate set by the state in which the bank was located or chartered. This meant that a bank could be headquartered in a state with higher interest caps and still charge high interest to customers in other states. A second outcome of this decision was that states started modifying interest caps to help keep banks from moving to other states. Finally, banks were allowed to solicit credit card customers with bulk mailings. By the end of the decade, banks had more leeway with interest rates and had the ability to market their credit cards to more potential customers. Along with the easing of federal regulations regarding consumer credit came an increased number of credit-card holders.
Credit cards quickly became a major method of purchasing goods and services. In the 1980s, overall consumer spending was on the rise. By 1990, it averaged over $51,000 per year per household. By the early 1990s, credit purchases totaled over $400 billion, with households averaging $395 in credit card purchases per month. The economic stability of the decade had laid the groundwork for consumer borrowing habits, both through loans and credit cards, for future decades.


Credit cards have become important parts of family purchasing habits, and they offer a limited sense of financial security. Mortgages, auto loans, student loans, and the cost of raising children are stretching family budgets to near unsustainable limits. For example, conservative estimates indicate that raising one child through age 18 costs a family around $234,000. This estimate does not include the cost of tuition, if parents are able to help their child pay for a college education. For many families, the cost of living has outpaced earnings, and credit cards provide a method of receiving necessary goods and services when cash is limited. When families experience unintended events, such as unemployment or illness, credit lines act as a temporary financial safety net. However, repayment of the debt becomes the sole responsibility of the cardholder. As a result, more families find themselves faced with large amounts of credit card debt, along with mortgages and auto loans. Credit card debt is not just for working-class families. In the early 1990s, credit-card companies began sending offers to a wide variety of potential customers, including higher-risk customers, such as low-income families and college students. Some cardholders paid off credit balances every month, whereas others made smaller monthly payments on the balance. Credit-card companies made billions on fees and interest from customers who carried a monthly balance, so future customers who were likely to fit that profile were actively recruited. If customers missed a payment, they were penalized with an additional percentage added to their balance, and interest rates were raised. Credit lines were also raised to entice customers to make more purchases. Consequently, by the mid-1990s, families with credit cards had an average balance of $2,700, which was up from $570 only a few years earlier.
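The compounding at work in a carried balance can be sketched with simple arithmetic. In the illustration below, only the $2,700 average balance comes from the period discussed; the 18 percent annual rate, the flat $100 monthly payment, and the monthly-compounding model are illustrative assumptions, not historical figures.

```python
def months_to_pay_off(balance, apr, payment):
    """Months until a revolving balance reaches zero, with interest
    charged on the remaining balance each month. Simplified model:
    no new purchases, no fees, and a fixed monthly payment."""
    if payment <= balance * apr / 12:
        raise ValueError("payment never covers the monthly interest")
    months = 0
    while balance > 0:
        balance = balance * (1 + apr / 12) - payment
        months += 1
    return months

# The mid-1990s average balance of $2,700, at an assumed 18 percent
# APR, paid down at an assumed $100 per month:
print(months_to_pay_off(2700, 0.18, 100))
```

Under these assumptions, the average balance takes roughly three years to clear, and a payment that only slightly exceeds the monthly interest stretches repayment out almost indefinitely, which is the dynamic that made customers who carried a balance so profitable.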
Unfortunately, under such an arrangement, debt could easily accumulate to insurmountable levels. In 2009, President Barack Obama signed into law legislation that protected credit-card customers from unfair banking practices. The Credit Card Accountability, Responsibility, and Disclosure (CARD) Act of 2009 restricted credit-card companies from unfairly raising interest rates without notifying the customer beforehand in a timely manner. It also limited the amount of fees that can

be applied to credit card accounts. Finally, the legislation added protection for younger credit card holders against unfair lending practices.

Bruce Covey
Central Michigan University

See Also: Budgeting; Family Consumption; Home Economics; Homemaker.

Further Readings
Bernthal, Matthew J., David Crockett, and Randall L. Rose. "Credit Cards as Lifestyle Facilitators." Journal of Consumer Research, v.32/1 (2005).
Evans, David and Richard Schmalensee. Paying With Plastic: The Digital Revolution in Buying and Borrowing, 2nd ed. Cambridge, MA: MIT Press, 2005.
Hyman, Louis. Borrow: The American Way of Debt. New York: Vintage, 2012.
Taylor, Avram. Working Class Credit and Community Since 1918. Basingstoke, UK: Palgrave Macmillan, 2002.

C-Sections
A C-section (also referred to as a cesarean section or sometimes simply as a "section") is a surgical procedure that involves the removal of the fetus through an incision made in the mother's abdomen. The mother is given an anesthetic prior to the procedure so that she does not feel pain. After making an incision just above the pubic bone and opening the abdominal wall, the surgeon moves the abdominal muscles apart in order to access the uterus. An incision is then made in the uterine wall, the fetus is removed, and then the placenta. The incision in the uterus is sutured, as is the incision in the abdominal wall. The procedure is then complete, and the recovery process begins.

History
The C-section procedure has been used in the United States since at least the mid-19th century. While early C-sections carried significant risks such as hemorrhaging, infection, and even death, such risks have lessened over the years due to



refinement of the procedure, as well as changes in medical practice and medical technology. Attitudes about C-sections have varied greatly over the years. Medical professionals, mothers, insurance providers, and various other interest groups have weighed in on the debate, offering varied perspectives on the relative safety, risks, and outcomes of the procedure, as well as acceptable circumstances under which C-sections should be used. Given the highly contentious nature of the debate at present, coupled with the high demand for C-sections in the United States, it is unlikely that this issue will be settled any time soon.

Controversy and Risks
At present, there is significant controversy surrounding the practice of C-sections in the United States. Whereas the procedure was rarely performed in the early decades of the 20th century and only as a "last resort," by the 21st century, approximately one-third of all births in the United States took place via C-section. C-sections are medically necessary in some cases, particularly in high-risk pregnancies and when the life of the mother or the offspring is at stake. The controversy centers on procedures that are performed for the sake of convenience or in order to "preserve" the woman's perineum. The argument for lowering the percentage of births that occur via cesarean sections primarily stems from concerns about the medical risks to mothers and newborns that are associated with the procedure, the high costs associated with C-sections, and the way in which a highly medicalized model of birth, one that includes medically unnecessary C-sections, disempowers women and fails to honor the physiological process of birth. Costs associated with C-sections are significantly higher than those associated with vaginal delivery, particularly due to the costs of surgery and a longer hospital stay with cesarean births.
C-sections carry both short-term and long-term risks. Births that occur via C-section result in higher mortality and morbidity rates for both women and offspring than vaginal births. For women who undergo a C-section, immediate risks include those typically associated with a major abdominal surgery: increased rates of infections and hemorrhaging, cutting or “nicking” internal organs during surgery,


pulmonary embolism, stroke, complications from anesthesia, pain, adhesions and scarring, psychological trauma, and death. In the long term, women who have a C-section may experience ongoing pain from the incision site and adhesions (thick, painful scar tissue). They are also more likely to experience uterine rupture during subsequent pregnancies, endometriosis (cells from the uterine lining that travel and grow outside of the uterus), medical complications in the year following their C-section, and negative mental-health outcomes in relation to their birth experience. Once a woman has had a birth via cesarean section, it is now standard practice in the United States to not allow her to attempt a vaginal birth after cesarean (VBAC); thus, once a woman has a C-section, she can expect all of her future pregnancies to be delivered via C-section. For newborns, short-term risks associated with cesarean section include preterm delivery, respiratory complications, readmission to hospital, being cut or nicked during the surgery, and increased risk of death in the first month of life. Long-term risks for infants include childhood respiratory problems (such as asthma), increased risk of childhood obesity, and impaired immunity. There is much evidence that cesarean birth has a negative impact on breastfeeding because infants born via C-section are typically separated from their mothers in the immediate postpartum hours, rather than being put in direct contact with their mothers and provided with the opportunity to immediately breastfeed. Benefits There are a variety of reasons offered by proponents of C-sections in the United States. The procedure can be very effective as a response to high-risk situations, including when the life or well-being of the mother and/or fetus is at stake. 
It is especially useful as a means of quickly delivering the fetus, something that may save the life of the fetus in cases of fetal distress, placental abruption, or when there are problems with the umbilical cord. C-section is also an effective means of delivery in high-risk pregnancies such as multiple fetuses, severely preterm labor, or when the mother has eclampsia or a similarly severe medical condition. However, the majority of C-sections in the United States are not performed out of medical necessity. Some cesarean sections are done in the name of convenience, so the expectant mother can


schedule the delivery of the child in accordance with other activities and demands in her life, or so that the woman's health care provider can schedule the delivery during the normal workday, rather than having to be on call and available during evenings and weekends. Others are performed with the intent of avoiding the pain of labor and delivery. Still others are performed out of vanity, a phenomenon referred to as "too posh to push," which includes women's desire to preserve their perineum by avoiding a vaginal delivery. However, it seems that many C-sections occur because women are not adequately supported during labor, because they feel pressured to have a C-section, or because they are not fully informed about the risks associated with delivery via C-section. There is also evidence to suggest that the procedure may be performed as a means of avoiding litigation, particularly because the obstetrician can use the C-section as evidence that they took all possible measures to preserve the health and well-being of mother and offspring.

Jillian M. Duquaine-Watson
University of Texas at Dallas

See Also: Abortion; Adolescent Pregnancy; Artificial Insemination; Assisted Reproduction Technology; Birth Control Pills; Breastfeeding; Contraception: IUDs; Contraception: Morning After Pills; Fertility; Infertility; Maternity Leaves.

Further Readings
Block, Jennifer. Pushed: The Painful Truth About Childbirth and Modern Maternity Care. Cambridge, MA: Da Capo Press, 2007.
Declercq, Eugene, et al. Listening to Mothers III: Pregnancy and Birth, Report of the Third National U.S. Survey of Women's Childbearing Experiences. New York: Childbirth Connection. http://transform.childbirthconnection.org/wp-content/uploads/2013/06/LTM-III_Pregnancy-and-Birth.pdf (Accessed September 2013).
Leavitt, Judith Walzer. Brought to Bed: Child-Bearing in America, 1750–1950. New York: Oxford University Press, 1986.
Murphy, Magnus. Choosing Cesarean: A Natural Birth Plan.
New York: Prometheus Books, 2012.
Sewell, Jane Eliot. "Cesarean Section: A Brief History." Washington, DC: American College of Obstetricians and Gynecologists. http://www.nlm.nih.gov/exhibition/cesarean (Accessed September 2013).

Cult of Domesticity
The new middle class in the 19th century was influenced by the "separate sphere" ideology. This is the idea that men are rightfully the primary breadwinners in a family, and women should be homemakers. The separate sphere ideology brought forth a new ideal of womanhood that focused on domesticity, that is, family and home life. This cult of domesticity, also known as the "cult of true womanhood," was reinforced in popular culture at the time, for example, in magazines such as Godey's Lady's Book, and later in Ladies' Home Journal. The cult of domesticity is still prevalent in the 21st century, and centers on notions of intensive mothering, even as women's paid employment continues to rise. Motherhood is based on a socially constructed set of activities and relationships that involve nurturing and caring for children. Throughout history, womanhood and motherhood have been seen as synonymous, thus defining a woman's gender identity. However, the definitions and practices of motherhood are socially variable, rather than natural or universal. By definition, mothers share a set of activities—nurturing and protecting children. The cult of true womanhood emerged in the 19th century as the dominant value system, shaping ideas about femininity, proper mothering, and being a wife. The cult of domesticity was seen as having both a religious and biological basis that was tied to the separate sphere ideology. Proper wives and mothers should have certain virtues, such as piety, purity, submissiveness, and domesticity. Piety referred to women's religious devotion, while purity reinforced the notion of abstinence until marriage. Men were considered superior, and women were supposed to be submissive or obedient to them. Domesticity specifically referred to women's work in the home. Only women, with their "natural" abilities to do housework and tend to children, could be seen as having high moral values and low self-interest.
Women were celebrated as mothers because they shaped their children’s character. Godey’s Lady’s



An advertisement for Palmolive soap in Ladies' Home Journal in 1922. The magazine was first published in 1883, and eventually became one of the leading women's magazines.

Book regularly featured essays on women's domestic duties, fashion patterns, and crafting ideas, all of which reinforced the cult of domesticity. The separate sphere ideology and the cult of domesticity reinforce a separation of work and home, whereby women are to create a haven from the heartless world for their working husbands. These dominant ideologies coincided with a change from public families to a more private home life, with separation or seclusion from the larger community. The privatization of families reinforced women's obligation to raise children and maintain the household. Despite the importance of this work, it was afforded little to no status as work.

Paid Employment and the Cult of Domesticity
The two-parent family is a normative model that shapes decisions about paid and unpaid work. Sociologist Dorothy Smith refers to this as the standard North American family and suggests that


it rests on the idea of a breadwinner–homemaker dichotomy that is intertwined in the separate sphere ideology and cult of domesticity. Middle- and upper-class women have had class advantage, and were able to adhere to the cultural images and expectations of the gender-based cult of domesticity. These dominant gender ideologies involve both stereotypes and ideals that reflect perceptions of how men and women should feel and act. This corresponds to the cultural constructions of motherhood and fatherhood. The standard North American family model has not been the reality for most families, and the economic conditions on which it is based are fading. For example, both parents in working-class and minority families have historically worked for pay. Working-class and minority women have had to construct unique notions of womanhood and motherhood, based on both their labor market and domestic obligations. More recently, the middle-class ideal has become out of reach, even for the middle class. Paid employment for mothers of young children, especially married mothers, was once part of a deviancy discourse, where mothers were expected to follow normative standards of intensive mothering. This affected women's labor force participation and created economic dependence on their husbands. Mothers who did not conform to this standard of true womanhood were often seen as deviant, especially young, single, minority mothers who have historically worked for pay. These historical patterns of paid employment for mothers call into question the cult of domesticity and a unitary model of mothering. In Black Feminist Thought, sociologist Patricia Hill Collins argues that the labor market experiences of African Americans have significantly diverged from the breadwinner–homemaker model, indicating that mothering takes place within specific historical contexts, framed by interlocking structures of race, class, and gender.
This affects the meaning of mothering to women, and the availability of resources impacts mothers' responses to their children and other familial obligations.

Intensive Mothering and the Cult of Domesticity
The cult of true womanhood gave way to intensive mothering, which remains the norm today. Sociologist Sharon Hays describes intensive mothering as


when women serve as the primary caregiver; childrearing techniques are based on expert knowledge, with mothering as child-centered, time-consuming, and inspired by love; and children are priceless and motherhood is sacred and self-sacrificing. These elements of intensive mothering romanticize mothering and reinforce white middle-class ideals embedded in the cult of domesticity. Maternal practices and ideologies differ by social class; however, intensive mothering is still the overall cultural image that mothers often emulate. Popular images in the late 20th century and the beginning of the 21st century portray the ideal woman as the "super mom" who is capable of both housework and paid labor. Mothers use time-saving devices to do routine housework, but at the same time, do-it-yourself projects are regaining popularity. As of 2014, the majority of married mothers of children under 18 are in some type of paid employment. Even as women have entered the labor force, they still do the majority of housework. However, research suggests that women's time spent doing housework is slowly declining, whereas men's time is slightly increasing, specifically in regard to child-rearing responsibilities. These shifting patterns in household labor are a result of larger economic conditions and changing gender ideologies. Nonetheless, characteristics of the cult of domesticity and intensive mothering still permeate family life today. The cult of domesticity and intensive mothering are still an underlying theme in research on women who are primary breadwinners. This line of research suggests that women are still unable to bring their earnings to the bargaining table in regard to the division of household labor. Women often downplay their economic contributions by emphasizing the "power" they derive from being gatekeepers to their children and the organizers of the household.
This is consistent with the concept of domestic feminism, which supports the acceptance of traditional gender expectations of a female identity based on nurturing and intensive mothering, and may indicate a new cult of domesticity that continues to serve as a cultural image of mothering for 21st-century mothers.

Andrea N. Hunt
University of North Alabama

See Also: Breadwinner-Homemaker Families; Domestic Ideology; Intensive Mothering; Marital Division of Labor; Separate Sphere Ideology; Standard North American Families.

Further Readings
Arendell, Terry. "Conceiving and Investigating Motherhood: The Decade's Scholarship." Journal of Marriage and the Family, v.62 (2000).
Collins, Patricia Hill. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment, rev. ed. New York: Routledge, 2000.
Hays, Sharon. The Cultural Contradictions of Motherhood. New Haven, CT: Yale University Press, 1996.
Mintz, Steven and Susan Kellogg. Domestic Revolutions: A Social History of American Family Life. New York: The Free Press, 1988.

Cults
American history has seen quite a number of religious cults, some of which later developed into mainstream religions and others that did not. In ancient history, cults were groups of devotees of a god or goddess, such as the cult of Dionysus. The modern use of the term dates from the late 19th century and is quite different from its ancient meaning. As Charles W. Ferguson, quoted by Philip Jenkins, noted in 1928, "America has always been the sanctuary of amazing cults." Usually when the term is used today, it is not a compliment. American cults have varied greatly, from those that provide great benefits for their members (like the Father Divine Movement) and those that promise what they ultimately cannot deliver (like the Millerites) to those that directly harm their members (like the People's Temple, the Branch Davidians, or Heaven's Gate). Most American cults have risen from some form of Christianity, although their values and beliefs vary widely. Groups such as the Shakers had a strong biblical core, whereas others did not. The modern cult is usually described as a new religious group that forms around a charismatic leader, who requires the absolute and unquestioning obedience of the cult's members. Often in conflict



with the general public, they tend to separate themselves and their members from the surrounding society, maintaining both a physical and social distance. They exist outside the mainstream norms of the larger society, and they are often seen as both extreme and dangerous. Cults also often promote an apocalyptic message, some type of end time that is rapidly approaching. Cult members are required to repress their individual desires and needs and submit to their leader, who usually demands that they cut all ties with normative society, including those to friends and family outside of the cult. Some cults do not require that people cut themselves off, but they insist on conformity to their particular values and norms. Often all possessions, including all money, become group property, used at the discretion of the leader. Most cults do not physically restrain members from leaving; however, once a devotee is fully initiated into a cult, it becomes extremely difficult to return to normative society without a job, money, or connections to family and friends. If family members have also joined the cult, leaving also means severing connections with them. Some new religions may start as cults, but as time goes on, they become integrated into society and may reach the point of being considered mainstream. Examples of cults that have assimilated into mainstream society include the Seventh-day Adventists and the Church of Jesus Christ of Latter-day Saints (Mormons). Others, for a variety of different reasons, fragment and disappear. During the Great Awakening of the 18th century, an assortment of leaders created communal religious settlements in the United States. The following are some examples of typical cults found throughout that time.

Shakers
The Shakers, an example of a utopian cult, began in England as a breakaway from a Quaker group.
Originally called the Shaking Quakers because of their frenzied celebrations that involved shouting, dancing, and speaking in tongues, the Shakers came to Albany, New York, from England in 1774, led by Mother Ann Lee. The official name of the group (Shakers was a derogatory term) was the United Society of Believers in Christ's Second Appearing, or The Believers for short. The Shakers predicted that a woman would arrive as the Second Coming


of Christ, and Mother Ann was deeply venerated as this messiah. Originally married and the mother of four children who died in infancy, Mother Ann held beliefs on sex, marriage, and parenthood that were at odds with mainstream society. Identifying the pains and dangers of childbirth and the death of her children as a sign of God's displeasure, Mother Ann required members of The Believers to renounce all sexual activity, as well as marriage and parenthood. With a zero birth rate, The Believers sought to increase their membership through conversion, reaching out to orphans, widows, and the disabled who had no place to go at a time when there was no public assistance. Whole families were encouraged to join the Shaker communities. Members were required to renounce private property, live in sex-segregated areas, allow the disciplining of their children (often severe) by the elders, and practice celibacy, with all physical contact between the sexes prohibited. Many believed Mother Ann to be anti-Christian; she was accused of drunkenness and lechery, and fines, jail sentences, expulsions from communities, beatings, and stonings were among the sanctions doled out against the early Shaker community and its founder. Mother Ann died in 1784, after she was assaulted by a mob that dragged her feet first down the stairs. Distrustful of education, intellectualism, and artistic behavior, The Believers nonetheless became known for the craftsmanship of their furniture and for folk songs like "Simple Gifts." The Believers reached their peak in the mid-19th century, but a changing economy, changing attitudes about sexuality, and the prohibition on having children resulted in the group's decline. In 2011, there were three remaining Believers living in the Sabbathday Lake community in Maine, although many buildings in former settlements are still in excellent shape and continue to host visitors.
Oneida Community
John Humphrey Noyes, the founder of this cult, promoted a true economic communism, collective marriage, free love, selective breeding, and childbearing only with permission. Begun in Vermont and later moved to New York State, the Oneida Community, like the Shakers, was formed during the time of the Great Awakening. Spreading a doctrine of perfectionism, Noyes felt that it was possible for people to live without sin, and he


believed that Christ had already returned for his second coming in 70 c.e. According to Noyes, a properly controlled environment, including communal living with shared property and decision making, could provide the opportunity to live without sin. It was not truly democratic, however, because Noyes was the final authority on all issues related to the community. Eventually, the group moved toward pure communism, with work, food, living quarters, schooling, religious instruction, and experiences shared among members. Noyes and his wife lost four of their five children in a six-year period. Deeply grieved, he came to believe that births should be controlled and limited to select individuals, a form of eugenics, and this did limit births. Noyes also taught that spouses should be shared. Outsiders did not share Noyes's beliefs, and he was indicted by Vermont on charges of adultery. When released on bail, he fled to New York State, settling on Oneida Creek, where he established the Oneida Community. The community was successful both socially and economically for many years; the well-known Oneida silverware company began there and provided economic support for the movement. Eventually, pressures began to splinter the community. Noyes left for Canada in 1877, never to return. On January 1, 1881, the Oneida Community ceased to exist, although the silverware business, today known simply as Oneida Ltd., continues.

Father Divine Movement (Peace Mission Movement)
This predominantly African American movement was founded by Father Divine, born George Baker in 1878, the child of ex-slaves. Beginning with a Christian focus, Baker was drawn to the religious philosophy of Samuel Morris, who preached the idea that God dwells in each individual. By 1907, Morris and Baker had teamed up and presented themselves as a shared deity, with Morris as Father Jehovia and Baker as the Messenger. Later, they were joined by a third co-deity, John Hickerson, who took the name St. John the Vine.
The three broke up in 1912, and the Messenger traveled to the south, where he met considerable resistance but gathered converts. In 1915, the Messenger and his converts, including Sister Penny, his chief angel, established a church in Brooklyn with the Messenger as the single God. His persuasive speaking style drew supporters, and in

the developing church he set himself up as the single authority figure who made all rules and decisions. The movement's members lived communally, with an employment service for domestic help, a series of grocery stores called Peace Missions, and eventually a hotel to provide economic support for all members. Everyone, even those who worked outside of the movement's businesses, was required to give the Messenger their paychecks, but he paid for rent, food, and other necessities. Soon, the Messenger changed his name to M. J. Divine, and then to Father Divine. While Father Divine ran the Peace Mission, Sister Penny, his spiritual (but not legal) wife, ran the household. Their spiritual marriage did not include a sexual relationship, since Father Divine believed that sexual relations were unclean; members were to maintain sexual abstinence, and did not marry. Operations moved to Long Island, where membership steadily grew. Eventually, the influx of people and cars, and the loud singing, clapping, and shouting, irritated neighbors. Police arrested Father Divine at a Sunday service in 1931 for obstructing traffic, disturbing the peace, and being a public nuisance. Father Divine was found guilty and sentenced to a year in prison and a fine of $500. Three days later, the presiding judge died of a heart attack. When Father Divine was asked about it, he replied, "I hated to do it." Soon, the appellate court overturned the conviction and fine, citing "prejudicial comments." Interest in the movement skyrocketed, with people coming to Father Divine for healing and solutions to their other problems. The movement's religious services included banquets of free food. During the Depression, when public assistance programs were few, the movement fed tens of thousands of people spiritually and physically every year. In 1942, a disgruntled ex-community member took Father Divine to court to recoup the $5,000 she had given to the movement when she joined.
Refusing to pay, he moved the Peace Mission movement to Philadelphia, a significant loss for the poor of New York. Sister Penny died (a fact hidden for a number of years), and in 1946, he married Edna Rose Ritchings, a 21-year-old white woman, who he introduced as the reincarnation of Sister Penny, and who became known as Mother Divine. After Father Divine’s death in 1965, Mother Divine acted as the movement’s caretaker. Jim Jones of the People’s Temple cult, attempted to convince Mother Divine that he



was the reincarnation of her dead husband, but was unsuccessful in taking over the Peace Mission Movement. Still alive as of 2014, Mother Divine continues to lead the Peace Mission.

People's Temple
The People's Temple is a religious cult founded by Jim Jones in the 1950s, with headquarters in San Francisco. It is best known for the mass suicide of over 900 members, as well as the murders of several journalists and Congressman Leo Ryan, at its Guyana-based Jonestown settlement (the People's Temple Agricultural Project) in 1978. With a powerful evangelical style of preaching, Jones combined elements of Christian egalitarianism, racial integration, faith healings, and help for the poor with Marxist ideas. He prohibited sexual relationships; promoted the adoption of children; and required members to distance themselves from family and friends who did not belong to the group, to submit to severe humiliation and corporal punishment, to work to the point of sleep deprivation, and to accept sexual abuse of both men and women. At the beginning, Jones was a rising star in political circles, but when the media began to investigate rumors of abuse at the temple, Jones quickly encouraged remaining members to move to the Guyana location. Becoming increasingly paranoid, on November 18, 1978, he ordered his followers to drink cyanide-laced Flavor Aid and to feed it to their children. A total of 918 members died, 276 of them children.

Twentieth- and Twenty-First-Century American Cults
The 20th century has also had its fair share of cults. The Manson Family, the People's Temple, Heaven's Gate, and the Branch Davidians have all been involved in episodes of violence that captured the nation's attention. Many American religious cults that began in the 20th century are still active in the 21st century, including groups like the Father Divine Movement, the Hare Krishnas, and the Church of Scientology.
Conclusion
The history of American cults is long and varied, from utopian communities to mass suicides, from subcultures to religious organizations that ultimately enter the American mainstream. The


list of cults continues to grow, and they are not limited to the United States.

Laura Chilberg
Black Hills State University

See Also: Church of Jesus Christ of Latter-day Saints; Communes; Shakers; Utopian Experiments and Communities.

Further Readings
Jenkins, Philip. Mystics and Messiahs: Cults and New Religions in American History. New York: Oxford University Press, 2000.
Knight, George R. Millennial Fever and the End of the World. Boise, ID: Pacific Press, 1993.
Lundskow, George. The Sociology of Religion: A Substantive and Transdisciplinary Approach. Thousand Oaks, CA: Pine Forge Press, 2008.
Schaefer, Richard T. and William Zellner. Extraordinary Groups: An Examination of Unconventional Lifestyles, 9th ed. New York: Worth Publishers, 2011.

Cultural Stereotypes in Media
Cultural stereotypes are intricately threaded through media and entertainment, and many of these stereotypes pertain to the American family. Race, ethnicity, sexuality, and religious practice are factors in these stereotypes, along with occupation, socioeconomic class, and geographic location. Different forms of mass media support and reinforce cultural stereotypes through the characters that are routinely depicted on television, in film, and in the stories that are covered through mass media outlets. Minorities who are not seen, heard, or covered are presumably not part of the majority of consumers of such media. Television reinforces cultural stereotypes by supporting hegemonic notions of the American family through identifying and defining both "American" and "family." What is broadcast over the airwaves is a direct reflection of what the mainstream culture presumes, particularly when dealing with minority races, ethnicities, gender roles, and family


dynamics. Films produced in Hollywood, largely a reflection of mainstream tastes and the present social and cultural climate, also reinforce cultural stereotypes. Additionally, news stories or special-interest pieces developed by major news outlets create and perpetuate cultural stereotypes that often serve to silence minorities who do not fit the perceived stereotype. Cultural stereotyping in media, particularly in the entertainment realm, is exported around the globe, and is problematic because it generalizes groups who do not have white privilege and denies many other races and ethnicities, many of whom are mixed, a true voice in American culture. This might start within a family and grow outward toward a larger community, or be gleaned from the larger culture and perpetuated within a family. Families who do not see themselves reflected in the media become, in a sense, invisible to the hegemonic culture. Thus, this cycle becomes a type of perpetuated discrimination or "othering." Additionally, cultural stereotypes may insulate and generalize people and their life experiences, taking individuality, a highly regarded value in American culture, out of the equation.

History of Stereotypes in Entertainment
In many films, beginning with the first "talkie"—The Jazz Singer (1927)—and including countless Westerns, epic romances, war films, and even slapstick comedies, stereotypes were abundant and were not considered a problem. Depictions of Jews, African Americans, slaves prior to emancipation, Native Americans (who were referred to as "Injuns" in many films), Asian immigrants, the woman of loose morals (exemplified by Mae West), and even the "fancy boy" were all rendered as stereotypes through clothing styles, speech patterns, occupations, where they lived, the religion they practiced, and how they interacted with other characters, both higher and lower in social class and position.
Such stereotypes also found their way into novels, theater, popular songs, radio, and even advertising. As the civil rights movement progressed, it became increasingly problematic to blatantly stereotype races and ethnicities that were not considered white. By the end of the 20th century, more LGBT characters were included in mainstream film, characters were often multiracial, and cultural stereotyping was something that still occurred, but

it was more likely to be noticed and criticized by cultural critics. Independent films typically rely less on stereotypical characters, possibly because these films are intended for an audience with a high degree of awareness about the hegemonic dominant culture.

Families on Television
At the dawn of television in the early 1950s, the American nuclear family was almost always portrayed as white, middle class, and suburban. Television shows like Ozzie and Harriet, The Andy Griffith Show, Father Knows Best, and Leave It to Beaver, and The Dick Van Dyke Show in the 1960s, stereotypically depicted families. In the 1970s, 1980s, and 1990s, shows like Happy Days, Little House on the Prairie, Family Ties, and Everybody Loves Raymond continued to showcase white middle-class families with two living parents, a breadwinning father, and successful children. These families are wholesome, if not overtly religious, and are framed as upstanding citizens who are successful in most aspects of their lives. In shows featuring African American families, such as The Cosby Show, Family Matters, The Fresh Prince of Bel Air, and My Wife and Kids, the model remains in place, with wholesome families situated in the middle or upper classes. Both examples reinforce the cultural stereotypes of the nuclear family, unthreatened by divorce, unemployment, problem children, or crime. In the African American families' cases, their affluence, success, and "whiteness" break the stereotypes of broken families, working-class families, and delinquency that are often covered in the media and exaggerated in TV and film. In the 1970s, however, numerous shows challenged prevailing stereotypes with respect to both white and African American families.
All in the Family, The Jeffersons, Maude, Good Times, One Day at a Time, Sanford and Son, The Bob Newhart Show, and Diff'rent Strokes were all successful sitcoms (most of which were produced by Norman Lear) that showed urban African American families, successful African American families, working-class white families, divorced spouses with children, multiracial families, and successful urban couples with no children. My Three Sons, The Brady Bunch, Full House, and Step By Step dealt with families that had



suffered the loss of a parent or that were blended through remarriage, which helped break down the stereotype of the intact American family. Shows that portray divorced parents, such as Grace Under Fire and Reba, largely adhere to the stereotype of the mother having custody of the children. These shows started to become popular on network television in the early to mid-1990s, and often centered on families who lived in more rural areas of the United States. Some television shows featuring African American families focused on the working-class aspect of their lives. Bernie Mac and Everybody Hates Chris featured good fathers who worked tirelessly to make ends meet. This reinforces the stereotype of the African American male who has not been able to achieve financial success or stability, but deconstructs the stereotype of the deadbeat dad. Negative stereotypes were often portrayed on television dramas, specifically crime shows set in urban areas. Stereotypes in these shows range from African American juvenile delinquents, to low-income people of color in precarious situations such as prostitution and drug deals in treacherous neighborhoods, to illegal immigrants living in ethnically centered neighborhoods. Common stereotypes involve African Americans, Italian Americans, Irish Americans, Greek Americans, Hispanics, Asian Americans, Indian Americans, Arab Americans, Native Americans, and members of the LGBT community. Of note is the underrepresentation of minorities, including LGBT individuals, as fully developed characters: individuals who are not portrayed as a mere aspect of their culture, but as complex, well rounded, and unique. The long-running Saturday Night Live remains an outlet that pokes fun at every stereotype, although it is still seen as offensive when the show pushes any stereotype too far.
Cultural Stereotypes and Film Families
Cultural stereotypes of the family are noticeable in many Hollywood productions. Many of the same situations are played out as much on the big screen as on television, but notably, film, because of its rating system, can get away with an increased amount of blatant stereotyping. Often, in movies that are conceived as parodies or spoofs of more serious works, stereotyping of cultural practices and racial, ethnic, religious, and social elements can

be even more overexposed and derogatory. The extremes of the family, whether nuclear, blended, broken, or dysfunctional, are often the core of the plot, making it possible to more acutely reinforce stereotypes rather than deconstruct them because of the time limitations of a movie. Films such as Beloved, Coming to America, Fiddler on the Roof, Fargo, and a wide array of Disney animated films adhere to many cultural stereotypes, some of which are rich in heritage and tradition, and some of which subtly preach the desirability of cultural hegemony through plot, dialogue, and costume design. Tyler Perry is a filmmaker and producer who is both lauded and criticized for how African American families are portrayed in his films. The absence of Native American characters in film, and of Native Americans as actors, directors, and screenwriters, is an example of who is not given power to make films in Hollywood. Asian American directors and actors are also highly marginalized or grossly stereotyped (for example, Jackie Chan). The cultural stereotype of the Jewish mogul is perpetuated in Hollywood, and extends to the entire entertainment business, from Broadway to the big screen.

Cultural Stereotypes in the Media

Media coverage of violence or civil unrest is rarely intended to be humorous or entertaining. However, with the rise of the Internet, specifically social media Web sites, even news stories have become part of the larger popular culture entertainment sphere. Viral videos have highlighted some of the most notorious stereotypes perpetuated by real media stories. Antoine Dodson became a viral sensation when his direct address to the Alabama public about the sexual assault on his sister while she was asleep in her bed was remixed and autotuned. The YouTube viral video was an example of the exploitation of crime in the ghetto. Additionally, Dodson's viral video strongly reinforced rape culture because viewers did not take the actual news story of attempted assault seriously.
Rather than a nuclear family that is shaken after such an attack and respectfully given privacy, these adult siblings received attention through parody. While parody is meant in good fun, its presence serves as an extension of the 24-hour news cycle, where news stories are analyzed to an extreme degree, and often facts are misrepresented because of the speed at which information is passed from

the location of the story through the media and on to social media. Additionally, the media's persistence in reporting negative news that perpetuates stereotypes is problematic because such coverage tells the stories but often fizzles out as soon as more newsworthy events occur. Violent events in 2012 and 2013 included the Sandy Hook school shootings and the Boston Marathon bombings. To what extent these news stories were reported, how in depth they were covered, and who was initially profiled as responsible speak to how cultural stereotypes influence media reporting in the United States, and how that reporting, in turn, informs, influences, and determines community and individual responses. In particular, the controversy over same-sex marriage in the media has been extremely important in challenging hegemonic notions of the family and marriage. Stereotypes associated with gay men, lesbians, transgender people, and bisexual individuals have been highlighted in the media as states consider legalizing same-sex marriage. Protests have been plentiful, but are often not covered in mainstream media; these stands against marriage discrimination and inequality have been closely followed through social media outlets, with a majority of Americans in favor of marriage equality and of allowing same-sex partners to raise children. This upheaval has challenged the hegemonic definition of concepts such as family, caretaker, mother, father, community, love, commitment, and marriage. In response, filmmakers, television producers, musical artists, and authors, among many other creative individuals, are publicly taking these social and political cues to heart and creating platforms where those who are not heterosexually normative can be seen, heard, and reflected back into the popular culture and media.
Stephanie Salerno Lara Lengel Bowling Green State University See Also: Children’s Television Act; Disney/ Disneyland/Amusement Parks; Gender Roles in Mass Media; Magazines, Children’s; Magazines, Women’s; Newspapers; Radio; Reality Television; Same-Sex Marriage; School Shootings/Mass Shootings; Television; Theater; Twenty-Four-Hour News Reporting and Effect on Families/Children; Video Games.

Further Readings Ballam, Stacy M. and Paul F. Granello. “Confronting Sex in the Media: Implications and Counseling Recommendations.” Family Journal, v.19/4 (2011). Carter, Derrais. “Blackness, Animation, and the Politics of Black Fatherhood in The Cleveland Show.” Journal of African American Studies, v.14/4 (2010). Descartes, Lara and Conrad P. Kottak. Media and Middle Class Moms: Images and Realities of Work and Family. New York: Routledge, 2009. Greer, Colleen, and Debra Peterson. “Balancing Act? Cultural Representations of Work–Family Balance in the News Media.” Sociological Spectrum, v.33/2 (2013). Harwood, Sarah. Family Fictions: Representations of the Family in 1980s Hollywood Cinema. New York: St Martin’s Press, 1997. Janning, Michelle. “Public Spectacles of Private Spheres.” Journal of Family Issues, v.29/4 (2008). Landau, Jamie. “Straightening Out (the Politics of ) Same-Sex Parenting: Representing Gay Families in U.S. Print News Stories and Photographs.” Critical Studies In Media Communication, v.26/1 (2009). Levy, Emanuel. “The American Dream of Family in Film: From Decline to a Comeback.” Journal of Comparative Family Studies, v.22/2 (1991).

Curfews

Curfews are rules or laws that restrict the movements of certain people, usually within certain time spans. The most common form of curfew encountered today is juvenile curfew. Juvenile curfews only apply to people under a certain age, usually 17 or 18 years old, and state that under most conditions, young people must be off the streets after a specified hour unless working, in transit to and from work, or under various court-approved exceptions. Curfew laws may also be invoked by executive power in certain emergency situations, and can apply universally, that is, to anyone not engaged in emergency work. Such laws might specify that under riot conditions, alcohol, firearms, ammunition, and gasoline not be sold in order to discourage further civil disorder and to prevent the manufacture of incendiary devices. More typically, emergency curfews



are used in situations following natural disasters to discourage looting and injury. On such occasions, military units such as the National Guard might be called out by executive order to enforce the conditions of the curfew.

History

Curfews have been used throughout history to suppress disorder and the subordinate classes, and to control their movements. During the Middle Ages, when the church and nobility held great power over the masses, universal curfews enjoined all to remain in their homes and cover their fires between dusk and dawn (the term curfew comes from the French couvre-feu, which means "cover fires"). Tolling church bells announced the commencement of curfew, and the town or city gates would be locked. A prime reason for covering fires was to prevent a conflagration from destroying the cramped wooden buildings that comprised the town. It also served the interests of the ruling class to avoid having groups of peasants and townspeople congregating in taverns, and later, coffee shops, in drunken, unruly, and riotous frames of mind. The authorities sought to control the movements of Jews, tinkers, gypsies, and travelers in general. A special concern as the Middle Ages gave way to more modern conditions was that apprentices, servants, and later, factory workers be prevented from congregating and organizing opposition to the status quo and engaging in criminal activities. Urban criminal gangs were well established in the late 1700s in major European and American coastal cities. In this same period, extremely restrictive curfews were attached to bondsmen, apprentices, and slaves in British America, and when the United States became independent, slaves continued to be the subject of curfews, backed with extreme prejudice. Slaves, for example, who were found off the plantation or outside their masters' aegis by white patrollers could be whipped or worse.
All blacks and indentured servants were therefore required to possess a written pass from their masters or manumission papers. Most patrollers at this time were poor whites who were drafted to perform this task by local politicians and slaveholders. As the apprenticeship system dissolved under the pressures of factory labor in the Industrial Revolution, young people were forced into shift work in factories. In this situation, it became impractical and

against the interests of the factory owners to enforce curfew laws. The inception of child labor laws, while freeing young people from sweatshops and wage-slave working conditions, rendered curfews economically redundant, literally putting children on the street without adult supervision or institutions for recreation. Lacking the control and nurturance of a patron or master craftsman, as under the apprenticeship system, young men congregated in groups that reinforced the existing criminally oriented gangs. By 1900, curfew laws aimed at controlling criminal gangs were being applied or reapplied in many urban American areas. This was also seen as a way to control and curb the precocious sexuality of young men and to prevent young girls from becoming "ruined" and lured into prostitution. The suppression of prostitution and gangs went hand in hand in the early 1900s. The unspoken rationale of these laws was to help parents, often immigrants, gain control over their unruly children by keeping them tethered in tenements. As concern over gangs mounted in the 1950s and 1960s, curfews were more frequently invoked as a way to control wayward youth.

Recent Examples of Curfews

Today, law-enforcement agencies see juvenile curfews in a similar light as in the past; that is, they believe that curfews work, and at least three-quarters of American cities have such laws. That being said, few criminologists support juvenile curfews because they have been demonstrably ineffective at curbing juvenile crime. One objection is that they simply displace juvenile crime to hours when juveniles are on the street and adults are at work. That means that homes are at risk for burglary, and can be used as hangouts for drug use and other criminal activity during the day. Another issue is that juvenile curfews allow police to harass any young person at will on "fishing expeditions," which are often selectively applied to minority males.
An example of this tendency is the use of “stop and frisk” policies in New York City—a situation that has inflamed the minority community. Finally, they operate under the assumption that parents want to, or are able to, keep young people at home during the hours in question. Some juveniles are beyond parental control for a variety of reasons. Punishing parents in juvenile court for the delinquency of their progeny, as some curfew laws demand, can be seen as unfair and unrealistic.

Still, parents, some juveniles, educators, and law enforcement cling to the belief that curfews work, and they remain very popular with conservative politicians. It is possible that enforced curfews deter some crimes, and that a few at-risk children can be identified and helped by social agencies. It is also true that enforcing curfews requires few if any additional resources be allocated to police by municipal authorities. However, the overwhelming body of criminological evidence indicates that curfews do not deter juvenile offenses. This is because little juvenile crime occurs during the hours when curfews are in effect. Another objection is that juvenile curfew laws discriminate against the young, particularly minority youth. Civil libertarians have raised this issue for decades, but have found little sympathy in the courts for their position. Francis Frederick Hawley Western Carolina University See Also: Bullying; Delinquency; Discipline; Parental Supervision; Parenting; Slave Families. Further Readings Adams, K. "The Effectiveness of Juvenile Curfews at Crime Prevention." Annals of the American Academy of Political and Social Science, v.587 (2003). Howell, J. Preventing and Reducing Juvenile Delinquency: A Comprehensive Framework, 2nd ed. Thousand Oaks, CA: Sage, 2009. Mays, L. and L. Winfree. Juvenile Justice. New York: McGraw-Hill, 2000. U.S. Department of Justice, Office of Justice Programs, Office of Juvenile Justice and Delinquency Prevention. "Curfew: An Answer to Juvenile Delinquency and Victimization?" Juvenile Justice Bulletin, 1996. https://www.ncjrs.gov/pdffiles/curfew.pdf (Accessed August 2013).

Custody and Guardianship

Custody and guardianship are legal arrangements that identify legal parameters for the residency, care, and protection of children, adults with disabilities,

and aging adults. Custody and guardianship can be understood in both historical and social contexts. Legal statutes clarify a process for the courts to determine appropriate custodianship and the factors under consideration. Policies for guardianship of children in the foster care system, adults with disabilities, and aging adults have begun to adapt and offer flexibility that supports positive growth and physical and mental well-being. Custody is a legal arrangement that clarifies the relationship between a parent and a child, and establishes the child's residency and the individuals responsible for making decisions on behalf of the child. Parents who are married and listed on a child's birth certificate do not have to secure legal custody of their children. A typical exception is when there is disagreement between the parents on who has the right to make decisions in regard to the residence, health care, education, and religious upbringing of the child. Custody is often established to resolve disputes following the separation or divorce of the parents. It is a proactive process, meaning that in most cases custody is negotiated prior to separation and is required in many states prior to the finalization of divorce. Other reasons for custody agreements vary on a case-by-case basis but could include when the parents are not married, when one parent prohibits the other from seeing the child, or when a parent is potentially leaving the state. While some parents may have informal arrangements, other parents prefer to have a state-sanctioned custody agreement. The agreement ensures that parental rights are afforded. Legal custody agreements also regulate which parent can claim the child on his or her tax return, and in which years.

Types of Custody

Several types of custody are available in the United States that are established through the family court system.
The most common form of custody is joint physical (residential) and legal custody (authority in decision making) of the child, awarded by the court. In joint custody cases, both parents provide care for and spend time with the child, according to an agreed-upon schedule arranged by the court. In these cases, parents share both physical and legal custody of the child. Sole custody is when one parent is awarded physical and legal custody of the child by the family court. These types of custody



can also be used in combination. For example, one parent can be awarded sole physical custody while the parents share legal custody. In this instance, the child will live with the custodial parent (the one assigned sole physical custody), who is then responsible for the care of the child. The noncustodial parent would then have a shared responsibility with the custodial parent for all decisions regarding the child, including health, well-being, and education. In cases of sole physical custody, noncustodial parents historically were awarded visitation. However, visitation is now subsumed under the custody arrangement because it is considered the right of the child to have ongoing, consistent, and meaningful contact with the noncustodial parent.

Trends in the Determination of Custody

Determination of custody can be historically traced back to Roman times. According to Roman law, children were the property of their fathers, and mothers had no legal rights. English law provided fathers with absolute power over children but also identified the legal obligation of parents to protect and support the child. Custody following divorce was awarded to fathers. The British Custody of Infants Act of 1839 was the first legislation that identified the importance of the "tender years" in the growth and development of the child. This legislation encouraged maternal custody until age 7 and maternal visitation after age 7. In America in the 17th and 18th centuries, a patriarchal legal system supported a paternal preference in postdivorce custody arrangements. This patriarchal preference dominated until the late 1800s, when a growing concern for children's welfare and the influence of the Industrial Revolution on the function of families began to weaken the paternal preference in custody decisions. During the Industrial Revolution, men began seeking employment outside of the farm or local community. The division of labor within the household began to shift to a division between wage earners and caregivers.
The responsibility of nurturing and caretaking became the main role of women. This shift in responsibilities within the family was coupled with an increase in the legal status of women in the 19th and 20th centuries. By 1920, a maternal preference in child custody had taken a strong hold in the legal system, and resulted in courts most often favoring women as primary custodians of the children. This preference was also supported in the research on

family at the time. Freud's work also helped establish the importance of the mother–child relationship in the first five years of life, and attachment theorists identified the mother as the primary attachment figure, who was responsible for the physical and emotional caregiving within the family. The maternal preference in child custody began to be challenged in the 1960s, when a significant rise in divorce rates sparked national attention. Claims of sex discrimination by men, a focus on equal protection under the Constitution, and the feminist movement called for more gender-neutral laws. In addition, later research on infant attachment demonstrated that infants form meaningful bonds with both parents. The Uniform Marriage and Divorce Act of 1970 established the need to consider the best interests and needs of the child in custody decisions. Research on fatherhood and parenting styles later facilitated a shift in understanding and valuing the role of fathers in parenting. This shift in thinking led to legal cases claiming gender discrimination against fathers in the custody process, which resulted in greater use of joint physical and legal custody. Some states enacted statutes that specifically forbade gender-based preferences in custody determinations. Custody schedules followed a more gender-neutral approach regarding shared time, even though schedules can vary from a 90–10 split of time to a 50–50 split. Today, custody disputes are resolved in the court using the standard of what is in the best interest of the child. The determination is made via court-appointed counselors and health professionals, and has been interpreted differently over time based on the historical and social contexts in the United States.
Research has expanded knowledge on the best practices of coparenting, and indicates that children’s development is facilitated by a reduction of observable negative conflict between the parents, the psychological well-being of the custodial parent, and the maintenance of the relationship with the noncustodial parent. Even though the best interest of the child now focuses on maintaining meaningful contact with both parents, mothers are typically awarded more custody time than fathers. Gendered trends in the division of labor in the home indicate that females still provide a majority of the caregiving. This greater level of caregiving provided by females is correlated to

increased levels of custody for mothers. Trends also show that noncustodial fathers tend to reduce the amount of time that they spend with children as time progresses. The formation of stepfamilies also influences the level of contact and custody of the noncustodial parent. Research indicates that the remarriage of the noncustodial parent is correlated with reduced contact with children.

Determining the Best Interests of the Child

While there is no standard definition of the "best interests of the child," the courts consider multiple factors when deliberating who is best able to care for the child. While all states have statutes that mandate the consideration of the best interests of the child, there is great variation in their depth and specificity. Preference is sometimes given to maintaining the child's residence. Custody agreements and timing will consider the geographic locations of both parents. The quality of the relationships between the child and each parent, as well as the history and division of caregiving within the family, are also primary considerations. The preferences of children who are of appropriate age and maturity will also be considered. Other factors include the mental and physical well-being of the child; the mental and physical well-being of each parent; the maintenance of sibling and other close family bonds; and the capacity of the parents to provide food, clothing, medical care, and a safe home free of violence. While no state includes all of these factors in its actual statutes, in many cases the statutes direct the courts to consider "all relevant factors" that may or may not be specifically identified in the statute. If custody is contested by one parent, a government agency, or an interested third party, a parent must be found unfit in order to lose all custody of the child. Courts will often consider a change of circumstances as a rationale to reopen the assessment of the best interest of a child and the custody agreement.
The ability to refile for custody is often a factor for incarcerated parents who have been successfully rehabilitated, or for parents who were incapacitated and no longer are, and who have no history of abusing the child. This process also allows courts to reconsider, in light of new factors and current standards, custody decisions that may no longer be relevant or that were based on outdated standards. This same flexibility can have negative consequences if it perpetuates

a highly contested battle for custody that can continue for years. A loss of custody is distinguished from a loss of parental rights in that a loss of parental rights because of abuse, neglect, or abandonment cannot be overturned in the same way that a loss of custody can be reconsidered by the courts. Furthermore, a loss of custody does not always relinquish the parent's right to see the child. Court-supervised visitation may be an option for parents who have no legal custody of their child.

Third-Party Custodians

When both parents are either incapacitated or unfit to care for the child, a third-party custodian may be identified. Social service agencies prefer to find a relative to serve in this capacity. Third-party custodians are designated to provide physical custody and care, but are not often given legal custody for decisions regarding the child. In the absence of blood kin or fictive kin, agencies will identify a foster parent to serve as custodian. Grandparents are often informally identified as custodians by their adult children, in lieu of foster placements. Informal custody arrangements do not have legal standing in the courts because the parents have not relinquished formal legal custody. In these cases, grandparents and other relatives often do not apply for custody through formal processes out of concern for the parents' preferences, and because they do not want to force a loss of custody. Third-party custodial arrangements are often temporary, regardless of whether they are adjudicated in the courts or agreed upon in an informal process. Relatives and grandparents do not have legally recognized rights to see the child, and are only provided those rights when custody is formally awarded through the courts.

Guardianship

Guardianship is another court-mandated relationship in which an adult is appointed to provide care for a minor child, or ward, whose circumstances warrant supervision by the courts.
Legal guardianship is a complex arrangement that allows for kinship care (care by a relative) that can offer stability beyond what a custody agreement can offer, without relinquishing parental rights. Guardians must act in accordance with the best interest of the ward, and report to the court at least annually. The legal



guardian is appointed to make decisions regarding health, support, and education. Guardianship can be awarded with or without physical custody. Just as married parents do not need to obtain legal custody of their biological children, parents are considered the natural guardians of the child and do not need to obtain legal guardianship. On the other hand, guardians can be assigned to oversee the best interests of the child even if the parents maintain physical custody. Guardianship originated as a legal relationship created when a child inherited property. In 1935, legal guardianship was offered to all children who lacked the protection of their biological parents, regardless of whether property ownership was involved. By the mid-1940s, public guardianship was established to grant guardianship to a public agency when a child was dependent without parents, neglected, or abused. The current foster care system oversees the welfare of children requiring public guardianship. Minor children in the foster care system will have a foster parent who has physical custody, while the state maintains legal guardianship over the child. The child is then considered a ward of the state. In 1980, the Adoption Assistance and Child Welfare Act recognized legal guardianship as a permanent option for children when adoption was not appropriate or available. A child would then exit the foster care system to a guardianship arrangement, in which the guardian has both physical custody and legal guardianship over the child. While no federal money was provided to support the guardianship relationship, some states attempted to provide financial support to families. The amounts were far less than what was paid to foster families, and as a result, very few children exited foster care to go to guardianship families. During the 1980s and 1990s, the number of children cared for by family members significantly rose.
The Adoption and Safe Families Act of 1997 recognized kin guardianship as a permanent option, but did not provide funding and support services. In 2008, the Fostering Connections to Success and Increasing Adoptions Act provided federal reimbursement to states that provide ongoing assistance and financial support on behalf of children who exit the foster care system to guardianship by a relative. The children are also provided Medicaid coverage for health care. Payments are not permitted to exceed what would be paid to a foster parent had the child remained in the foster care system.

Subsidized guardianship provides financial support to legal guardians, which may reduce the barriers that keep some family members from accepting guardianship. The goal in any arrangement is to provide a safe, permanent option for the child. Permanent placement is linked to higher academic achievement, better social and emotional development, and greater physical well-being of the children. Guardianship ends once the minor child turns 18.

Guardianship for Adults With Disabilities and Aging Adults

Individuals with disabilities have historically been provided with guardians to ensure both their care and protection. Guardianship in the case of an adult with a disability has often included relinquishment of parental rights in order for the adult child to meet financial eligibility requirements for Medicaid and other support services. Best practices in services and guardianship for adults with disabilities incorporate a philosophy of person-centered planning. Consideration and deference in decision making are given to the individual's goals and preferences. Aging adults experiencing physical and/or mental limitations that have impaired their capacity to care for themselves and make appropriate decisions may request a guardian, or be appointed a guardian by the courts. Guardians for aging adults are expected to follow the wishes outlined in any documents written prior to, or at the initial onset of, incapacitation. Current trends in policy have allowed more flexibility in the parameters and options of guardianship. Legal guardianship can encompass all levels of decision making and care, or can be limited to certain areas such as financial, medical, or personal care. Best practices incorporate a philosophy that includes consideration for the most inclusive settings that allow for the highest level of functioning and independence on the part of the aging adult or adult with a disability. Karen L. Doneker Mancini Towson University See Also: "Best Interests of the Child" Doctrine; Child Custody; Disability (Parents); Divorce and Separation; Family Mediation/Divorce Mediation; Foster Families; Shared Custody.

Further Readings Mason, M. A., M. A. Fine, and S. Carnochan. "Family Law for Changing Families in the New Millennium." In Handbook of Contemporary Families: Considering the Past, Contemplating the Future, M. Coleman and L. Ganong, eds. Thousand Oaks, CA: Sage, 2004. Stewart, Susan D. "Marriage and Child Well-Being: Research and Policy Perspectives." Journal of Marriage and Family, v.72/5 (2010).

U.S. Department of Health and Human Services, Children's Bureau, Child Welfare Information Gateway. Determining the Best Interests of the Child. https://www.childwelfare.gov/systemwide/laws_policies/statutes/best_interest.cfm (Accessed December 2013).

D

Date Nights

In U.S. family history, the concept of "date nights" is a relatively recent phenomenon that reflects couples' attempts to maintain a romantic relationship in an increasingly busy and responsibility-laden culture. In the late 1980s and early 1990s, nontraditional dating and relationship variations emerged, such as online dating services, speed dating, and living apart together. In the 21st century, young people may engage less in traditional forms of dating, and more in newer arrangements, such as group dating, "hooking up," and "friends with benefits." With these numerous variations on traditional dating growing in popularity, dating as a premarital phenomenon has declined in salience. Simultaneously, date nights have become popular among married partners as a means to strengthen healthy relationships and improve spousal appreciation and intimate communication. Thus, the function of dating has transitioned from a type of premarital courtship to a marital leisure activity.

Premarital Dating

From the colonial period to the early industrial period, the United States was a largely agrarian society. Families in rural communities knew each other well, and romantic relationships often developed between young people who interacted at various community events, such as church picnics or

local festivals. In this context, courtship occurred under the watchful eye of both families and others in the community. As the nation shifted to a more urban industrialized economy in the early 20th century, some dramatic societal shifts altered courtship patterns. The rise of the automobile and mass transportation allowed young adults the freedom to pursue leisure activities away from the watchful eyes of parents or to move to urban areas and live independently. Young men and women worked outside the home and earned incomes. Friday and Saturday became known as date nights, when young adults engaged in leisure activities in a romantic context.

For some individuals, date nights were reflective of a casual relationship status. A young man or woman might date several different individuals simultaneously, having dinner one week with one partner and the next week with someone else. Thus, dating was an opportunity to socialize and get to know a number of people on a friendly basis that may or may not have included romantic overtones. For others, date nights were part of an exclusive relationship status. A couple agreed to date only each other; this arrangement was typically a means to assess potential compatibility for marriage.

During the Great Depression, common dating activities were inexpensive or free. This trend continued during World War II, when millions of men (and thousands of women) left the United States for military duty. Relationships sometimes continued during this long separation through letters, and sometimes they fizzled out. After the war, as suburbia boomed, date nights became popular for a new generation of youth.

Since the 1960s, the range of alternatives to traditional dating has increased as a result of the civil rights and women's rights movements. The advocacy for equal rights raised questions about the status quo and supported greater freedom. This sense of freedom expanded the range of life choices available to individuals. Thus, some people rejected date nights as an outdated form of socialization and pathway to marriage. Instead, these youth increasingly engaged in a less formal arrangement, known as "hanging out." Many who were distrustful of marriage in an age of rising divorce rates chose to live together, either out of convenience or as an alternative to marrying.

In the 1990s, technology made it possible for individuals to meet new partners online. Thus, individuals who had previously used date nights as opportunities to initiate new relationships now had alternative venues. In addition, young adults accepted more fluid definitions of friendships and romances. Young people with an established friendship may temporarily become romantic partners and then resume their friendship afterward; the boundaries in this "friends with benefits" option may remain flexible, depending on each other's circumstances. These options, however, have never entirely replaced date nights as a social practice leading to marriage.

[Photo caption: Date nights are popular for couples, often including activities such as dinner, movies, walks in the park, or simply cooking a meal together at home.]

Marital Dating
Since the 1990s, date nights have been a popular activity for married or long-established couples. The premise is that spouses need to allocate time to remove themselves from daily stressors and focus on creating new memories together as a way of strengthening their relationship. Couples may return to the date-night patterns of their courtship, which may remind them of what they initially liked about each other. Alternatively, spouses can try novel or challenging activities. This approach may help alleviate the boredom that sometimes plagues long-term relationships.

Date nights have been identified as a valuable strategy for maintaining healthy relationships, as well as for repairing distressed relationships. A night out can break cycles of negativity and foster positive communication. However, some researchers have argued that the traditional structure of a date is unworkable for some couples. Couples might benefit from activities that can be more easily conducted at home, such as watching a movie or cooking a nice meal together. In addition, the time demands of date nights might not fit some family structures. For example, couples might face significant barriers if they are in long-distance relationships, have discrepant work schedules, or are responsible for caring for young children or other family members who require persistent care. A variation of date nights, known as "mate moments," has been recommended in these situations. Mate moments may be shorter than the typical date, but they can be more frequent; they




can also be conducted via technology for those in long-distance relationships. These moments fulfill the same functions as date night (e.g., intimacy and appreciation), but they are more achievable for couples with barriers to shared face-to-face time.

Jacki Fitzpatrick
Texas Tech University

See Also: Courtship; Dating; Hooking Up.

Further Readings
Bailey, B. From Front Porch to Back Seat: Courtship in Twentieth-Century America. Baltimore, MD: Johns Hopkins University Press, 1989.
Bogle, K. "The Shift From Dating to Hooking Up in College: What Scholars Have Missed." Sociology Compass, v.1 (2007).
Fitzpatrick, T. "Making Marriage Work." Michigan Bar Journal, v.87 (2008).

Dating

The term dating arose in the beginning of the 20th century, when young men and women would set up a date, time, and place where they could meet to socialize. The understanding was that dating was an essential part of courtship and mate selection. Youth and adults today expect to find fulfilling social, emotional, mental, physical, and spiritual connections with those they choose to date. With high rates of marital separation and divorce in the 21st century, those who date today appear to be more cautious about making the commitment to tie the knot than those in previous generations were. What may be a surprise to some is that most young people still espouse marriage as the eventual ideal.

The social history of dating in the United States has shown some unique trends throughout the centuries that have been closely tied to changing values and behaviors, and have been influenced by historical events, economics, ethnicity and race, culture and religion, the media, and life experience. What was once an evolutionary necessity for safety, security, and survival has now become for many Americans a quest for higher-order need fulfillment and


self-actualization. Research indicates that the playing field has changed from times past, with reduced stigmas and increased opportunities for Americans to pursue careers and goals with or without a romantic partner, to choose a romantic partner of the same or the opposite gender, or to enjoy the perceived benefits of marriage with or without a marriage license.

The 17th Through 19th Centuries
As far back as colonial times, men appear to have been more interested in courtship and marriage than women. Men typically controlled more resources and wielded power in relationships, and therefore were expected to initiate courtship. Dating was not a part of mate selection processes during this time. While beauty was a bonus, men were encouraged to find women who were industrious, hardworking, and sensible. Women were encouraged to find men who treated them well, were good providers, and did not drink, or at least did not drink to excess. Safety, security, and survival were at the forefront of courtship and marriage considerations, and for many, love was expected to come afterward.

Religious and social gatherings provided some of the most common circumstances in which men and women could meet. Social and courtship encounters were heavily monitored by parents and religious leaders, although amorous couples often found many opportunities to escape these watchful eyes so that they could be alone. Courting almost always occurred at the woman's home. Because homes were generally small and not very private, a bundle board was sometimes used to separate the couple so that they could sit on a bed or a couch and talk while maintaining a sense of propriety. Marriage was often seen as a family affair that united families and resources during this age of agriculture, when most worked on a farm. When families did not approve of the match, some lovesick couples chose to steal away to a justice of the peace and elope.
The 20th and 21st Centuries
The turn of the century and the age of industrialization brought increased opportunities for men and women to get together and socialize. With the move from farms to factories, couples were now able to more frequently spend time together, informally


and casually, and as a result, the notion of dating was born. Traditional values and norms were perpetuated from the pulpit and through the media from the 1900s to the 1960s through shows like Father Knows Best, My Three Sons, and The Dick Van Dyke Show. Men were expected to pay for dates, couples were expected to "go steady" if they were exclusively dating, and mixers (informal social dances) became one of the top ways that young men and women could meet each other. "Cruising" became popular as the automobile became synonymous with freedom and opportunity. Sometimes known as the "bedroom on wheels," the automobile not only provided couples more opportunities to be alone, but also more opportunities to experiment with intimacy and sexuality.

The 1960s and 1970s brought an attitude of rebellion against traditional mores. During these decades, Americans reexamined their values and began to look at love and relationships in a different light. Civil rights, "no-fault" divorce, the birth control pill, and rapidly changing cultural mores changed how people approached dating and marriage. Intercultural, interreligious, interracial, and same-sex dating gradually became more tolerated and accepted. Divorce rates skyrocketed, leading to an overall wariness and pessimism on the part of some Americans about whether or not dating could lead to a stable and satisfying marriage. As a result, cohabitation became an increasingly popular way to "try out" a relationship to see if it could stand the tests of time and marriage. However, cohabitation has generally not proven to be a good predictor of enduring, stable, and satisfying relationships for either couples or children who may be a part of the household. In fact, for heterosexual couples, cohabitation has made it much easier for men to walk away from the roles of provider and father with relatively few repercussions.
Some couples engage in "stayovers"; that is, they maintain a low-commitment dating/cohabiting relationship while each partner keeps a separate residence. When couples do this long term, it is known as living apart together.

The technology revolution at the end of the 20th century had a profound impact on dating relationships. Technologies such as the Internet, smartphones, texting, and e-mail, along with social media and dating sites such as Facebook and Match.com, have changed the way that people form, think about,

and interact in their romantic relationships. Terms such as hanging out, hooking up, and friends with benefits have become common to describe various levels of intimacy and commitment in contemporary relationships.

Healthy Dating and Marriage
While the rate of people entering into marriage has decreased during the last half century, there is still strong support among contemporary Americans for achieving healthy dating relationships that can lead to finding a life partner. As a result, understanding what healthy dating relationships look like is important. Healthy dating today can be characterized as a stable and satisfying relationship built upon varying levels of friendship, safety, security, love, passion, commitment, respect, and trust. This means that both partners can negotiate differences and resolve conflicts without resorting to violence. By this definition, dating has evolved in the United States to reflect an expectation of a relationship that can provide a social, emotional, mental, physical, and spiritual connection to another person. Connection levels in each of these five areas can be characterized as a good reflection of the overall level of intimacy in a romantic relationship.

Researchers have produced a general integrative framework of some of the primary factors that can have a predictive influence on the development of healthy adult romantic relationships, such as those that lead to marriage. They have divided their framework into three general areas: (1) antecedent conditions; (2) adolescent attitudes, beliefs, and relationship behaviors; and (3) adult circumstances and relationships.

• Antecedent conditions: These include background factors such as ethnicity/race, culture/religion, socioeconomic status, neighborhood environments, and education/professional opportunities that may influence adolescents and their ability to develop healthy dating and adult romantic relationships.
Immediate influences, such as peer groups, school environments, family structure, relationships with family members, and exposure to stress, along with individual factors such as intelligence, personality, attachment style, self-esteem,




delinquency, and substance abuse also influence a person's ability to develop healthy romantic relationships.

• Attitudes, beliefs, and relationship behaviors: Attitudes adopted about dating and marriage, beliefs about sex and childbearing, and relationships that individuals have been exposed to, specifically through media and their families of origin, shape people's ability to develop healthy romantic relationships. Additionally, the timing of the romantic relationship, the intensity of these relationships, their duration and number, partner choices, sexual behavior choices, and potential consequences that may be associated with these choices, such as violence, sexually transmitted diseases (STDs), and/or pregnancy, are also relevant factors that influence romantic relationships and future relationship quality, stability, and satisfaction.

• Adult contexts and relationships: Adult life circumstances include contextual factors such as employment, mental health, educational attainment, and exposure to stress that may influence dating and romantic relationships. Antecedent conditions and the attitudes, beliefs, and relationship behaviors developed in adolescence and early adulthood also exert varying levels of influence on the subsequent quality, stability, and satisfaction experienced in adult romantic and marital relationships.

An understanding of the information below may be helpful in developing strategies for supporting healthy dating relationships that can, in turn, lead to healthy adult romantic and marriage relationships.

Attitudes, Beliefs, and Relationship Behaviors
While it may appear that today's youth espouse something other than marriage, the reality is that their general attitudes and beliefs across race, ethnicity, and gender support the goals associated with healthy adult romantic and marital relationships. In fact, most plan on marrying at some point during their lifetime.
Interestingly, males tend to support the notion of marrying and marrying at a later age more than females. Hispanic, white, then black


males are most supportive of marriage, respectively. Unwed teen mothers tend to strongly support marriage. Although a majority of adolescents disapprove of divorce, they are realistic about the possibility that their marriages might end in divorce. Increased acceptance of cohabitation appears to have filled the void left by the absence of people getting married at an early age.

Early romantic relationships in middle school tend to focus on physical attraction, whereas later high school relationships tend to focus more on commitment and intimacy. The duration of romantic relationships generally increases with age and maturity, although some early romantic relationships may endure more than a year (and some high-school sweethearts end up married). Expressions of love and other emotions are common, including behaviors that express love, such as the giving and receiving of gifts. Friendship-building typically moves from same-sex to mixed-gender to romantic relationships with age.

Sexual behavior is most likely to be accepted and occur in romantic relationships. Interestingly, while most high school teens believe that sexual intercourse is "inappropriate," a majority become sexually active by the time they graduate from high school. Most report that they were "monogamous" sexually, with only one partner within the past year. Adolescents of low socioeconomic status are more likely to have multiple sexual partners, thus increasing their risks of pregnancy and STDs. Approximately 10 percent of teens report being part of violent romantic relationships, with blacks, Hispanics, and whites more likely to be involved in relationship violence, respectively.

Conclusion
The history of dating in the United States reflects an evolution of values, behaviors, and expectations for what dating is, and what healthy romantic relationships should look like.
Research suggests that contemporary Americans are generally looking for romantic relationships that meet more than their needs for safety, security, and survival. In fact, it would appear that Americans today are generally looking for higher levels of intimacy in their romantic relationships when compared to previous generations. This quest to find intimate and self-actualizing romantic relationships may help explain the paradox of modern dating and marriage. On the


one hand, divorce rates are high, which has generally led to a mistrust of the traditional process of dating that leads to courtship and marriage. On the other hand, the high cohabitation and divorce rates reflect the notion that Americans today may be unwilling to settle for romantic relationships that do not meet and fulfill their social, emotional, mental, physical, and spiritual needs.

Victor W. Harris
University of Florida

See Also: Courtship; Date Nights; Dating Web Sites; Hooking Up.

Further Readings
Benokraitis, N. V. Marriages and Families. Englewood Cliffs, NJ: Prentice Hall, 1993.
Harris, S. M., et al. Twogether in Texas: Baseline Report on Marriage in the Lone Star State. Austin, TX: Health and Human Services Commission, 2008.
Harris, V. W. Marriage Tips and Traps: 10 Secrets for Nurturing Your Marital Friendship. Plymouth, MI: Hayden-McNeil, 2010.
Karney, B. R., M. K. Beckett, R. L. Collins, and R. Shaw. "Adolescent Romantic Relationships as Precursors to Healthy Adult Marriage: Executive Summary." RAND Corporation and the Department of Health and Human Services. http://www.rand.org/content/dam/rand/pubs/technical_reports/2007/RAND_TR488.pdf (Accessed December 2013).
Maslow, A. H. Toward a Psychology of Being, 2nd ed. Princeton, NJ: Van Nostrand, 1968.

Dating Web Sites

According to the Web site Statistic Brain, roughly 40 million people in the United States have tried online dating. Older adults also use online dating services, but it is much more common among those in their mid-20s through mid-40s. The online dating industry makes over $1 billion annually, with the average dating Web site user spending $239 per year in 2012. Of the approximately 2,500 online dating sites in the United States, just a few dominate the industry and have over 1 million active unique members. The largest of these are Match.com, eHarmony,

and OkCupid. Meanwhile, sites located outside the United States, such as Plenty of Fish (POF.com), also have millions of U.S. members.

Online dating originated with the rise of the Internet in the 1990s, and was preceded by computer dating systems that were popular in the 1970s and 1980s. The idea of matching people electronically dates back to 1959, when two Stanford students, Jim Harvey and Phil Fialer, created a class project to match people using the IBM 650; they called it Happy Families Planning Services. While this project was merely experimental, by the 1960s, some colleges used computers to facilitate members of the opposite sex meeting at dances and other social events. Psychology professors, math students, and computer programmers soon began using computers to generate income by matching date seekers. Clients would provide data by filling out and mailing in a form or questionnaire; their responses were then transferred to punch cards and fed into the computer's database. The computer would generate possible matches for the client to contact. While the technology has changed in the past half century, the premise is the same: today's dating Web sites suggest matches based on a member's preferences.

Before the World Wide Web, bulletin-board services and newsgroups were popular online places to meet people of similar interests, much like the newspaper personal ads that had been popular for decades. However, these services did not provide any matching technology; individuals were responsible for representing themselves and sorting through replies from anyone who answered. Eventually, computers became faster and more powerful, and home computers with Internet access became commonplace in the mid-1990s. Match.com launched in 1995 and touted itself as the first Internet dating Web site, with Yahoo! Personals following closely behind.
Both of these sites acted as personal advertisements, where people posted profiles that described the demographics, personalities, and interests of the people they hoped to meet. They could describe themselves as they wished, and provide a picture that would be included with their profile. This method has not changed over the years: personal profiles are gathered into an online database that other people can browse or sort through. Many Web sites host chat rooms, where people can message each other in real time, and a successful



online meeting often paves the way for a meeting "in real life." By 1996, dating Web sites were starting to appear in directories and search engines like Yahoo! With the proliferation of people on the Internet, niche dating Web sites began to appear, targeting particular age groups, political ideologies, religions, and sexual orientations. Just about any Web site where people connect or chat, even if not for dating, can be used as a dating Web site.

By the turn of the century, dating sites began adding features to help narrow search results, including keyword searches, icebreaker messages, voice and video greetings, instant messaging, and "winks" that can be sent with a single click to suggest one's interest in another. Web sites added more content to entice users to spend more time on a site, such as advice columns and personality tests. Generally, dating sites are unmoderated, meaning that it is up to the individual to make contact with others, though many services suggest people to contact, sometimes with e-mails once a day or once a week, or through alerts that let a user know that new members who may match a person's posted criteria have signed up for the service. Many dating Web sites are aware that perfect matches are rare, and will suggest possible contacts from among those whose traits do not perfectly match the client's specified parameters.

In 2000, Neil Clark Warren launched eHarmony.com, which distinguishes itself from other sites by claiming to use a scientific approach to matching. Members complete lengthy questionnaires, and their data are analyzed via a proprietary algorithm to find ideal matches, all for a monthly fee. Initially, photos were not revealed until communication between both members had occurred via the mediated methods on the site, but eHarmony has since changed this and allows pictures to be seen up front. Other Web sites followed eHarmony's lead and began formulating algorithms to help the user find the perfect match.
Independent research and development firms grew out of this new marketing ploy, such as weAttract.com, which has developed systems for Match.com and Yahoo.com, and boasts a scientific advisory board. Taking science a step further, GenePartner claims to have the ability to help users find true love via genetic testing. However, this claim has been contentious among the scientific community. In fact, claiming that science has a hand in dating algorithms


at all has been widely debated. Eli Finkel, associate professor of social psychology at Northwestern University, has spoken out against algorithms and the science of dating Web sites, stating that these sites do not adhere to scientific standards, and that in principle their algorithms are unlikely to work. Researchers have urged the government to examine online dating sites as a form of consumer fraud.

Public attitudes toward dating Web sites were somewhat negative in the 1990s. Many believed that those who used them must be lonely or desperate, somehow incapable of making a real-world connection. Media attention often focused on the negatives of dating Web sites, such as users who were caught lying about their marital status, weight, height, or other characteristics, or those who turned out to be predators. In 1998, some of the stigma associated with online dating was alleviated when the Tom Hanks and Meg Ryan movie You've Got Mail brought the issue into the mainstream.

Some psychologists and sociologists posit that online dating sites create a shopping mentality, in which people think having more choices will increase the likelihood of finding a better option down the road, entice people to trade up, cause them to devalue their current partner, or treat people as commodities. Some people still believe that online dating is a "meat market" (a term once used for singles bars) and that habitual profile-browsing interferes with building real romantic relationships. Nevertheless, the general outlook on dating Web sites has greatly improved since the 1990s. Given the proliferation of Web-capable devices and the rise of social media, joining an online dating Web site is natural to those who have grown up with computers, smartphones, tablets, and other electronic devices. Many of these individuals do not see joining a dating site as anything other than another way to meet people.
Online dating sites offer a larger pool of eligible partners than one could hope to find by relying on family and friends for matchmaking. Customers often reveal more personal and private information on these sites than they might to someone they meet in real life for the first time. While this may make them more attractive to potential dates, it can also lure predators and those who may only want a one-night stand, instead of a lasting relationship. Spammers may also use online dating sites to gather e-mail addresses. In 2014, online


dating sites were not regulated by federal law; however, some states had bills to regulate these sites with regard to criminals or predators, such as requiring the sites to announce their lack of a background check on users.

Customers who do not keep their accounts active may be surprised to learn that their online profiles are permanent. Though a customer can cancel an account, most dating sites retain the person's information to boost their statistics. Other dating sites will create ghost profiles, or fake profiles, in order to lure in new members, claiming these "people" have shown interest in the potential member's temporary profile. This ploy is used to convince the person that finding a partner will be so simple and quick that he or she joins on the spot.

The dating Web site landscape is changing. Newer sites like HowAboutWe.com feature people proposing a date idea and asking if anyone wants to join them. Thus, instead of rejecting a person, potential dates reject the date. For Sparkology, an invitation-only site, the idea is to strive for quality rather than quantity, and ghost profiles (members who are no longer active, or who were fake to begin with) are eradicated. The future of dating Web sites will include more active participation, the inclusion of social elements, and mobile dating, for instance, moving the site to an app, or offering location-based meet-ups where an app connects a user to people in the vicinity so that they can meet.

Michelle Martinez
Sam Houston State University

See Also: Dating; Hooking Up; Speed Dating.

Further Readings
Finkel, Eli J., Paul W. Eastwick, Benjamin R. Karney, Harry T. Reis, and Susan Sprecher. "Online Dating: A Critical Analysis From the Perspective of Psychological Science." Psychological Science in the Public Interest, v.13/1 (2012).
Privacy Rights Clearinghouse.
"Fact Sheet 37: The Perils and Pitfalls of Online Dating: How to Protect Yourself." https://www.privacyrights.org/fs/fs37-online-dating.htm (Accessed May 2013).
Slater, Dan. Love in the Time of Algorithms: What Technology Does to Meeting and Mating. New York: Penguin, 2013.

Sprecher, S., A. Wenzel, and J. Harvey, eds. Handbook of the Initiation of Relationships. New York: Psychology Press of Taylor and Francis, 2008.
Statistic Brain Research Institute. "Online Dating Statistics—Statistic Brain." http://www.statisticbrain.com/online-dating-statistics (Accessed April 2013).

Day Care

The term day care is commonly used to describe out-of-home care for young children, typically while one or both parents are at work. Although the terms day care and child care are often used interchangeably, many who work in the field prefer the term child care because day care reflects when care is provided, not care providers' responsibilities. The term child care is also more inclusive, given that some parents work nontraditional schedules and need to make arrangements for the care of their children during the early mornings, evenings, or even at night. In addition, some parents prefer for their child to be cared for in the family home, and rely on relatives or nannies. Child care may also refer to arrangements for older children, which take place before or after the typical school day. However, day care here is defined as out-of-home care during standard business hours for children under the age of 5.

Over 60 percent of children under the age of 5 experience some form of routine child care. Because nonparental care is so common, understanding the effects of day care on children's cognitive and social development is critical. Although one of the primary goals of day care is to allow parents to work, children also gain opportunities for early peer interactions and the chance to hone skills that will help them adjust to school.

Historical Trends and Current Statistics
For most of the country's history, a child's mother was typically his or her sole caregiver. Prior to the 1950s, women took care of the home and children. This began to slowly change around World War II, when women began entering the workforce in growing numbers as men fought overseas. This trend reversed in the 1950s, but then picked up again in the 1960s and 1970s due to the efforts of the women's movement. By 2010, women made up 47 percent of



the total U.S. labor force. In fact, recent estimates suggest that 51 percent of American women went back to work within four months of giving birth to their first child. The Bureau of Labor Statistics estimates that approximately 65 percent of mothers with children younger than 6 years of age are employed or are currently looking for work.

As more mothers have entered the workforce, they have made arrangements for their young children to enter day care. According to the 2011 Survey of Income and Program Participation (SIPP), 12.5 million children under the age of 5 are in some type of regular child care. Approximately 25 percent of these young children were in center-based day care. On average, children are in day care for 33 hours a week; however, the rates are often higher for children of working mothers. Many day care centers have set hours for children to accommodate the typical work schedule, and parents pay on a weekly or monthly basis, rather than by the hour. Thus, the hours a child spends in day care may depend more on the center's policies and operating hours than on the parents' schedules.

Day care is a substantial cost for families with young children. Estimates from the Census Bureau indicate that the average cost of day care for children under 5 is $179 per week, which translates to roughly $9,300 per year. However, there is wide variability in the costs of different care arrangements. High-quality day care tends to cost considerably more, and out of financial necessity, some parents select less expensive day care options. Depending on the provider, such low-cost options may be of questionable quality.

Quality of Care
High-quality day care supports optimal development and is often associated with more positive child outcomes. Quality of day care is frequently assessed with rating scales, such as the Early Childhood Environment Rating Scale or the Caregiver Interaction Scale.
In evaluating quality of care, both structural and process features of the program are considered. Structural features are quantifiable and can be subject to regulation, such as caregiver–child ratios, group size, and the training and education of care providers. Determinations of whether the center successfully meets health and safety requirements are noted in evaluations of the structural features of care. States differ in the extent to which they monitor structural features, and it is the responsibility of each state to develop its minimum standards for day care centers. Process features, on the other hand, refer to factors experienced by the child. Examples of process features associated with high-quality day care include positive interactions with teachers and peers, access to quality materials, and exposure to developmentally appropriate learning activities. Process quality matters significantly for children's cognitive and social outcomes. Many researchers propose that structural factors affect process features, which in turn affect child outcomes. For instance, providers responsible for just a few children tend to be more sensitive, responsive, and warm to those in their care. Furthermore, with lower child-to-caregiver ratios, care providers are less likely to use negative control and are more likely to lead developmentally appropriate activities. Lower ratios are linked to child behavior in that children in smaller classes tend to display less anxiety and aggression. Training is another structural characteristic that can greatly influence teacher–child interactions. Providers with advanced training are less authoritarian and more child centered than those without training, which has positive effects on child outcomes. Whereas low-quality care is characterized by negative interactions with caregivers, unfriendly exchanges with peers, and a lack of cognitive stimulation, high-quality care involves supportive child–caregiver interactions, opportunities to interact with peers in positive ways, and exposure to cognitively stimulating materials and activities. Children seem to thrive and be happier in high-quality day care environments, and some researchers have found that this is reflected physiologically through decreases in the stress hormone cortisol across the day. Conversely, those in low-quality day care demonstrate increases in cortisol across the day.
Additionally, children in high-quality day care have more advanced language and cognitive abilities compared to those in lower-quality care. Some research also suggests that children in higher-quality care demonstrate more advanced social skills. The effects of high-quality child care appear to be longlasting, according to the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care and Youth Development.


This study was a longitudinal investigation of more than 1,300 children and their families, who were followed from birth through ninth grade, with repeated assessments of day care characteristics and family variables. Analyses of these data suggest that there are long-term effects of day care: higher-quality day care was associated with positive child outcomes in fifth and sixth grades, including more advanced vocabulary scores. The effects of day care may depend on a child's characteristics. For instance, lower-quality care may be more detrimental to children with difficult temperaments; at the same time, higher-quality care may be more beneficial for children with difficult temperaments than for those with an easygoing nature. A great deal of the research in this area has focused on children considered to be at risk; high-quality care may be especially beneficial for children living in poverty and those with troubled family lives. In situations such as these, high-quality day care may serve a protective function and attenuate the effects of other risk factors. Several experimental and quasi-experimental studies have suggested that high-quality day care can help buffer against other risk factors. The Perry Preschool Study is one of the best-known investigations of the effects of high-quality day care for an at-risk sample. This investigation targeted children living in poverty who were considered to be at elevated risk for school failure. Participants were randomly assigned to a high-quality day care program or to a control group, which did not receive care. In adulthood, children who had received high-quality day care were more successful on average, as indexed by more years of schooling, less criminal activity, and higher salaries.

Quantity and Type of Care

The number of hours in day care and the type of care have also been associated with child outcomes.
Although some have suggested that extensive day care may put children at risk for insecure attachments, the NICHD Study of Early Child Care and Youth Development suggests that the number of hours in care does not affect attachment if the mother is sensitive. Extensive day care, on the other hand, may be a risk factor for later behavior problems. Children who spend longer hours in day care are rated by both parents and teachers as having more externalizing problems in kindergarten.

These findings are further supported by data from the Early Childhood Longitudinal Study—Birth Cohort, which suggest that children in full-time day care display more externalizing problems than those in part-time care. Hours in center-based day care, in particular, may predict externalizing problems. Findings from the NICHD Study of Early Child Care and Youth Development suggest that children with extensive, early center-based day care exhibited greater behavior problems through sixth grade. Those in home-based care do not show this pattern of elevated externalizing behaviors. The exact mechanism underlying this finding is not known. Some researchers suggest that the larger groups that tend to characterize center-based day care may be stressful for young children and care providers. Day care centers often have more children, but also more structured activities, than less formal child care arrangements. Thus, although center-based day care may be associated with more problem behaviors, it may also be associated with more advanced linguistic and cognitive outcomes compared to other care arrangements.

Policy Implications

Studies investigating features of the parent–child relationship and day care characteristics tend to find that parenting is the most important predictor of child outcomes. Still, the quality, amount, and type of day care arrangement have a significant effect on the development of linguistic, cognitive, and social competencies. Findings from the NICHD Study of Early Child Care and Youth Development and other investigations suggest that it is important to move beyond examining the effects of whether or not a child is simply in day care to examining the quality of the care arrangement. High-quality care is associated with more advanced linguistic, cognitive, and socioemotional outcomes. Some researchers suggest that government intervention and policy change are necessary to improve the quality of day care in the United States.
However, others argue that merely improving structural features of care—such as supporting care provider training—can have a positive impact on the daily experiences of children in day care. Parents indicate that they would like their children to attend day care centers with warm caregivers, well-trained staff, and a play-based curriculum. However, families often need to make trade-offs in selecting a care arrangement. Research suggests that cost often matters more than features of care, such as provider warmth, in parents' final day care selection decisions. Indeed, the cost of high-quality day care is substantial; thus, families with limited financial means may be forced to rely on suboptimal care. Programs such as Head Start ensure high-quality care for children living in poverty. It is important to focus on improving the quality of day care arrangements, with particular attention to ensuring that all families have access to quality care. At-risk children may be those most likely to benefit from high-quality day care.

Lisa H. Rosen
Maysa Budri
Lauren Heiman
Texas Woman's University

See Also: Child Care; Head Start; Montessori; Mothers in the Workforce.

Further Readings

Belsky, J., et al. "The NICHD Early Child Care Research Network: Are There Long-Term Effects of Early Child Care?" Child Development, v.78 (2007).
Laughlin, L. "Who's Minding the Kids? Child Care Arrangements: Spring 2011." U.S. Census Bureau. http://www.census.gov/prod/2013pubs/p70-135.pdf (Accessed March 2014).
NICHD Early Child Care Research Network. "Child Care Effect Sizes for the NICHD Study of Early Child Care and Youth Development." American Psychologist, v.61 (2006).
Owen, M. T. and K. L. Bub. "Child Care and Schools." In Social Development, M. K. Underwood and L. H. Rosen, eds. New York: Guilford Press, 2011.
Vandell, D. L. "Early Child Care: The Known and the Unknown." Merrill-Palmer Quarterly, v.50 (2004).

Deadbeat Dads

Deadbeat dads are fathers who do not stay financially involved with their children. Most often, this term refers to divorced or never-married fathers who do not regularly pay child support. A number of family, legal, economic, and policy scholars have attempted to improve deadbeat dads' financial investment in their children, with minor success. Policies regarding payment of child support have become more punitive over the last 20 to 30 years, with some states withholding driver's licenses, paychecks, or tax refunds, or enforcing jail time, until child support is paid. Others have argued against jail time and withholding driver's licenses because these measures severely limit fathers' abilities to work, earn money, and pay what they owe. Some mothers—informally, without a court order to do so—withhold contact between the father and child until child support is paid. Some scholars have found that because of high unemployment and a lack of jobs, "deadbeat dads" are "dead broke," and cannot pay because they lack income. Still others find that some fathers who cannot afford to pay their child support buy necessities for their children (e.g., diapers and clothes) as they are able to afford them. Finally, some scholars argue that fathers who are financial "deadbeats" may be emotionally invested in their children, whereas some fathers who consistently pay their child support may be emotional "deadbeats." Being an emotional deadbeat, however, seems to carry far less cultural stigma than being a financial deadbeat.

Definition of Deadbeat Dads

The term deadbeat dad refers to fathers who have shirked their financial responsibilities to their children. This term is rarely applied to married fathers. Instead, most use the term when referencing divorced or never-married fathers who do not consistently pay their child support. It should be noted that there are deadbeat moms, though that phrase has not become as popular as deadbeat dads. Men who remain financially, but not emotionally, involved with their children generally escape this negative label. Many methods used to enforce and legislate collection of child support do not appear to encourage payment.
These methods differ by state, but can include wage garnishment, revoking driver's licenses, or even jail time. In 1996, the Personal Responsibility and Work Opportunity Reconciliation Act was passed. Part of this act requires employers to report newly hired employees' names, addresses, and Social Security numbers to help state agencies collect child support from nonpayers. The act also requires employers to withhold income from employees found to be delinquent in child support. Some researchers, however, have found that some fathers simply are not able to pay their child support because they are unemployed or underemployed (working, but not earning enough money to cover all expenses), which has inspired the term deadbroke dads to help explain why some fathers may be deadbeats. One study found that about a third of men required to pay child support were unable to do so because of a lack of funds. Another study of deadbeat dads found that among fathers who owed child support, the average annual income was $6,349 and the average amount owed was $300 each month.

Importance of Emotional Involvement

Researchers find that fathers who remain involved with their children generally continue to support them financially. This suggests that these fathers need support as parents that does not center only on their ability to pay child support, but that also keeps them engaged with, and responsible for, their children. Thus, fathers' emotional investment in their children is critical to children's development. Many Americans, including academicians, believe that children benefit from frequent contact with their divorced or single fathers, but researchers who have tested this assumption have found little or no relationship between child well-being and frequency of father contact. Researchers have found, however, that engaging in responsive parenting techniques (i.e., responding to children in nurturing and developmentally appropriate ways) as well as authoritative parenting styles (i.e., high relationship quality and warmth, praising children's accomplishments, responsive control, and limit-setting) improves child well-being. In other words, when divorced fathers primarily or only engage in leisure activities with their children, they are not parenting in ways that benefit their children even though they are spending time with them.
When they parent in responsive ways, they are better able to monitor, develop close affective bonds with, teach, and communicate with their children. Divorced fathers who use authoritative parenting styles have children with fewer internalizing and externalizing problems and who display less emotional distress. Responsive and authoritative fathering also benefits children's IQ scores and school performance, and lowers the risk of adolescent delinquent behavior and drug use.

Depiction of Deadbeat Dads

Despite the importance of fathers' physical and emotional involvement in their children's lives, societal concerns have primarily focused on fathers' financial responsibilities. For example, newspapers across the country have reinforced this breadwinning norm with headlines such as "Deadbeat Dad Dragnet: Feds Nab Well-Off Men Whose Kids Live in Poverty," and "City's Deadbeat Dads' Hall of Shame: Millions Owed by the Men Who Shirk Child Support." Sometimes, the names and pictures of fathers who have not paid child support are printed with the articles. Divorced and never-married fathers who pay child support, but withhold emotional resources, generally have escaped these negative views, and some scholars doubt that the term emotional deadbeat will ever be as popular as financial deadbeat. This has the effect of strengthening societal assumptions that fathers' payment of child support is essential, but that their emotional and physical involvement is not as important. A recent study on stereotypes, however, found that young adults believe that fathers who are financially uninvolved are also emotionally uninvolved. For example, these young adults associated deadbeat dads with being lazy, uninterested in their families, financially irresponsible, and unable or unwilling to support their families. This may be because expectations for fathers have changed. Fathers in the 21st century are expected to be emotionally and physically involved with their children, in addition to being financially invested in them, and thus deadbeat may be more broadly defined.

Jessica Troilo
West Virginia University

See Also: Alimony and Child Support; Child Support; Child Support Enforcement; Cultural Stereotypes in Media; New Fatherhood; Office of Child Support Enforcement.

Further Readings

Mandell, Deena. Deadbeat Dads: Subjectivity and Social Construction. Toronto: University of Toronto Press, 2002.
Troilo, Jessica and Marilyn Coleman. "College Student Perceptions of the Content of Father Stereotypes." Journal of Marriage and Family, v.72 (2008).
Wimbley, Catherine. "Deadbeat Dads, Welfare Moms, and Uncle Sam: How the Child Support Recovery Act Punishes Single-Mother Families." Stanford Law Review, v.53 (2000).

Death and Dying

Death is universal, yet one's life expectancy and feelings about death and dying are shaped by time and place, ethnicity, culture, and social status. Medical advances have vastly increased life expectancy in many parts of the world, which has also influenced people's relationship to death and dying. However, death affects the lives of all families. The loss of a parent, child, or sibling changes the composition of a family and means that important roles are sometimes left vacant. Death on a large scale from warfare or plague can alter societal ideas about family.

Colonial Era

Death was one of the first consequences of the encounters between Europeans and Native Americans. Native Americans sometimes died after contracting diseases from Europeans to which they had no immunity. Viruses like smallpox spread along traditional trade routes. Europeans sometimes interpreted the spread of epidemics among Native American peoples as proof of divine sanction for conquest. In the colonial era, both Native Americans and Europeans faced violent death at the hands of one another. The tenor of these deaths varied, and often depended on shifting alliances in a rapidly changing world. Warfare was used to assert authority or to resist incursion. English settlers used mass extermination techniques against the Pequots, culminating in the Mystic Massacre in May 1637. This war broke the power of the Pequots and deprived them of their allies. Death by war, disease, or natural causes affected family structure. Although military tactics like the Mystic Massacre targeted all members of a society, in most battles, men were more likely to be both soldiers and victims. In some Native American societies, family members killed in battle were replaced by captives who would be adopted into a family and a tribe. In European American communities, Protestant Christianity shaped families' reactions to the deaths of their loved ones. Their religious convictions affected the way that they faced their deaths and their understandings about the proper mourning for loved ones who had died. Family and community members monitored each other's behaviors, expressing concern when actions after the death of a relative or friend seemed to exceed the bounds of what was deemed appropriate. This was in part the purpose of the social aspects of mourning: demanding expressions of grief from family members, and imposing limits on those same expressions of grief. In exchange, religion offered comfort in the notion of an afterlife, where loved ones would be reunited in a place devoid of the sorrows of Earth. Few written records on beliefs about death and dying exist from slaves in the colonial era, but archaeological records and accounts of slave burials by whites give us a glimpse into how African Americans handled death. In many plantation communities, slaves buried their dead following African burial traditions, including the placement of the bodies and the inclusion of objects in and around the grave. Whites remarked on the clamor of slave funerals, suggesting that African American practices were diametrically opposed to the comparatively quiet and controlled mourning practices at white funerals.

Nineteenth Century

By the 19th century, Protestant American culture emphasized the "good death." In a good death, the dying would be surrounded by family members and would show signs of peace and acceptance. If the dying person was still conscious, a loved one would ask if he or she felt assured of salvation. If the person was no longer able to speak, a family member reported whether the dying person's demeanor pointed toward salvation. The good death was reflected in the rural cemetery movement.
In the colonial period, the dead were buried in the middle of towns or in churchyards. In 1831, the Massachusetts Horticultural Society purchased 72 acres of land in Cambridge and Watertown in which to bury the dead in a garden setting. Mount Auburn Cemetery became an idealized final resting place. According to advocates for rural cemeteries, death was a natural part of life; these advocates emphasized bodily, rather than spiritual, corruption, and spoke openly about post-death decay. At the same time, these rural cemeteries provided a beautiful place for family members to visit and remember their loved ones. Memorial Day visits to cemeteries became a time for family members and friends to gather, decorate graves with flowers, have picnics, and socialize. Many cultures embrace mourning practices, and in the 19th century, wealthier European American families staged lavish funerals to both honor the dead and show the status of the living. In order to demonstrate their grief, widows wore "widow's weeds": an ensemble of a black dress, veil, and bonnet, trimmed only in black crepe. Wearing jewelry was allowed if it was also black and somber. Brooches or other ornaments were decorated with symbols of death, or were fashioned from the hair of the deceased as a constant reminder of that loss. Disease continued to kill millions. A yellow fever epidemic killed New York residents in 1803, and another hit the South hard in 1841. In the 1830s and 1840s, people throughout the world died during a cholera pandemic. In the late 1830s, smallpox devastated Plains Indian populations. Wealthier American families left the cities during epidemics, seeking to escape disease by biding their time in the country. Poorer residents had no choice but to take their chances. Within poor households, family members served as caregivers and nurses, tending their loved ones back to health or ushering them toward death. When the poor died, their bodies were often left unburied for days before they were interred in mass graves, leaving family members no memorial site to visit. As white settlers continued to push west, Native Americans resisted encroachment on their land.
The defeat of many Native American tribes in 19th-century warfare, and the withdrawal of British troops from the frontiers after the War of 1812, led to further territorial expansion by whites and further deaths of Native Americans, as witnessed during the Trail of Tears, in which approximately 4,000 Cherokees died on their forced march into Indian Territory. Resettled in Indian Territory, these tribes needed to rebuild families and remake familial traditions in a drastically different setting. The largest American war of the 19th century was the U.S. Civil War, which changed Americans' relationship to death. The military institutions and the governments of the United States and the Confederacy were ill-equipped to deal with the hundreds of thousands of soldiers who died on and off the battlefield. Despite soldiers' efforts to recreate the good death, they were often buried in mass unmarked graves, and if they died in battle, their bodies were often so dismembered that identification was impossible. Family members of the dead on both sides worked to locate the remains of their loved ones and bring them back to their native soil for burial.

[Photo caption: The Smithsonian Institution invited the NAMES Project to display a portion of the AIDS Memorial Quilt in Washington, D.C., in July 2012. Family and friends of AIDS victims have created panels to remember their loved ones since 1987. The quilt was last displayed in its entirety in 1996, when it covered the entire National Mall.]

Twentieth and Twenty-First Centuries

As in earlier centuries, death and dying remained tied to factors such as race, geography, and class. In the South, African Americans with economic or political power were targeted for lynching. In 1955, when 14-year-old Emmett Till was murdered in Money, Mississippi, for flirting with a white woman, his mother, Mamie Till Bradley, decided to hold an open-casket funeral to show the world what had happened to her son. The African American press ran pictures of Till's mutilated body, and the sympathetic white press reported on the reactions to both the funeral and the trial that followed.




Situating this death within the family, showing that Emmett was not just an African American boy, but also the son of a grieving mother, helped gain sympathy for the cause. By the 1960s, murders of civil rights activists, depicted as the sons and daughters of loving families, were mainstream news, helping fuel a growing civil rights movement. Between 1900 and 1978, life expectancy in the United States increased by 26 years, from 47 to 73 years, and then rose to 78 years by 2010. This was largely a result of medical advances that conquered many childhood illnesses and provided better prenatal care and safer births. While average life expectancy has risen in all racial and socioeconomic categories, life expectancy remains lower for nonwhites and those in the lowest income brackets. Besides increased life expectancy, the biggest change in the 20th century was the depersonalization of death and the movement of death from the home to the hospital, a trend that continued through much of the century. By 1989, nearly half of American deaths occurred in the hospital. In earlier centuries, care of the dying was the responsibility of the family, with watchers attending the last hours of a person's life, if possible. It was also intensely personal, as family members cared for the dying and the corpse. Now, because medical care can prolong life, end-of-life care is often provided by medical professionals in the hospital or nursing home, or with the help of hospice care. Once dead, the body is typically handled by funeral home companies. Reacting to the depersonalization of death, by the end of the 20th century, many Americans expressed the desire to die at home, surrounded by family members. Palliative care, which lessens the severity of pain, makes it possible for patients to spend more time at home. The modern hospice movement emerged in the late 1960s in Europe, and the first hospice in the United States opened in 1974.
Hospice care is designed to support people in the final stage of life, when treatment is no longer an option. Hospice helps the dying to experience as little physical or psychological pain as possible. Despite the expressed wish to die at home, even hospice cannot help everyone achieve this goal. As of 2007, only 24 percent of Americans managed to achieve a home death, while 40 percent of those over the age of 85 died in nursing homes or other long-term care facilities. The home does not always provide what is needed to help the dying; therefore, hospice care is often provided in hospitals or separate facilities. While these settings take the patient out of the home, they create a home-like environment where family members and friends can gather, providing love and support for the dying and for each other. Funerals and other mourning rituals allow families to mourn the passing of their loved ones and to assign meanings to those deaths. In Jewish tradition, immediate family members sit shiva for seven days. According to custom, in this period, mourners sometimes cover mirrors. When the shiva is over, mourners are expected to say Kaddish, or prayers, sometimes as many as three times a day, for a period of a year. Kaddish reminds mourners to praise God, despite the loss of their loved ones. Families can also find comfort in public memorials. Family members and other loved ones leave tokens by public memorials such as the Vietnam Veterans Memorial Wall. During the worst of the AIDS crisis in the United States, lesbian, gay, bisexual, and transgender (LGBT) people and others remembered their loved ones through the creation of quilt panels for the AIDS Memorial Quilt. By doing so, LGBT people claimed kinship with lovers and friends in ways that mainstream society often denied. The terrorist attacks of September 11, 2001, had a direct impact on hundreds of American families. As the American public publicly and collectively mourned, holding memorials around the country for those who lost their lives in the attacks, family members of the victims came together. One of the organizations that emerged was the September 11th Family Association, which puts a human face on the tragedy, reminding people that death resonates in individual families, as well as in the nation.

Sarah L. Swedberg
Colorado Mesa University

See Also: African American Families; Caring for the Elderly; Civil Rights Movement; Demographic Changes: Aging of America; Native American Families; Widowhood.

Further Readings

Ariès, Philippe. The Hour of Our Death. Helen Weaver, trans. New York: Oxford University Press, 1981.



Faust, Drew Gilpin. This Republic of Suffering: Death and the American Civil War. New York: Knopf, 2008.
Isenberg, Nancy and Andrew Burstein, eds. Mortal Remains: Death in Early America. Philadelphia: University of Pennsylvania Press, 2002.
Laderman, Gary. The Sacred Remains: American Attitudes Toward Death, 1799–1883. New Haven, CT: Yale University Press, 1996.
September 11 Families' Association. http://911families.org/about-us (Accessed July 2013).

Defense of Marriage Act

On September 21, 1996, President Bill Clinton signed into law the Defense of Marriage Act (DOMA), marking the first time that the federal government defined the parameters of civil marriage as between one man and one woman. Passage of DOMA was also the first time that gender and sexual orientation status were utilized in federal legislation to pass a law based on marriage rights. Debate surrounding couples and family life is nothing new, and policymakers continue to set parameters for how people can live their lives, but law and policy evolve as a reflection of changing values and attitudes. In the years since DOMA was passed, the United States has shifted in its perception of same-sex marriage. When DOMA was signed into law, 27 percent of Americans believed that same-sex marriage should be legal. By 2012, 53 percent of Americans believed that same-sex marriage should be legal, and that those involved deserve the same benefits given to those in opposite-sex marriages. Defenders of traditional marriage—those who support the nuclear family ideal of a married man and woman with children—endorse DOMA as necessary to protect children and the institution of marriage. Opponents of DOMA cite the law's unconstitutionality and the unnecessary strain that it places on families that do not mirror the government's definition of marriage.

History of DOMA

The Defense of Marriage Act emerged in response to Baehr v. Miike, a case in Hawaii that challenged the state's ability to exclude same-sex couples from obtaining a marriage license. During the 1996 U.S. presidential race—the same year that Hawaii became the first state to secure civil marriage for gay and lesbian couples—same-sex marriage was a divisive topic. At the Iowa Caucus later that year, nearly all of the Republican presidential candidates signed a pledge to protect traditional marriage. Same-sex marriage stood in direct opposition to conservative family ideals, and the Republican Party mobilized around the fight to maintain the widely held image of the heterosexual nuclear family. Thus, a bill was introduced in the 104th Congress in order to create a standard definition of marriage, and thereby family, for U.S. citizens. Considered an act to define and protect the institution of marriage, DOMA is a succinct piece of federal legislation containing only two substantive sections. The first pertinent section allows states the right to deny recognition of marriages between same-sex couples that occur in other states. States may also pass legislation that specifically bars same-sex couples in their state from obtaining a civil marriage by defining marriage as between one man and one woman. In 2014, more than states had such legislation in place. The second portion of DOMA is specifically tied to the federal government. The repudiation of same-sex marriages that occur in states where such acts are legal is sustained by this section of DOMA. Therefore, same-sex marriages that legally occur are never recognized by the federal government, and can be ignored by state governments. A final piece of DOMA further clarifies the definition of spouse as referring only to a person of the opposite sex who is either a husband or a wife. The actions taken by the 104th Congress to halt same-sex marriage encompassed a variety of repercussions for diverse family forms. The Government Accountability Office (GAO), an extension of the legislative branch for government oversight, recognizes 1,138 federal statutory provisions that are contingent on marital status, and are therefore denied to same-sex couples and their families.
Some of these federal provisions include access to employment benefits (including health care, pensions, and Social Security), parenting and adoption rights, joint filing of federal taxes, immigration and naturalization rights, and the ability to determine medical care. The Defense of Marriage Act targets families that diverge from the traditional image of the family, especially gay and lesbian families and polygamous




families. Research on polygamous families, however, is still limited and rarely discussed by DOMA opponents. What is known is that an estimated 600,000 same-sex couples and somewhere between 50,000 and 150,000 polygamous families live in the United States. Many of these families include children, who may experience stress and anxiety because their families are threatened under legislation such as DOMA.

DOMA Today

On March 27, 2013, nearly 17 years after DOMA was passed, United States v. Windsor was argued before the U.S. Supreme Court, contesting the law’s constitutionality. Edith Windsor and her wife were legally married in the eyes of the state of New York. Upon her wife’s death, Windsor was required to pay a large sum in taxes on the inheritance of her wife’s estate; if the U.S. government had recognized their marriage, as it does opposite-sex marriages, she would not have had to pay federal estate taxes. On June 26, 2013, the U.S. Supreme Court ruled that DOMA was unconstitutional under the due process clause of the Fifth Amendment, which prevents the government from arbitrarily denying life, liberty, or property. Justice Anthony Kennedy wrote the majority opinion, which stated that “DOMA’s principal effect is to identify a subset of state-sanctioned marriages and make them unequal.” President Obama lauded the Court’s decision as a “victory for American democracy.” In 2013 alone, seven states legalized same-sex marriage, four of them (New Jersey, Hawai‘i, Illinois, and New Mexico) after DOMA was overturned. Society, of which the family is a crucial component, is in a constant state of flux. Values and attitudes evolve, and the state of the American family is often central in these debates. Eventually, it is hoped that U.S. policies will come to accurately reflect the diversity of individuals and family life within the nation’s borders.

Katie M. Barrow
Katherine R. Allen
Virginia Tech

See Also: Civil Unions; Domestic Partner Benefits; Gay and Lesbian Marriage Laws; Same-Sex Marriage; Transgender Marriage.


Further Readings
Defense of Marriage Act, H.R. 3396, 104th Congress, 2nd Sess. http://www.gpo.gov/fdsys/pkg/BILLS-104hr3396enr/pdf/BILLS-104hr3396enr.pdf (Accessed June 2013).
Rimmerman, Craig A. and Clyde Wilcox, eds. The Politics of Same-Sex Marriage. Chicago: University of Chicago Press, 2007.
United States v. Windsor, 570 U.S. (2013). http://www.supremecourt.gov/Search.aspx?FileName=/docketfiles/12-307.htm (Accessed December 2013).
U.S. Government Accountability Office. “Defense of Marriage Act: Update to Prior Report.” http://www.gao.gov/products/GAO-04-353R (Accessed June 2013).

Delinquency

Delinquency is defined as illegal, deviant, or antisocial actions on the part of adolescents. Expressions of antisocial behavior vary by age. During elementary school, it usually consists of actions such as bullying, lying, resisting adult authority figures, and exhibiting anger when desires are thwarted. Such children are often described as oppositional/defiant. For adolescents, delinquency includes illegal behaviors such as stealing, destroying property, and using controlled substances, as well as behaviors that are deviant but not necessarily illegal, such as lying, cheating, or engaging in casual sex. Antisocial behavior in adulthood can consist of illegal activities, as well as a wide range of activities that are deviant but not necessarily illegal, such as substance use, gambling, lying, and risky sex. One of the most widely accepted findings in criminology and developmental psychology is that problem behavior in childhood is a strong predictor of later problem behavior. This continuity of antisocial behavior has been found in numerous longitudinal studies in the United States and elsewhere. While it is not certain that individuals who show antisocial behavior in childhood or adolescence will go on to become adult criminals, it is almost always the case that individuals who show antisocial behavior in adolescence or adulthood exhibited antisocial behavior during childhood. These findings indicate that antisocial tendencies likely manifest



during childhood. During this time, parents are the primary agents of socialization. Although congenital traits like personality and temperament are also salient, a child’s psychological and behavioral development is heavily influenced by the family environment. This leads to the hypothesis that a major cause of delinquent behavior is ineffective parenting. Indeed, research shows that parenting explains more variance in delinquency than any other single factor. Some of the earliest evidence linking parenting to delinquency came from the pioneering work of Harvard University criminologists Sheldon and Eleanor Glueck. In 1939, they began their most well-known study, called Unraveling Juvenile Delinquency. Over a period of 10 years, they collected data from a sample of 500 delinquents from the Massachusetts correctional system and 500 nondelinquents enrolled in Boston schools. From this work, they learned that (1) the earlier the age of onset of delinquency, the more serious and persistent the criminal career; (2) antisocial behavior tends to be relatively stable across time; and (3) the most important determinant of delinquent behavior is family environment. Specifically, children were at high risk for delinquency when their parents failed to provide adequate supervision or engaged in lax or inconsistent discipline, and when emotional ties between parent and child were weak. These findings have been corroborated by a number of studies since then. While seminal, the Gluecks’ research was criticized for being atheoretical. A number of theories that focus on the role of family processes in the development of delinquent behavior have since been developed. One of the first was social control theory. Rather than focus on why some people are deviant, this theory focuses on why people conform. According to criminologist Travis Hirschi, the answer may be that most people establish a bond to society, while delinquency is a result of weak or broken social bonds.
In short, delinquents do not get along with their parents or care about others, have no long-term goals, do not participate in conventional activities, and do not believe in the legitimacy of laws. The theory does not address, however, why delinquents lack these social bonds. One possibility is that parents exert much influence over whether children develop them. In light of findings from developmental psychologists regarding the influence of parenting on

delinquency, Hirschi and colleague Michael Gottfredson proposed the self-control theory. It asserts that individuals low in self-control are attracted to deviance and crime. Such individuals are impulsive, self-centered, risk-taking, and unconcerned with long-term consequences. Crime provides instant gratification and a way to avoid activities that require time, energy, and delayed gratification. Furthermore, everyone comes into the world low in self-control, but in time, most develop it as a result of exposure to parents and other authority figures who set behavior standards, monitor behavior, and consistently apply consequences for violating expectations. Children of parents who are lax or inconsistent in establishing such standards and consequences fail to develop self-control. Research suggests that only a portion of the relationship between parenting and antisocial behavior can be explained by self-control, which means that while self-control is important, other factors must also be considered.

The Coercion Model

Gerald Patterson used the principles of social learning theory to develop the coercion model of delinquency. This process begins with an explosive, critical parent who produces an angry, defiant child. At least half of the time, aversive exchanges terminate with the parent capitulating to the child’s demands. The result of this pattern of interaction is that both the child’s antisocial behavior and the parents’ inept discipline are reinforced. This style of interaction is often generalized to peers, leading to rejection by conventional peers as well as by teachers, which results in poor academic performance. By default, these socially rejected youths form friendships with each other, and these deviant peer groups serve as a training ground for delinquent activities. Importantly, Patterson distinguishes these youth, called “early starters,” from “late starters,” those who experiment with deviant behavior in middle-to-late adolescence.
The latter group tends to adopt delinquent activities at an older age and discontinues them within a short period. The early starters are at risk for chronic offending during adolescence, criminal careers as adults, and a dismal life-course trajectory. There has been robust support for the coercion model. Its strengths include its focus on reciprocal family processes, the clear links it draws between parenting and entrance into a deviant peer network, and its account of how such




peers amplify the antisocial tendencies learned in the family. It does not, however, explain why a large proportion of antisocial children do not go on to be delinquent adolescents or adult criminals.

The Life-Course Perspective

The life-course perspective attempts to explain both continuity in and desistance from antisocial behavior. Longitudinal research shows that the majority of antisocial children go on to lead conventional lives. Past research suggested that 15 to 20 percent of 10-year-old boys are oppositional/defiant, approximately 10 percent of adolescents are severely delinquent, and roughly 5 percent of adults engage in criminal behavior. While all seriously delinquent adolescents were oppositional/defiant children, and all adult criminals were serious delinquents, only about half of all conduct-disordered children go on to engage in serious delinquency, and only half of serious delinquents go on to engage in crime as adults. Such youth are said to “age out” of antisocial behavior due to prosocial bonds, which are a key factor in Robert Sampson and John Laub’s age-graded life-course explanation of antisocial behavior. For oppositional-defiant children, whether or not they go on to engage in adolescent delinquency is largely due to social bonds that develop as a result of improved parenting, academic success, and affiliation with a conventional peer group. A successful romantic relationship with a conventional partner or a job to which they are committed are examples of social bonds that reduce the chances that an adolescent delinquent will engage in crime as an adult. Research finds strong support for the theory, though it has been suggested that the influential social bonds in adulthood should be expanded to include peer relationships. Both theory and findings from empirical research demonstrate the crucial role of parents in deterring antisocial behavior.
It is important to note, however, that factors such as living in a disadvantaged neighborhood, lack of occupational opportunity, stressful life events, and racial discrimination have also been shown to influence participation in delinquency and crime. It may be the case that many of these social factors are mediated by family processes. Theories such as the age-graded life-course theory provide a framework for merging family processes with these broader social influences. Future research would benefit from an examination of these issues.


Research findings provide insights that can be useful to policy and program development. For instance, prevention and intervention programs aimed at improving parenting skills and parent-child communication can be effective in reducing youth participation in antisocial behavior. For example, Gene Brody and colleagues have had substantial success with their Strong African American Families project, which aims to reduce adolescent substance use and risky sexual behaviors. Furthermore, social policies that are focused on supporting families, such as affordable childcare and universal health care, can help alleviate some stress experienced by parents, thus allowing them to spend more time and energy engaging in parenting activities associated with positive youth development.

Leslie Gordon Simons
Arizona State University

See Also: Adolescent and Teen Rebellion; Bullying; Discipline; Life Course Perspective; Parenting; Problem Child; Runaways and Homeless Youth.

Further Readings
Glueck, Sheldon, and Eleanor Glueck. Unraveling Juvenile Delinquency. Cambridge, MA: Harvard University Press, 1950.
Hirschi, Travis. Causes of Delinquency. New Brunswick, NJ: Transaction, 2002.
Regoli, Robert, John Hewitt, and Matt DeLisi. Delinquency in Society, 8th ed. Sudbury, MA: Jones & Bartlett, 2009.
Reid, John B., Gerald R. Patterson, and James J. Snyder. Antisocial Behavior in Children and Adolescents: A Developmental Analysis and Model for Intervention. Washington, DC: American Psychological Association, 2002.

Demographic Changes: Age at First Marriage

Marriage rates have been steadily declining in the United States since the 1970s, and for those who do marry, the average age at first marriage has been rising. This pattern is consistent throughout



many developed Western nations, including Canada, France, and the United Kingdom. Along with a decrease in the number of people marrying has come an increase in cohabitation, which increasingly serves as the first intimate living arrangement for young people. Throughout U.S. history, statistics regarding first marriages and how they vary by sex, education, and race have changed, with the trend toward cohabitation instead of marriage one of the most drastic and enduring changes. Several theoretical perspectives have been used to explain changes in age at first marriage. Historically, marriage rates and age at first marriage have risen and fallen in accordance with periods of economic prosperity, particularly in the postwar periods following World War I and World War II. This means that during peacetime and periods of economic growth, such as the 1920s and the late 1940s, more people get married, and at a younger age, than during periods of war or economic decline, such as the Great Depression of the 1930s. With the rise of the women’s movement in the late 1960s through the mid-1980s, women entered the labor force en masse. This had the effect of raising the age at first marriage and decreasing the rate of marriage. When women participate in the labor market, the cost of having children rises, which results in many women delaying fertility and marriage. Marriage is also delayed because working women, many of whom are financially independent, can afford to be more selective when picking a mate. As more women obtain higher education, their rate of marriage increases, but their age at first marriage remains high. Women with more education are able to be more selective about choosing a partner as a result of their economic stability. Despite the later age at first marriage for well-educated women, their rate of marriage remains higher than for women who are less educated.
These findings are somewhat surprising because typically, individuals with a college education have more lenient views toward alternatives to marriage, such as cohabitation. However, it has been argued that because the purpose of attending college has become more geared toward obtaining a job, education may be encouraging more conservative views that support family-oriented choices. While employment opportunities for women have increased over the past 40 years, there has

been rising uncertainty in the job market for men. This explains the delay in age at first marriage and a decrease in the marriage rate for men. Marriage depends on the economic stability of both partners, so when job instability increases, the rate of marriage decreases, and the age at first marriage increases. Also, more educated and/or working women are increasingly looking for a husband with good career prospects. The legalization of abortion, the rise of the birth control pill and other methods of birth control, and the enactment of laws requiring fathers to pay child support for children born out of wedlock have all helped delay the age at first marriage and decrease rates of marriage. The sexual revolution of the 1960s and 1970s led to the decreased association of marriage and sex. With little worry about pregnancy, the benefits of marriage no longer outweighed those of remaining single, so marriage rates decreased. The increase in cohabitation before marriage has decreased rates of marriage and delayed age at first marriage. Over time, the acceptability of cohabitation has increased, particularly before marriage, and in some cases as a substitute for marriage. These trends reflect the sexual revolution of the 1960s, as well as increasing women’s equality. Despite the fact that cohabitation has become more accepted, many believe that marriage still has benefits that outweigh those of remaining single. The increasing acceptability of children born out of wedlock has also contributed to the delay in marriage and decreased rates of marriage because it is no longer socially necessary to marry after becoming pregnant or having a child. Research has shown that there are significant differences in rate of marriage and age at first marriage according to race. First, African Americans are more likely to delay marriage and marry at lower rates than whites.
Black high school dropouts have a higher likelihood of cohabiting and a lower likelihood of marriage, whereas white high school dropouts have a greater chance of both cohabitation and marriage. Similarly, having any college education increases the chance of cohabitation and marriage for blacks, but for whites, college decreases the chance of cohabitation and increases the chance of marriage. Generally, college increases the chance of marriage, but this effect is amplified for African Americans. Europe is also experiencing a decrease in marriage rates and an increase in the age at first




marriage, along with an increase in the rate of cohabitation. As in the United States, the increased participation of women in the labor force has contributed to a decrease in the marriage rate and an increase in the rate of cohabitation. Higher education has also increased the rate of marriage in Europe; however, no evidence indicates that a woman’s education or her role in the labor force has affected age at first marriage. Interestingly, some European countries, along with Canada, have been experiencing a decrease in marriage, an increase in cohabitation, and an increase in fertility, further exemplifying the decoupling of marriage, sex, and children.

Jay Teachman
Carter Anderson
Lucky Tedrow
Western Washington University

See Also: Birth Control Pills; Cohabitation; Contraception and the Sexual Revolution; Demographic Changes: Aging; Demographic Changes: Cohabitation Rates; Demographic Changes: Divorce Rates; Demographic Changes: Zero Population Growth/Birthrates; Later-Life Families; Living Apart Together.

Further Readings
Elliot, Diana B., et al. “Historical Marriage Trends From 1890–2010: A Focus on Race Differences.” Paper presented at the annual meeting of the Population Association of America, San Francisco, 2012.
Goldstein, Joshua R. and Catherine T. Kenney. “Marriage Delayed or Marriage Forgone? New Cohort Forecasts of First Marriage for U.S. Women.” American Sociological Review, v.66/4 (2001).
Kalmijn, Matthijs. “Explaining Cross-National Differences in Marriage, Cohabitation, and Divorce in Europe, 1990–2000.” Population Studies: A Journal of Demography, v.61/3 (2007).
Lee, Gary R. and Krista K. Payne. “Changing Marriage Patterns Since 1970: What’s Going On, and Why?” Journal of Comparative Family Studies, v.41/40 (2010).
Martin, Steven P., and Sangeeta Parashar. “Women’s Changing Attitudes Toward Divorce, 1974–2002: Evidence for an Educational Crossover.” Journal of Marriage and Family, v.68 (2006).
Oppenheimer, Valerie Kincade.
“Cohabiting and Marriage During Young Men’s Career Development Process.” Demography, v.40/1 (2003).


Rodgers, Willard L. and Arland Thornton. “Changing Patterns of First Marriage in the United States.” Demography, v.22 (1985).
Stevenson, Betsey, and Justin Wolfers. “Marriage and Divorce: Changes and Their Driving Forces.” Journal of Economic Perspectives, v.21/2 (2007).
Weeks, John R. Population: An Introduction to Concepts and Issues, 11th ed. Belmont, CA: Thomson Higher Education, 2011.

Demographic Changes: Aging of America

Throughout the 20th century and into the 21st century, the United States has experienced tremendous demographic growth in its older population. In 1900, only 3 million people were 65 years old or older, but by 2010, that total had surged to more than 40 million. In fact, the ranks of the U.S. older population will continue to swell between 2010 and 2030 as the baby boom generation—those born between 1946 and 1964—become senior citizens. As Table 1 shows, by 2030, the 65-plus population in the United States will reach 72 million, according to the Federal Interagency Forum on Aging-Related Statistics. Significant declines in mortality risks, gains in life expectancy, and decreases in fertility rates have dramatically altered the age structure of the U.S. population. In 1950, children represented about one-third of the total U.S. population, and only 8 percent of Americans were elderly. As of 2013, the proportion of the U.S. population that is children has declined to about 25 percent, whereas the share that is elderly has risen to 13 percent. Correspondingly, the median age in the United States rose from 29.5 years in 1960 to 37.2 years in 2010. In fact, demographic projections suggest that although the share of the U.S. population that is children will remain constant during the next four to five decades, by 2050, one out of every five Americans (20 percent) will be aged 65 and older. These demographic shifts have also dramatically altered the age structure of most American families from that of a pyramid to a beanpole. In essence, families now have more generations alive, but fewer members in each generation.



Only 21 percent of Americans born in 1900 had any living grandparents by the time they reached age 30; in 2000, 76 percent approached the age of 30 with at least one living grandparent. Thus, it is likely that intergenerational relationships between grandparents and their children, their grandchildren, and even their great-grandchildren will play a greater role in family life in the future. The demographic shift in aging is not just a U.S. phenomenon; it is a global phenomenon. The United Nations estimates that by 2050, the proportion of the world’s population aged 65 and older will more than double, from 7.6 to 16.2 percent. Accordingly, attention has turned to understanding the changes occurring within the older population, especially changes in its age structure. The reality is that the world’s older population is not only getting bigger; it is also getting older. In 2008, individuals aged 80 and older made up 19 percent of the older population globally—26 percent in developed countries, and 15 percent in developing countries. Slightly more than half (52 percent) of the world’s 80-plus population live in six countries: China, the United States, India, Japan, Germany, and Russia. As demographers and policy leaders note, the world is growing grayer. Historical data underscore the tremendous gains in life expectancy in the United States. In 1900, the average life expectancy at birth was only 49.2 years; by 2010, it had reached 76.2 years for men and 81.1 years for women. The oldest-old population, those aged 85 and older, is the fastest-growing segment of the U.S. older population. In 2011, the 65 to 74 age group (21.4 million) was almost 10 times larger than in 1900; however, the 75 to 84 age group (12.8

million) was 16 times larger, and the 85-plus age group (5 million) was 40 times larger. In fact, there were 53,364 centenarians (persons aged 100 or more) in 2010, a 66 percent increase from the 1980 figure of 32,194. By 2050, the oldest old will account for 24 percent of elderly Americans and 5 percent of all Americans. Furthermore, 5 percent of this future oldest-old cohort will be centenarians. Given societal aging, some suggest that the chronological age marker of the oldest old may need to be raised from 85 to 90, or even 95. The aging of the baby boomer generation has focused the attention of policymakers on the implications of an increasingly older or long-lived society. Much of the policy debate centers on the impact of an increasing number of older citizens on the nation’s health care, finance, and pension systems. Yet, absolute size and share of the total population are not the only factors that will determine the impact of America’s older population on the nation’s social institutions; characteristics of the older age group such as income and wealth, health and disability, living arrangements, and social networks will also be determinants. Although advanced old age is associated with a greater risk of economic hardship, disabling illnesses, and social isolation, many older Americans are physically well and economically secure, and have strong social bonds. In fact, discussions about the social responsibility of nations to promote secure, positive later-life experiences for their older citizens, both in the United States and abroad, have largely been based on assumptions about the role and availability of family, particularly in regard to long-term care. Families have long been the backbone of

Table 1 Number and percentage of Americans, 1950 to 2050

                       1950                  2000                  2050 (projections)
Age group       Number*     Percent   Number*     Percent   Number*     Percent
Total           152,272     100.0     282,171     100.0     439,010     100.0
0 to 19          51,673      33.9      80,576      28.6     112,940      25.7
20 to 64         88,202      57.9     166,522      59.0     237,523      54.1
65 and older     12,397       8.1      35,074      12.4      88,547      20.2

*Numbers in thousands.

Source: Adapted from L. B. Shrestha and E. J. Heisler, “The Changing Demographic Profile of the United States” (Table 3). Congressional Research Service Report for Congress, 2011. http://www.fas.org/sgp/crs/misc/RL32701.pdf




the United States’ long-term care system; however, declining mortality and fertility rates, especially in developed nations, mean that the proportion of the older population has grown, while the number of younger family members available to care for older relatives has decreased. In exploring the demographic changes that are occurring in America’s older population, a cautionary note must be raised about the limitations of future predictions. The characteristics of the older population are not fixed, either absolutely or relative to the norms of other age groups. The process of cohort succession suggests that older adults of the future will differ from those of the present. For example, the rise of advanced education in the United States suggests that the baby boomer generation will be more highly educated than previous older cohorts. Similarly, future members of U.S. families will differ from those of the present day in patterns of formation, composition, roles, and demands. Longer life spans, women’s increased labor-force participation, and the growing complexities of families’ lives may affect flows of assistance, resource sharing, and kinship obligations, both within and between generations. America’s older population is not a monolithic group; there is great diversity in seniors’ social, health, and economic well-being.

Gender, Race, and Ethnicity

The sex ratio of the U.S. population changes across age groups. The number of males consistently exceeds the number of females until the third decade of life; then, from age 30 on, women increasingly outnumber men. The gender disparity in life expectancy is evident in the sex ratio in later life. In 2010, among the 65 and older U.S. population, 43.1 percent were male and 56.9 percent were female; among the 85 and over population, the percentage female rose to 67.4 percent. The social world of the oldest old is largely female.
As of 2013, the 65-plus population is less racially and ethnically diverse than younger age groups in the United States; however, in coming years, immigration patterns will play an important role in increasing this diversity. Approximately 15 percent of baby boomers, who are quickly entering the ranks of the older population, are foreign born. In 2010, non-Hispanic whites composed 80 percent of the U.S. older population, blacks were 9 percent,


Hispanics of any race accounted for 7 percent, and Asians made up 3 percent of the older population. Demographic projections suggest that the older Hispanic population will grow the fastest of all older racial and ethnic groups in the United States, from under 3 million in 2010 to 17.5 million in 2050. Thus, by 2050, the composition of the older U.S. population will be 58 percent non-Hispanic white, 20 percent Hispanic of any race, 12 percent black, and 9 percent Asian.

Marital Status and Living Arrangements

Because women both have longer life expectancies and typically marry men who are older than themselves, there are significant gender differences in marital status and living arrangements in later life. In 2010, the majority of older men were married: 78 percent of those aged 65 to 74; 73 percent of those aged 75 to 84; and 58 percent of those aged 85 and older. In stark contrast, widowhood becomes normative for women as they grow older. Among women ages 65 to 74, 56 percent were married; yet among women ages 75 to 84, this figure dwindled to 38 percent; and for those 85 and older, only 18 percent were married. Although only relatively small percentages of the current cohort of older adults are divorced or never married, these figures are expected to rise in the next decades with the baby boomer generation’s entry into the ranks of the U.S. older population. This is because baby boomers are the generation that witnessed the rise of divorce, cohabitation, and the “nontraditional” family within the United States. They are also the cohort that is most likely to have experienced divorce, and the generation most likely to be currently divorced. Furthermore, approximately 14 percent of baby boomers have never married, a much higher percentage than previous generations. In fact, the proportion of midlife adults (those ages 45 to 54) who have never been married increased 300 percent between 1986 and 2009.
Ultimately—whether because of divorce, never marrying, or widowhood—one in every three baby boomers is approaching later life not married. This is highly significant because marital status has long been linked to economic resources, social integration, health, and mortality. The differing living arrangements of men and women primarily reflect the gender gap in spouse survivorship. Among noninstitutionalized adults 65



and older, approximately 71.9 percent of older men, as compared to only 43.4 percent of women, lived with their spouse in 2010. Although the percentage living with a spouse decreases with age for both genders, the decline is much greater for women. Among women aged 75 and older, slightly less than one-third (32 percent) lived with a spouse. Living alone in old age is more often a woman’s than a man’s experience. In 2010, only 19 percent of men 65 and older lived alone, compared to more than one-third (36 percent) of women 65 and older. Although the proportion of those living alone increases with age for both genders, the increase is much steeper for women. Almost half (46 percent) of women ages 75 and older lived alone in 2010. According to the AARP, the vast majority of midlife and older Americans—95 percent—consistently report that they wish to continue to live in their homes and communities, or “age in place,” as they grow older. Only a relatively small number of older Americans live in institutions such as nursing homes at any point in time. In 2011, 3.6 percent (or 1.5 million) of older adults in the United States resided in an institutional setting; yet, the risk of institutionalization increases with advanced age. In 2011, only 1 percent of the 65-to-74 age group and 3 percent of the 75-to-84 age group were living in an institutional setting; however, 11 percent of the 85 and over population were nursing home residents.

Economic Well-Being

Today’s older population, in the aggregate, is faring better economically than previous generations of older Americans. Once the age group with the highest poverty rate, the 65-plus population has experienced a significant decline in poverty over the past decades. In 1959, 35 percent—more than one out of every three senior citizens—lived below the federal poverty threshold, compared with 27 percent of children under 18 years of age, and 17 percent of working-age adults.
Since 1974, however, children have emerged as the age group most vulnerable to experiencing poverty. In 2010, the poverty rate for elderly persons was 9 percent, a rate that is much lower than the 22 percent for children, and lower than the 13.7 percent for working-age adults. Despite aggregate improvement in the economic status of the nation’s older population, over 3.6 million of America’s seniors fell below the official

federal poverty line in 2010, and an additional 2.4 million were classified as near poor. Furthermore, the risk of confronting poverty in later life significantly varies by gender, race, age, marital status, and living arrangement. For example, in 2010, blacks and Hispanics 65 and older were almost three times more likely to confront poverty in old age than non-Hispanic whites. For all racial and ethnic groups, the risk of poverty rose with advancing age. Similarly, across all racial and ethnic groups, women faced a greater risk of poverty in late life than men. Yet for both genders, poverty was a more likely outcome for those living alone in old age. Approximately 14.6 percent of men and 17.8 percent of women residing alone were living in poverty in 2010. The interactive effects of gender, race, and living arrangement/marital status are underscored by the fact that older Asian, black, and Hispanic women living alone were the most economically vulnerable. About three out of every 10 older Asian and black women and four out of every 10 older Hispanic women living alone fell below the official federal poverty line in 2010.

Median income also offers insights into the economic situations of older Americans. The median income for the 40.2 million persons 65 and older reporting income in 2011 was $19,939; however, an analysis by gender reveals that it was significantly higher for males ($27,707) than for females ($15,362). The four primary sources of income for older Americans are Social Security, assets, pensions, and employment. Social Security, however, is the most widely used retirement benefit, with 86 percent of persons age 65 and older, and 92 percent of persons age 80 and older, receiving the benefit. (Some individuals choose to delay the start date for receipt of Social Security in order to receive a larger annual benefit amount.) Furthermore, Social Security is the largest source of income for the vast majority of Americans during their retirement years.
For 67 percent of seniors, Social Security represents at least half of their retirement income, and for more than one-third, it comprises at least 90 percent of their income in old age. The median annual Social Security payment in 2010 was $15,701.

Health Status
In the United States, as in many other countries, older adults are not only living longer, but they are also living healthier and more independent lives. Older Americans generally have positive




perceptions of their health. In 2010, slightly more than three-quarters of adults 65 and over rated their health as good, very good, or excellent. Even among the oldest old (persons aged 85 and older), two-thirds reported their health as good or better. Non-Hispanic whites, however, consistently report more positive health assessments than African Americans and Hispanics in later life. In fact, research has documented tremendous variability in healthy aging, and has underscored how structural factors such as education, income, and wealth may affect both health behaviors and outcomes.

The evidence is overwhelming that the prevalence of disability increases with advanced old age, and that these problems often impede the ability of older adults to perform both basic activities of daily living (ADLs; i.e., dressing, bathing, and feeding) and instrumental activities of daily living (IADLs; i.e., housecleaning, cooking, and shopping). Sensory impairments—vision and hearing loss—also increase with age. While only about one-third of those ages 65 to 74 report hearing troubles, the majority (58.6 percent) of the oldest old (85 and older) identify difficulties hearing. Similarly, although only 12.2 percent of the young old cite trouble seeing, vision problems are reported by 22.5 percent of the oldest old. The reality is that the oldest old are seven times more likely to require assistance with personal care than are the young.

Finally, while research suggests that disability rates among the oldest old have declined since the 1980s, concerns are now voiced about disturbing health trends among midlife adults and the young old, particularly the increasing prevalence of obesity, diabetes, and depression. Obesity, for example, is viewed as a risk factor for a number of chronic health conditions such as hypertension, heart disease, high cholesterol, diabetes, arthritis, and some cancers.

Conclusion
Greater longevity is a global success story.
Indeed, global aging is one of the most important phenomena of the 20th and 21st centuries. It will continue to impact U.S. social institutions, including the family. Families are the primary structure within which care is provided to individuals from birth to death. Not surprisingly, the extraordinary demographic shift in the age structure in the United States and many other countries has captured public attention. To develop effective social policies, nations will


need to recognize and understand the diversity and changing nature of the older population, as well as the social forces impacting this age group. America’s older population is not only getting bigger, it is growing older and more diverse.

Judith G. Gonyea
Boston University

See Also: AARP; Assisted Living; Baby Boom Generation; Caring for the Elderly; Death and Dying; Elder Abuse; Estate Planning; Grandparenting; Nursing Homes; Sun City and Retirement Communities.

Further Readings
Beard, J. R., S. Biggs, D. E. Bloom, L. P. Fried, P. Hogan, A. Kalache, and S. J. Olshansky. “Global Population Ageing: Peril or Promise” (2011). World Economic Forum. http://www3.weforum.org/docs/WEF_GAC_GlobalPopulationAgeing_Report_2012.pdf (Accessed February 2014).
Brown, S. L. and I-F. Lin. “The Gray Divorce Revolution: Rising Divorce Among Middle-Aged and Older Adults, 1990–2010.” Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, v.67 (2012).
Federal Interagency Forum on Aging-Related Statistics. “Older Americans 2012: Key Indicators of Well-Being.” http://www.agingstats.gov/agingstatsdotnet/Main_Site/Data/2012_Documents/Docs/EntireChartbook.pdf (Accessed December 2013).
Flegal, K. L., B. I. Graubard, D. F. Williamson, and M. H. Gail. “Excess Deaths Associated With Underweight, Overweight, and Obesity.” Journal of the American Medical Association, v.20 (2005).
Gonyea, J. G. “Changing Family Demographics, Multigenerational Bonds, and Care of the Oldest Old.” Public Policy & Aging Report, v.23/2 (2013).
Gonyea, J. G. “The Oldest Old and a Long-Lived Society: Challenges for U.S. Public Policy.” In The New Politics of Old Age Policy, 2nd ed., R. B. Hudson, ed. Baltimore, MD: Johns Hopkins University Press, 2010.
Gonyea, J. G. “The Economic Well-Being of Older Americans and the Persistent Divide.” Public Policy & Aging Report, v.15/2 (2005).
He, W., M. Sengupta, V. A. Velkoff, and K. A. DeBarros.
“65+ in the United States: 2005.” http://www.census.gov/prod/2006pubs/p23-209.pdf (Accessed December 2013).
Howden, L. M. and J. A. Meyer. “Age and Sex Composition: 2010.” http://www.census.gov/prod/



cen2010/briefs/c2010br-03.pdf (Accessed December 2013).
Jacobsen, L. A., M. Kent, M. Lee, and M. Mather. “America’s Aging Population.” Population Reference Bureau. http://www.prb.org/pdf11/aging-in-america.pdf (Accessed December 2013).
Kinsella, K. and W. He. “An Aging World: 2008.” International Population Reports, P95/09-1. U.S. Census Bureau. http://www.census.gov/prod/2009pubs/p95-09-1.pdf (Accessed December 2013).
Kreider, R. M. and R. Ellis. “Number, Timing and Duration of Marriages and Divorces: 2009.” Current Population Reports. http://www.census.gov/prod/2011pubs/p70-125.pdf (Accessed December 2013).
Lin, I-F. and S. L. Brown. “Unmarried Boomers Confront Old Age: A National Portrait.” Gerontologist, v.52 (2012).
National Center for Health Statistics. “Early Release of Selected Estimates Based on Data From the 2007 National Health Interview Survey.” http://www.cdc.gov/nchs/data/nhis/earlyrelease/200709_12.pdf (Accessed December 2013).
Shrestha, L. B. and E. J. Heisler. “The Changing Demographic Profile of the United States.” Congressional Research Service Report for Congress (2011). http://www.fas.org/sgp/crs/misc/RL32701.pdf (Accessed December 2013).
Social Security Administration. “Income of the Population 55 or Older, 2010: Section 9. The Importance of Social Security Relative to Total Income” (2012). http://www.ssa.gov/policy/docs/statcomps/income_pop55/2010/sect09.html#table9.a (Accessed December 2013).
United Nations. “World Population Ageing” (2010). http://www.un.org/en/development/desa/population/publications/pdf/ageing/WorldPopulationAgeingReport2009.pdf (Accessed December 2013).

Demographic Changes: Cohabitation Rates

In 2012, the median age at first marriage in the United States reached a historic high point: one-half of all first marriages occurred to men over the age of 28 and to women over the age of 26.

Consequently, an increasing number of men and women are spending their early adult years unmarried, but they are not necessarily spending these years single. Indeed, the majority of young adults form one or more romantic relationships prior to getting married. Cohabitation, or living together, is one such type of romantic relationship. Scholars define cohabitation as a living arrangement in which two adults who are not married to one another, but who have a sexual relationship, share the same residence. It should be noted that the vast majority of scholarly research on cohabitation has focused on heterosexual couples.

Increasing rates of cohabitation have altered the way that young adults form romantic relationships, marry, and parent in the United States. Cohabitation affects nearly all demographic subgroups, including those of various socioeconomic and racial/ethnic backgrounds, and its increase suggests that young men and women have grown increasingly inclined to form romantic relationships outside the purview of government and religious institutions.

In the early 1970s, only 11 percent of marriages were preceded by cohabitation; this percentage grew to about half of those marrying between 1990 and 1994, and 66 percent between 2005 and 2009. Indeed, the percentage of women ages 19 to 44 who have ever lived with a partner has increased by 82 percent over the past 23 years. Furthermore, the average number of cohabiting partners that a woman has before her first marriage has risen: among cohabiting women, the proportion that had lived with more than one man prior to their first marriage rose from 14.5 percent among those born between 1958 and 1962 to 23 percent among those born a decade later. Combined, these statistics suggest that although the pace of cohabitation slowed in the 1990s, the incidence of cohabitation is still increasing, and cohabitation has become the most common pathway to marriage for young to middle-aged adults living in the United States.
Before the 1960s, cohabitation was common mainly among those living in or near poverty. With few financial resources and little prospect of leaving an inheritance to their children, the poor had few incentives to marry. Today, cohabitation rates vary with income and education, but to a lesser extent than in decades past. Forty-five percent of 19- to 44-year-olds who are college graduates have cohabited, compared to 64 percent of those who have not



earned a high school degree. Furthermore, cohabiting couples tend to have lower average incomes than married couples.

Rates of cohabitation have increased for all racial/ethnic groups over the past 23 years. White and Hispanic women experienced a greater increase (94 percent and 97 percent, respectively) compared to black women (67 percent). In the 1980s, a larger proportion of black women than white or Hispanic women had lived with a partner. However, in 2009 and 2010, the proportion of white women who had ever cohabited (62 percent) exceeded that of black women (60 percent), though this racial/ethnic gap is relatively small and narrowing. Overall, few racial/ethnic differences exist in terms of who is more likely to cohabit. Blacks, non-Hispanic whites, and foreign-born and native-born Hispanics tend to have similar odds of forming cohabiting relationships in the United States.

The average length of a cohabitation is two years; after that, the relationship tends either to dissolve or to end in marriage. However, the time that men and women spend cohabiting has grown over the past several decades. In the early 1990s, about two-thirds of couples ended their live-in relationship or got married within the first two years; between 1997 and 2001, only 56 percent did. During this time, nearly 70 percent of cohabiting couples continued to live together for at least one year after the start of their relationship, one-third of couples for at least three years, and only one-fifth for at least four years. Over time, cohabitations have become less stable and less tied to marriage, with cohabiting relationships becoming more likely to end in dissolution. In the late 2000s, 40 percent of those living together married within three years, 32 percent remained cohabiting, and 27 percent ended their relationship.
Cohabitation has also become a common context for childbearing and parenting, and the increasing instability of cohabiting relationships has important implications for children. Approximately 40 percent of cohabiting relationships include children; about half of these children are born to cohabiting couples, whereas the rest are the existing offspring of one of the cohabiting partners. Births to unmarried mothers have doubled in the past three decades, to 40 percent of all births, and over one-half of children born out of wedlock are born to cohabiting mothers. The proportion of children born to cohabiting parents increased from 11 percent of all births in 1994 to 18 percent by 2001. It is estimated that



almost half of the children living in the United States will spend some time in a cohabiting family.

Jessica A. Cohen
St. Mary’s University

See Also: Cohabitation; Courtship; Demographic Changes: Age at First Marriage; Demographic Changes: Divorce Rates; Stepfamilies.

Further Readings
Copen, Casey E., et al. “First Premarital Cohabitation in the United States: National Survey of Family Growth, 2006–2010.” National Health Statistics Reports, no. 64. http://www.cdc.gov/nchs/data/nhsr/nhsr064.pdf (Accessed December 2013).
Kennedy, Sheela and Larry Bumpass. “Cohabitation and Children’s Living Arrangements: New Estimates From the United States.” Demographic Research, v.19/47 (2008).
Manning, Wendy D. and Jessica A. Cohen. “Premarital Cohabitation and Marital Dissolution: An Examination of Recent Marriages.” Journal of Marriage and Family, v.74/2 (2012).
National Center for Family & Marriage Research. “Trends in Cohabitation: Twenty Years of Change, 1987–2010.” http://ncfmr.bgsu.edu/pdf/family_profiles/file130944.pdf (Accessed December 2013).

Demographic Changes: Divorce Rates

In American history, trends in divorce are generally categorized as occurring either before or after the passage of no-fault divorce legislation. No-fault divorce represents the dissolution of a marriage in which blame is not needed for legal action. Prior to no-fault divorce, a fault-based system was the only option available to parties seeking a divorce, wherein one party was blamed for the dissolution of the marriage. The introduction of no-fault divorce coincided with changing societal expectations and values, and changes in the social meaning of divorce.

In the early years of the United States, divorce was extremely uncommon, and was considered abnormal. In many jurisdictions, divorces were not



allowed, in accordance with British law. In jurisdictions where divorce was allowed, very limited fault-based grounds were required, and penalties for the guilty party could be severe. Certain acts, such as adultery, cruelty, and abandonment, were common grounds for divorce; however, drunkenness, insanity, and incarceration were other prominent grounds that emerged in some jurisdictions at various points. The actual process of divorce in these early years was tedious and cumbersome, in many places requiring a legislative act to grant a divorce. The process was difficult for other reasons as well: it was costly and carried an extremely negative social stigma, especially for women. Despite this, divorces were usually initiated by female petitioners, and this pattern has continued since colonial times.

Throughout the 19th century and into the 20th, divorce laws gradually became more lenient and less punitive. The responsibility for granting divorces shifted from the state legislatures to the courts, and new grounds for divorce slowly emerged. In some states, emotional and psychological harm became an acceptable ground, and omnibus clauses that allowed for judicial discretion were added in other states; both of these helped to promote a major shift in legal recognition of “intangible reasons” for divorce. These legal changes were reflected in changes in public opinion toward marriage and divorce over time.

The fault-based system for divorce presented a substantial problem by the mid-1960s. Married couples who both wanted a divorce, but were unable to establish grounds to meet the fault-based legal language, were forced to either stay in the marriage or perjure themselves to escape. As states began to adopt less stringent standards for divorce, migratory divorces (those in which one or both parties temporarily relocated to another state to obtain a divorce) increased.
The restrictive conditions surrounding divorce until the 1970s in most states changed through a series of legislative acts that completely altered the process of divorce in the United States. Although Oklahoma was the first state to introduce no-fault divorce in 1953, California was the first state to legislate exclusive use of no-fault divorce when it passed the California Family Law Act of 1969. Implementation of only no-fault grounds by California began a momentous shift in

policy across the United States. This growing shift toward no-fault divorce brought about the standard of irreconcilable differences, allowing married couples to seek divorce on the premise that their marriage was broken and could not be repaired. Although California was the first to pass an exclusive no-fault divorce statute, making irreconcilable differences the only grounds for divorce, many other states were slow to fully conform. For example, New York had no official no-fault divorce provision until 2010. The introduction of no-fault divorce reflected the changing values and expectations of Americans.

A Changing Society
Since the 1970s, the United States has had a higher tolerance for and propensity toward divorce as a viable option for married couples. However, even prior to no-fault divorce, Americans already had higher divorce rates than other countries. The freedoms that Americans have enjoyed since the country’s founding, and the religious influences of Protestant Christianity, set the stage for future momentous changes in divorce, both legally and socially. Starting from an ideological foundation that tolerated divorce in some circumstances, attitudes surrounding divorce changed over time to reflect a more individualized notion of personal happiness and individual gain over an adherence to strictly traditional values about family life.

Divorce has become normative and more widely accepted, affecting about half of first marriages among recent birth cohorts. With greater ease, less burden, and more social support, people today have more flexibility in their decisions about divorce. Long-term commitment to marriage because of children is no longer necessary, and the predominant viewpoint is that children are better off living in divorced families than living with parents in an unhappy marriage. Although divorce is more widely accepted, it is not actively encouraged, and the predominant ideal still emphasizes a long-term commitment to marriage.
This is highlighted by the fact that some states still use fault-based reasons in divorce actions, such as the division of marital assets. Grounds used to force divorce actions in the past are also now used in child custody determinations. Regardless of possible repercussions, both legally and socially, the number of divorces increased following the introduction of major policy changes (i.e., the introduction of no-fault divorce, revising of statutes to



facilitate divorce or make it more accessible); however, since the 1980s, divorce rates have gradually decreased.

Understanding Divorce Through Marriage
Because divorce can only follow marriage, changes in marriage rates affect changes in divorce rates. Even as divorce rates drastically increased in the 1970s, marriage rates in the United States remained remarkably stable, at 10.6 per 1,000 between 1970 and 1980. The marriage rate began to decrease shortly thereafter. Since the 1950s, the age of men and women at first marriage has increased, which has also resulted in many of those couples having children later. A more recent increase in cohabitation and unwed pregnancies reflects the change in attitude toward marriage. Today, marriage is not perceived as a necessity, and many people explore alternatives to marriage, or experience one or more divorces. Societal changes in rates of and beliefs about marriage partly explain changes in divorce rates over time. Specifically, decreases in marriage rates are linked with decreases in divorce rates in general, although the proportion of marriages ending in divorce remains fairly stable among first marriages.

Who Divorces and Why?
As demonstrated by the spike in divorces in the 1970s following the introduction of no-fault divorce, divorce rates are influenced by the historical context in which they occur. As new laws, social movements, and economic factors emerge, shifts in the numbers of divorces also occur. For example, during World War II, divorce rates sharply rose, and then subsided following the war. These contexts can operate in unison or independently, and can often obscure the reasons for rising or falling divorce rates. The context in which divorce occurs is influenced by public attitudes and changing beliefs about marriage, as well as factors associated with the likelihood of divorce. Family history of divorce increases the odds of a person experiencing divorce.
Children whose parents divorce may lack models of effective communication and problem-solving skills that sustain a marriage, and they grow up to see divorce as a more normative process. Race also is linked with divorce; African Americans are more likely to divorce than other racial groups. Other structural factors, such as age at marriage, income, education, religious beliefs, length of marriage, abuse,



infidelity, and prior divorce affect the likelihood of experiencing a divorce.

Children can also play a role in the likelihood that couples will seek divorce. Having children together in a first marriage decreases the likelihood of divorce. However, having children prior to marriage increases the likelihood of divorce, even if the marriage is to the child’s other biological parent. Having children with more than one person (multiple-partner fertility) can also increase the likelihood of divorce, whereas having multiple children with the same partner decreases the likelihood. Divorce influences children, and factors into a couple’s decision to stay married or separate. Children of divorce have the potential to experience higher levels of stress, depression, increased behavioral problems, and greater financial family strain.

Emotional aspects of the marriage can also have a substantial influence on the chances of divorce. The type and amount of conflict or negative communication can adversely affect the chance that a marriage will last over time. The amount of positive, supportive communication and interaction can have the reverse effect, helping to maintain marriages over time. The perceptions of trust, love, and marital happiness that exist within marriages can also have an effect on the sustainability of a couple’s relationship. Even though divorce has become a much more socially acceptable and normative process, the emotional consequences of divorce still exist. Individuals experiencing a divorce generally have higher anxiety, stress, and depression levels, higher rates of substance abuse, and poorer mental health.

Anthony J. Ferraro
Florida State University

See Also: Child Custody; Demographic Changes: Age at First Marriage; Demographic Changes: Cohabitation Rates; Divorce and Religion; Divorce and Separation; No-Fault Divorce; Social History of American Families: 1941 to 1960; Social History of American Families: 2001 to the Present.

Further Readings
Amato, P. R. and S. Irving. “Historical Trends in Divorce in the United States.” In Handbook of Divorce and Relationship Dissolution, M. A. Fine and J. H. Harvey, eds. Mahwah, NJ: Lawrence Erlbaum, 2006.



Cherlin, A. J. Marriage, Divorce, Remarriage, rev. ed. Cambridge, MA: Harvard University Press, 1992.
Vlosky, D. A. and P. A. Monroe. “The Effective Dates of No-Fault Divorce Laws in the 50 States.” Family Relations, v.51 (2002).

Demographic Changes: Zero Population Growth/Birthrates

Changes in population size can affect a society’s ability to meet its needs, provide resources to its members, and remain economically or politically stable. This is true both for excessive population growth, where the size of the population increases to the point of resource scarcity, and for population decline, where the society is no longer able to fill the positions necessary for it to function. Zero population growth occurs in a society when the number of new members (through birth or immigration) equals or nearly equals the loss of current members (through death or emigration). This demographic trend can be intentional, where it is supported by public policy or law, or it can happen as a transition between growth and decline. This balance between population growth and decline is an important component of social stability.

A society must maintain replacement levels in the total fertility rate (TFR) to have a stable population size; a higher TFR usually leads to an increasing population, and a lower TFR usually leads to a decreasing population. However, the replacement-level TFR differs for each country, depending on levels of mortality and of economic and social development. For example, low infant mortality and increased life expectancy establish the replacement-level TFR for the United States and most of the Western world at 2.1, meaning that the average woman would need to have 2.1 children to replace herself and her partner. The additional 0.1 above true replacement (i.e., “2 people make 2 people”) accounts for those who die before having children or are unable or choose not to reproduce. On the opposite end of the spectrum, societies with elevated infant mortality, shorter life expectancy, and greater death rates need a higher TFR to replace their members, accounting for

a greater number of people who will not live long enough to reproduce.

Other than the post–World War II baby boom, during which the TFR in the United States briefly rose to 3.8, the TFR consistently declined across the 19th and 20th centuries in the United States and much of western Europe. These declines were particularly apparent during periods of economic decline, such as the Great Depression. The United States currently has a TFR of about 1.9, which is slightly beneath replacement level. By comparison, however, the United States has a higher TFR than most European countries (with an average TFR of 1.6); only Kosovo, France, and the United Kingdom (at 2.0 TFR each) and Ireland (at 2.1) have higher rates in Europe.

Though the United States is near replacement-level TFR, it is variation within the population that maintains this balance, not a general pattern of overall lower fertility. As an example of this balance, first-generation American families and members of conservative religions are more likely to have family sizes above replacement levels, whereas individuals with higher levels of education and income are more likely to have one child or no children. In addition, female immigrants from religiously and socially traditional countries, such as those in Latin America or the Middle East, are encouraged to have large families. Given this delicate balance, it is only through the influx of immigrants that the United States is able to maintain steady population growth.

Demographic Transitions
The study of transitions in demographic change has been a core concern of the social sciences, beginning with British scholar Thomas Malthus in the 18th century. More recent scholarship has built on the theoretical stage model of demographic transition proposed by American demographer Warren Thompson in the 1960s. As shown in Figure 1, the stages of the demographic transition highlight population growth or decline based on changing birth and death rates.
Stage 1 illustrates a model of preindustrial society, where high death rates are a product of pervasive infant mortality, famine, disease, and lack of medical advancements. Birth rates are equally high, out of necessity to maintain a stable population size in compensation for high death rates. The transition to Stage 2 occurs with industrialization, as increased food production and access to medical





care cause death rates to sharply decline. However, because societies in Stage 2 continue to reproduce at the high rates that were previously necessary, the population explodes. It is not until Stage 3 of the demographic transition that birth rates begin to fall to match the lower levels of death rates. Increased education (particularly for women), access to contraception, and age at first marriage, as well as decreased social pressure for large families, lead to the decline in birth rates characteristic of Stage 3 and later stages. Yet because of the population swell in Stage 2, total population continues to increase through Stages 3 and 4. During the end of Stage 4, birth and death rates are nearly equal, leading to a period of zero population growth. The current combination of increased life expectancy and decreased family size places the United States in Stage 4 of the demographic transition.

Though Thompson’s original model included only four stages, later scholars have proposed a Stage 5 to account for the transition from zero population growth to the subreplacement-level birth rates hypothesized to occur in several countries, such as Germany and Japan. Because of its strong immigration patterns, it is unlikely that the United States will enter the proposed Stage 5 of the demographic transition and face population decline. Only with birth rates dropping to equal death rates can overall population begin to plateau, and, if birth rates fall well below death rates, decline. Critics of this theory argue that it does not apply to all countries today or to societies across history; the demographic transition theory is a descriptive, not predictive, model, designed as a generalization of the effects of population transitions.

Figure 1: Demographic transition model, based on the theoretical stage model of demographic transition proposed by American demographer Warren Thompson in the 1960s.

Consequences of Extreme Population Growth or Decline
Whether a society's population is in a period of growth, stability, or decline shapes the future of that society. A growing population can lead to increased economic production as the supply of goods and services develops to meet the greater demand. As economic opportunities rise, so does the standard of living in that society, with greater access to education, health care, and nutrition. This cycle of increased growth, however, is not permanently sustainable, and unchecked population growth has severe consequences. Each society exists within an environment that has limits to how much it can produce, a concept known as carrying capacity. Eventually, societies with explosive growth exceed their carrying capacity in a phenomenon called overshoot, in which resources for goods and services become scarce, access decreases, and the society struggles to maintain stability. The resulting population decline, or dieback, shrinks the population through famine, disease, or outward migration, bringing the society back within its carrying capacity.

Though unrestrained population growth has dire repercussions, excessive population decline can be equally devastating. As a society begins to lose members through death or outward migration and is unable to replace them at current birth rates, it reaches a tipping point after which it can no longer perform the functions it once did. Subreplacement fertility leads to a future in which new workers are not available to replace retirees and support pensioners, an aging population places a greater burden on an understaffed medical system, military recruitment shortages weaken defenses, shrinking agricultural output decreases food supply, and decreased demand for goods and services leads to economic collapse.

It is not surprising that societies facing this grim forecast may consider incentives for couples to have more children, provide increased resources for working parents, or offer educational


and employment opportunities to entice young people to stay rather than emigrating elsewhere.

Mari Plikuhn
Tyler Plogher
University of Evansville

See Also: Childless Couples; Family Planning; Fertility; Infertility.

Further Readings
Davis, Kingsley, Mikhail S. Bernshtam, and Rita Ricardo-Campbell, eds. Below-Replacement Fertility in Industrial Societies: Causes, Consequences, Policies. New York: Cambridge University Press, 1987.
Kirk, Dudley. "Demographic Transition Theory." Population Studies, v.50/3 (1996).
Malthus, Thomas R. An Essay on the Principle of Population [1798]. Amherst, NY: Prometheus Books, 1998.
Morgan, S. Philip. "Is Low Fertility a Twenty-First-Century Demographic Crisis?" Demography, v.40/4 (2003).
Thompson, Warren S. "The Development of Modern Population Theory." American Journal of Economics and Sociology, v.23/4 (1964).
U.S. Central Intelligence Agency. "Country Comparison: Total Fertility Rate." World Fact Book. https://www.cia.gov/library/publications/the-world-factbook/rankorder/2127rank.html (Accessed August 2013).

Department Stores
Department stores have been a fixture in American cities since the 1800s. By offering a wide range of products and services, they helped create a culture of mass consumption. As the nation industrialized, items once routinely homemade, such as clothing, became mass-produced, and department stores were a crucial venue through which such goods became available, thanks to new mass transportation systems that brought products into the stores and shoppers downtown. Typically family-owned enterprises, downtown department stores were created to be elegant destinations that catered not only to people's needs but also to their desires. Store displays demonstrated what was fashionable

and tasteful, providing Americans a picture of an attainable life based on newly emerging commercialism. Department stores attracted a mostly female, middle- and upper-class clientele to shop, dine, and attend events. Thus, such stores became an acceptable social space for women outside the home, helping give them a new role as their family's main consumer, the one who made many of the family's main purchasing decisions. Department stores also launched fashion designers' careers and gave a predominantly female workforce respectable positions as saleswomen and managers.

Many department stores began as small specialized enterprises (like dry goods shops or clothing stores) and expanded. The first actual department store in the United States, A. T. Stewart's Marble Palace, opened in 1846 in New York City. While pioneering the principle of a multi-story, architecturally ornate, one-stop shopping structure, Stewart's helped mainstream new merchandising practices (such as buying in bulk, allowing returns, and offering fixed prices). Especially from the 1880s on, department stores began springing up in American downtowns, generally taking on their founders' names. Most major cities had a defining flagship department store, such as Macy's in New York, Wanamaker's in Philadelphia, Filene's in Boston, and Marshall Field's in Chicago. In many urban areas, multiple department stores competed for business, often differentiating themselves by focusing on separate classes of clientele and constantly trying to surpass each other with additions and remodels.

Still, most early department stores offered the same general range of items and services. The buildings were typically impressive, with grand facades and large windows with elaborate displays to attract passersby with some of the goods for sale inside. Gleaming glass cases and mannequins showed off clothing and other items.
Grand staircases, chandeliers, and many glass, brass, and mirrored surfaces provided an elegant atmosphere inside. Revolving doors, passenger elevators, and escalators helped customers gain easy access to departments on multiple floors. Goods were showcased in separate sections, each with dedicated sales staff. Then and now, the most prominent departments were those related to fashion (including gowns, wedding dresses, and furs) and housewares (from fine china to linens). Toy departments



attracted shoppers' children, while areas devoted to sporting goods, appliances, and hardware drew male consumers. Other traditional departments included furniture, books, fabrics, groceries, and a bargain basement where sale items could be found. Department stores generally offered free delivery, with their carriages or vans serving as advertisements, as did stores' branded shopping bags and catalogs. A store's mail-order business allowed those in rural areas or those who were homebound to shop via catalog and place orders over the phone or through the mail. Other services might include fur storage, alterations, repairs, travel agencies, beauty salons, photography studios, bridal registries, children's nurseries, tearooms or restaurants, and parking garages.

Stores also hosted a variety of events to draw crowds, such as fashion shows, art exhibits, how-to classes, and concerts. Events were particularly important during holiday seasons, serving as promotional and sales opportunities; in fact, department stores played a key role in commercializing major holidays, especially Christmas, in the first half of the 20th century. They created new holiday traditions as families flocked to experience a department store's themed window displays, its holiday parades, and its tree-lighting ceremonies, and to visit Santa.

In midcentury America, however, the public increasingly fled cities for life in the fast-growing suburbs. Many urban department stores opened multiple suburban branches, even as business at their flagship locations downtown declined. These suburban buildings were often sprawling, low-rise designs with large parking lots, opened along major roads in conjunction with indoor shopping malls. In many malls, several department stores were designed to be the "anchors" of the shopping complex, attracting visitors and forcing shoppers to pass by numerous smaller stores on the route between the department stores.

In recent decades, major competition to the traditional department store has emerged, including discount department stores, category-killer big-box stores, outlet stores, and warehouse stores. Many shopping malls built from the 1960s through the 1980s have seen their major department stores change hands or go out of business altogether. Newer outdoor shopping facilities known as power centers, a type of expanded strip mall anchored by big-box or discount stores, are becoming more popular. By 2006, department stores' market share of the


retail category had declined for 15 straight years. Many dropped departments that had once helped define them (like toys and appliances) and turned to Internet sales. A few purchased discount chains, whereas others created their own. Some opened non-mall department stores, especially in power centers. For many chains in this era, expansion became not a matter of opening new stores, but of buying (or merging with) independent stores and other chains, particularly in separate regions. Chains' underperforming stores quickly closed, especially in downtowns. A number of national and regional chains liquidated, often following bankruptcies, especially during the Great Recession of 2007 to 2010. Scores of historic, family-owned businesses whose names were once synonymous with the cities in which they were founded disappeared from the retail landscape.

Still, many department stores survive, mostly as publicly traded parts of retail conglomerates. The top American department store chains are Macy's, Sears, and Kohl's. The oldest operational department store chains in the United States are Lord & Taylor, founded in 1826; Macy's, founded in 1858; and Bloomingdale's, founded in 1861. Historic urban department stores are recognized for their architectural, economic, and sociocultural significance. Many boast landmark designations, and some have undergone extensive restorations. Former department store buildings have been converted into other retail formats, offices, schools, libraries, hotels, and even museums. Although department stores may no longer hold the cultural cachet, economic pull, or consumer appeal they once did, they remain an important part of American life.

Kelli Shapiro
Texas State University

See Also: Leisure Time; Shopping Centers and Malls.

Further Readings
Grippo, Robert M. Macy's. New Hyde Park, NY: Square One Publishers, 2008.
Leach, William. Land of Desire: Merchants, Power, and the Rise of a New American Culture. New York: Pantheon Books, 1993.
Longstreth, Richard. The American Department Store Transformed, 1920–1960. New Haven, CT: Yale University Press, 2010.


Desegregation in the Military
World War II broke down many racial barriers, directly leading to the desegregation of all military branches and laying essential groundwork for the civil rights movement and the eradication of Jim Crow laws in the southern United States. When the war started, many African American families were suffering economically. African Americans had been disproportionately affected by the Great Depression, and many families had been plunged into abject poverty. While African American males served in World War II and drew steady wages, the war economy brought about significant improvements for their families. Because of entrenched racism in the south, more than 5 million African Americans moved northward and westward between 1940 and 1970. During that period, desegregation spread beyond the military to American society, resulting in the emergence of a black middle class for the first time in American history.

In the south, by the mid-1870s, the failure of Reconstruction, a devastated southern economy, and seething racial tensions had led to the rise of Jim Crow laws designed to prevent African Americans from exercising their constitutional rights. These laws mandated segregation of all aspects of life. African Americans lived in segregated housing and were relegated to lower levels of employment. Everything from water fountains to waiting rooms to bus seats was segregated by race. Attempts to change the status quo were stymied by denial of voting rights through measures such as poll taxes, literacy tests, and grandfather clauses. In 1896, the Supreme Court upheld this separate-but-equal doctrine in Plessy v. Ferguson.

In June 1941, months before war was declared that December, President Franklin Roosevelt issued Executive Order 8802, declaring that race, color, and national origin were not to be considered barriers to participating in the defense of the United States.
While serving in the military, African Americans were segregated by race, living in separate quarters, dining in separate areas in mess halls, and being denied entrance to entertainment facilities. Most African Americans lived in the south, and military families continued to be subject to segregation. Families that lived near military bases and defense facilities fared better than their

counterparts because the war had opened up new avenues of employment, and higher incomes meant that families could live in better neighborhoods. In 1941, only 5.9 percent of the American military was African American. Throughout the war, more than three-fourths of all African Americans in the military worked in the quartermaster, engineer, or transportation corps.

After joining in the fight to protect the democratic rights of other peoples, African Americans returned home determined to break down the barriers that segregated them from American society. In the south, that attitude was often seen as "uppity," and violence frequently broke out. For instance, in South Carolina in February 1946, police officers attacked Isaac Woodard, a returning veteran, beating him so badly that he became blind. That same year in Walton County, Georgia, two African American veterans and their wives were attacked and killed by a mob of angry whites. President Harry Truman was outraged by such incidents and the continued failure of Congress to pass civil rights legislation. He established the President's Committee on Civil Rights.

Truman Executive Order 9981
The war had officially ended with the surrender of Japan on September 2, 1945. That same month, Secretary of War Robert P. Patterson created the Gillem Board to examine military policies concerning the treatment of African Americans. The board recommended immediate elimination of any preferential treatment based on race but failed to recommend integration of the military. African American activists had also been motivated by World War II and the failure to end military segregation. In 1947, A. Philip Randolph and Grant Reynolds established the Committee Against Jim Crow in Military Service and Training. The League for Non-Violent Civil Disobedience Against Military Segregation was also set up to focus national attention on the issue.
In July 1948, the Democrats added planks on the desegregation of the military and civil rights to their party platform. On July 26, 1948, President Harry Truman issued Executive Order 9981, mandating "equality of treatment and opportunity for all persons in the armed services without regard to race, color, or national origin." Truman also established the President's Committee on Equality of Treatment and Opportunity in the Armed Services, chaired by Charles Fahy, naming two African Americans to the committee. At that time, there was



only one African American among 8,200 officers in the Marines and one among 45,000 in the Navy.

Desegregation of the military did not go smoothly. The Army continued to insist that the number of African Americans in its ranks should be capped at 10 percent, the proportion of blacks in the total population. It was not until 1950 that desegregation plans for all branches were approved. On May 22, 1950, the Fahy Committee issued its final report, Freedom to Serve. During the Korean War, integration of units occurred naturally in response to casualties and the movement of troops.

Civil Rights and Middle-Class Status
The process of desegregating the military in the late 1940s laid essential groundwork for promoting activism among African Americans and for focusing national attention on the issue of civil rights and the paradox of fighting for democracy while denying basic political and social rights to a large segment of the population. What became known as the Negro Revolution was designed to break down barriers to exercising democratic rights that included

President Franklin D. Roosevelt, shown signing the declaration of war against Japan, December 8, 1941. Roosevelt had signed Executive Order 8802 on June 25, 1941, preventing discrimination based on race by government contractors.


the right to pursue opportunities. A mass movement was organized that formed alliances with religious groups, both the Democratic and Republican parties, and labor unions. Support for civil rights at the national level was provided by Democratic presidents including Franklin Roosevelt, Harry Truman, John Kennedy, and Lyndon Johnson.

After World War II, the G.I. Bill was introduced to reward veterans for military service by providing employment, educational, and housing opportunities. However, access to those benefits was often made difficult for southern veterans by continued racism. Veterans who had learned new skills in the military were often steered toward menial jobs. If they refused to take the jobs offered, they were denied unemployment benefits guaranteed under the G.I. Bill.

In 1954, the Supreme Court handed down its decision in Brown v. Board of Education. That decision ultimately affected most aspects of life in the United States. The desegregation of schools took far longer than the desegregation of the military, but the underlying belief that separate could never be equal resulted in the passage of the Civil Rights Act of 1964, the Voting Rights Act of 1965, and a score of other laws and civil rights court cases. Jim Crow laws were abolished, and African American families were finally able to take advantage of educational and employment opportunities that allowed them to rise to middle-class status.

By 1966, approximately half of all African American families were considered middle class. Within the other half, however, many families faltered, hovering on the brink of breakdown. Much of this population was concentrated in urban areas from which whites had fled for life in the suburbs. Educational levels are highly linked to social status. In 1965, more than a decade after the Brown decision, 56 percent of all African Americans taking the Armed Forces Qualification Test were failing it.
At the same time, the unemployment rate for African Americans was 29 percent. In 1968, Congress passed the Fair Housing Act, ending discrimination in housing. Integration of schools progressed, and affirmative action programs helped to level out differences in employment. Nevertheless, African American families are still more likely to be poor than any other family group.

Elizabeth Rholetter Purdy
Independent Scholar


See Also: African American Families; Brown v. Board of Education; Civil Rights Act (1964); Civil Rights Movement; Middle-Class Families; Military Families.

Further Readings
Daniels, Maurice C. Saving the Soul of Georgia: Donald L. Hollowell and the Struggle for Civil Rights. Athens: University of Georgia Press, 2013.
Mershon, Sherie, and Steven L. Schlossman. Foxholes and Color Lines: Desegregating the U.S. Armed Forces. Baltimore, MD: Johns Hopkins University Press, 1998.
Moskos, Charles C., Jr. "Racial Integration in the Armed Forces." American Journal of Sociology, v.72/2 (September 1966).
Moynihan Report—The Negro Family: The Case for National Action. Washington, DC: Government Printing Office, 1965.

Digital Divide
The United States has become a knowledge-based society, where information is transmitted through digital means. Many initially hoped that this new era would erase social inequalities by leveling the playing field through access to computer services and training. However, availability of information does not ensure that all people have equal access to that information. Because of this inequality of access, the social gap has widened.

Defining the digital divide is difficult. It has been described as the gap between people who have the skills and abilities to use technology and those who do not; the gap between areas where technology is widely available and where it is not; the gap between people who are educated and those who are not; and the gap between people who have access to information and those who do not. Because information is crucial to success in the 21st century, lack of access to information is viewed as a social problem. Socioeconomic status (SES) has had the most extensive impact on the digital divide in terms of both access and use. Higher-income households tend to use the Internet more frequently than lower-income households for such everyday activities as finding health information and researching products.

Access and Use
Consistently defining the digital divide and interpreting its impact is a challenge. Factors that affect the digital divide include income, gender, education, perception, and the willingness to take the initiative. These issues are often highly correlated and difficult to examine individually. Digital technologies are more variable than ever before; access once meant having a home computer, but it now means having mobile technologies on a smartphone, tablet computer, or other device. More minorities and lower-SES individuals than other groups claim that their smartphone is their only point of access to the Internet. However, even with this access, use of Internet resources remains an issue.

The digital divide in the United States is highly correlated with socioeconomic status. Low-SES groups are obtaining more access to mobile devices than in the past, but their uses have been limited to entertainment rather than information or education. Some lower-SES groups use social networking sites at a higher rate than others. Middle-class families have more access to home computers and mobile devices, and they tend to use computers for information gathering. In addition, middle- and upper-class children have access to technology at an earlier age, and therefore tend to be more adept at using various technologies. The majority of Internet users are between the ages of 18 and 64, with a decrease in use in the 55-to-64 age range. Although there is an age difference in use of computers and access to online information, the major factors impacting the digital divide remain SES and education.

Other concerns impacting the digital divide include how skills are developed to access technologies, and how the technologies are used. Most children are interested in using technology. However, access and use are two elements of the divide that transcend economics, gender, age, and education. A majority of Americans own and use a smartphone today.
However, owning such a device and having the skills to take advantage of its capabilities are two different things. In addition, these devices do not substitute for computer access with a high-speed connection. With only the wireless access that these devices provide, it is difficult to apply online for jobs or to pursue a college degree online.



Education's impact on the digital divide can be seen in the direct correlation between Internet use and level of education. For example, there may be a difference between those raised in a working-class neighborhood and those from a middle-class neighborhood. The usefulness of technology may be viewed differently depending on one's family background and early education. Skills, knowledge, competence, and abilities are culturally related, and hold implications for parents, teachers, and administrators. Computer use in schools is related to effective computer use in the home.

The digital divide has two aspects: access and computer use. Access is most frequently impeded by poverty, and use is most frequently impeded by lack of knowledge. Studies have found that children who use the Internet have higher GPAs and score better on reading tests than those who do not use computers. Furthermore, the earlier children are exposed to computers, the greater technology's influence on them. Therefore, early Internet use may be correlated with positive academic outcomes.

Technology use in the classroom has generated a great deal of discussion in recent years. Schools and districts that are able to take advantage of federal or state funding, and that are committed to providing tools to teachers and children, are increasingly finding ways to provide access to technologies at lower costs than before. Low-SES areas proactively seeking free tools and technologies are still at a disadvantage because of inadequate infrastructure that remains cost-prohibitive to update, such as providing a school with wireless or broadband access. Consequently, even if students have devices, schools may not have the means for students to use those devices in the classroom.
The Pew Charitable Trusts has found that teachers who have several students without access to technology do not assign projects that include computer use, thereby restricting learning for all and perpetuating the divide. Technology use has created a learning divide, in which the disadvantaged do not have access to important information because they do not know how to access it. Perception, initiative, and available options are key components of this issue. Availability and quality education are key to shrinking that divide. This education will require equipment, reliable Internet access, and comprehensive, progressive training.


Perception and Initiative
The context in which people live and grow influences their way of thinking, which in turn affects the power that people perceive they have in making decisions. This has major implications for their lifestyle and pursuit of knowledge. Living in a certain family and being educated in a certain school make up the structure by which individuals form the beliefs, knowledge, and perceptions that are characteristic of the digital divide. Those without access tend to shrink from technology use because they are intimidated by it or have problems asking for help; often, they do not even know what to ask. Perception affects one's ability to see what is available. Insecurities distort a sense of power: people view teachers, doctors, and other professionals as more important than themselves, and therefore hesitate to take up these individuals' time by asking for help or even explanations. They fail to take ownership of their children's health care or education because they do not feel that they should. Thus, the divide widens and solutions grow more distant.

Looking at the digital divide as merely a problem of economics is an oversimplification. It also holds social, cultural, and political implications. These issues are difficult to address but need to be considered as the country moves forward to provide access and training for the majority of Americans. Educators need to ensure that groups are provided not only the hardware and software, but also the training, skills, and support they need to connect to technology that works for them. The have-nots are on the wrong side of the divide, and the barriers presented by low SES make crossing the divide very difficult.

Suzanne K. Becking
Fort Hays State University

See Also: Education, Elementary; Education, High School; Education, Middle School; Personal Computers; Personal Computers in the Home; Technology.

Further Readings
Attewell, P.
“Beyond the Digital Divide.” In Disadvantaged Teens and Computer Technologies, P. Attewell and N. Seel, eds. New York: Waxmann Publishers, 2003.
Crawford, Susan P. “The New Digital Divide.” New York Times (December 3, 2011). http://www.nytimes.com/2011/12/04/opinion/sunday/internet-access-and-the-new-divide.html?pagewanted=all&_r=0 (Accessed December 2013).
Ragnedda, Massimo, and Glenn W. Muschert, eds. The Digital Divide: The Internet and Social Inequality in International Perspective. New York: Routledge, 2013.
Zickuhr, Kathryn. “Who’s Not Online and Why.” Pew Internet & American Life Project. http://pewinternet.org/Reports/2013/Non-internet-users.aspx (Accessed December 2013).

Direct Home Sales
Every 2.5 seconds, someone in the world hosts a Tupperware party. While careers in direct home sales are frequently depicted as less serious than traditional occupations, the direct home sales industry rakes in $30 billion a year and has dramatically changed the relationship between women and paid work, with implications for families in the United States. While there are men in direct home sales, the industry is almost exclusively made up of women. Tupperware and Mary Kay are perhaps the best-known direct home sales companies, but the industry has exploded, and women now sell makeup, candles, bags, illegal purses, sex toys, children's toys, jewelry, clothes, kitchenware, scrapbooking materials, baskets, pajamas, lingerie, wall decals, home accessories, nutritional supplements, and more through home parties.

The Home Party Plan
Under the home party plan, independent contractors, called consultants, present their products at a party. Parties are hosted by interpersonal contacts of the consultant, and the party host provides food and beverages for the comfort of the guests. Direct home sales parties are characterized as fun, and consultants take a soft approach to sales. They attempt to "share quality products" with friends rather than making a hard sales presentation.

Brownie Wise invented the home party plan during her tenure at Tupperware. She believed that the value of Tupperware was best understood through demonstration. She found wild success as an

individual sales representative for Tupperware, outselling all retail sales outlets combined. Earl Tupper, the inventor of Tupperware, immediately hired her in the corporate office, and she implemented the home party plan on a wide scale. Wise realized that women needed a way to work while also serving as primary homemaker. The home party plan offered incredible flexibility and autonomy for women to make their own schedules. Wise also realized that many women needed praise and recognition of their accomplishments outside the home. Tupperware promised women luxury prizes and invited them to extravagant celebrations of their success. This technique paid off, and the promise of exciting prizes contributed to the allure of direct home sales. Mary Kay's famed pink Cadillacs remain a symbol of the promise of direct home sales success.

An integral part of the home party plan is its promotional potential. Most direct home sales companies use a multilevel compensation plan. When individuals reach their sales goals, they are promoted to a director position and oversee other individual consultants, taking a percentage of their sales. As the people below them grow and perform, the directors are continually promoted as a reward. Many high-level consultants make six-figure salaries, largely on the sales of the people below them on the organizational hierarchy.

Direct Home Sales History
Direct home sales and product parties emerged when women began their struggle between competing responsibilities at home and paid work. During World War II, millions of women took paid jobs supporting the war effort. As men returned home, women were expected to relinquish their jobs and return home. The support that working women experienced during the war eroded, and debates about women's right to work exploded. Working mothers faced particular criticism when taking up employed positions.
They were frequently accused of neglecting their children, abandoning their husbands, and contributing to the decay of society. Thus, when Tupperware offered an opportunity to work while in the domestic sphere, many women jumped at the chance to work for income while maintaining their positions in the home. At the same time that women started their struggle with work and life commitments, the historical context also provided a perfect opportunity for
the success of direct home sales. Set against suburbanization and consumerism, direct home sales tapped into a new market of home products for families enjoying a strong economy. Shortly after the war, many American families purchased expensive homes and moved into the suburbs. They needed expensive appliances and products to fill their new homes, and started consuming products in large quantities. The addition of home appliances reduced the number of hours required to keep homes functioning, and traditional homemakers suddenly found themselves with more time for leisure. Coupled with the enjoyment of the work they experienced during the war, and the increased consumption of their families, many women found direct home sales a perfect solution. Direct Home Sales as the Solution to Work–Life Balance Women in the United States, particularly mothers, continue to experience enormous pressure to tend to their children and homes as their primary life function. However, most women and mothers in the United States also work outside the home. Rather than working for extra spending money, women today usually work out of economic necessity. Direct home sales jobs continue to bridge the divide between work and home by offering consultants complete flexibility and scheduling autonomy. By altering the temporal and spatial parameters of work, direct home sales provide a unique opportunity for women to become more economically independent, to contribute to the family income, and to simultaneously maintain their roles as homemakers. While the promise of blending home and work remains appealing, not all consultants realize the dream. Many women join direct home sales companies, only to quit soon after because they do not make the promised income. Exact salaries are private, but some estimates suggest that the average annual income for a Mary Kay representative is around $1,200. Turnover rates for direct sales jobs are overwhelmingly high.
Some of the longer-tenured direct sales companies have support groups for former consultants, which provide accounts of the “dark side” of direct home sales. For example, Pink Truth is a Web site and blog that has posts from former consultants about their negative experiences with Mary Kay. Despite multiple accounts of dissatisfaction with direct home sales as a career,
the industry continues to boom, and women continue to represent and purchase products, and host and attend parties. Sarah Jane Blithe University of Nevada, Reno See Also: Cult of Domesticity; Family Consumption; Homemaker; Middle-Class Families; Separate Sphere Ideology. Further Readings Bax, C. “Entrepreneur Brownie Wise: Selling Tupperware to America’s Women in the 1950s.” Journal of Women’s History, v.22/2 (2010). Mullaney, J., and J. Shope. Paid to Party: Working Time and Emotion in Direct Home Sales. New Brunswick, NJ: Rutgers University Press, 2012. Williams, S. and M. Bemiller. Women at Work: Tupperware, Passion Parties, and Beyond. Boulder, CO: Lynne Rienner, 2011.

Disability (Children) Disabilities are impairments to an individual’s cognitive, developmental, emotional, mental, physical, or sensory functioning. Disabilities may be present from birth, or they may manifest at some later date. They can be inherited, congenital, acquired, or of unknown origin. How society deals with and supports individuals with disabilities has greatly affected the American family. For decades, those with disabilities were viewed as deficient by most of society, and their families were provided with little support. This left the disabled with few vocational or educational options, and created financial and social stress for the families of disabled individuals. Education, training, and treatment options have multiplied over the past few decades, broadening the options available to individuals with disabilities. Similarly, legislation, new understandings of disabilities, and changing values have combined to alter the experiences of individuals with disabilities and their families. Today, over 6 million children are identified as having disabilities that entitle them to receive special education services at their schools. However, many more children have
disabilities that either have not been diagnosed, or that do not require accommodations. Background Historically, those with cognitive, developmental, physical, or other disabilities have been shunned by society at large. Many were kept at home, whereas others were placed in institutions where they spent their entire lives apart from their families. All too frequently, disabilities were attributed to either God’s will or poor parenting, which created a stigma that many families found humiliating. As a result of this stigma, most families felt uncomfortable discussing their challenges and needs. Most care for the disabled was paid for by that individual’s family, with some services provided by charities. Because disabled individuals were provided with few educational or vocational options, their care was the financial responsibility of their families, which sometimes created difficulties. After the U.S. Supreme Court decision in Brown v. Board of Education in 1954, however, society began to reexamine who was excluded from public schools. The civil rights movement ended racial segregation in schools, and in the following decades, discrimination against others in public schools, especially those with disabilities, was reexamined. In 1975, the U.S. Congress passed the Education for All Handicapped Children Act, which was later reauthorized as the Individuals with Disabilities Education Act (IDEA). Up until that time, fewer than 20 percent of children with disabilities received services at, or indeed attended, public schools. Many states and local school districts had specific regulations barring certain children, such as those who were blind, deaf, or cognitively disabled. IDEA changed this, requiring schools to provide educational and rehabilitative services to children identified as having special needs. Services Offered in Schools A variety of services are offered to children with disabilities as a result of IDEA.
These services, of course, vary depending on the type of disability facing the child and the severity of that disability. Even after the passage of IDEA, children with disabilities were not automatically qualified to receive special services from their public schools. Disabilities covered under IDEA include autism,

A child and staff member enjoy annual holiday carols at the St. Mary’s Home for Disabled Children in Virginia. The home supports more than 80 disabled children.

blindness or visual impairment, deafness or hearing impairment, intellectual disabilities, orthopedic impairments, emotional disturbance, and speech or language impairments. Children who qualify as having a disability either under the Americans with Disabilities Act (ADA) or section 504 of the Rehabilitation Act of 1973 automatically qualify for special services pursuant to IDEA. These special services include an individualized educational program (IEP) that is especially developed for the student in consultation with the school district, the student’s parents, and other professionals. The IEP is written after a student study team has determined the need for special services, and contains what services are to be provided, how frequently, and for how long. IEPs also document a child’s current level of academic performance, how his or her disability impedes that performance, and
specify accommodations or modifications in the general education program necessary to help the child perform better. While in previous decades, such accommodations or modifications involved special classrooms or schools, more recent interpretations of IDEA’s mandate for the “least restrictive environment” allow the child with a disability to receive as many services as possible in the general education classroom. Services provided pursuant to an IEP may include transportation, speech-language pathology and audiology services, psychological services, physical and occupational therapy, music therapy, and therapeutic recreation. In addition to these, counseling services, including rehabilitation counseling, orientation and mobility services, and medical services for diagnostic or evaluation purposes, are also available to children with disabilities. Because such services have a greater effect if they are begun at an early age, early identification and assessment of disabilities are a vital part of the special education program at public schools. These services permit a child with a disability to receive a free and appropriate public education. Traditionally, services for children with disabilities were provided outside the general education classroom, either by a resource teacher in a special classroom or at a special school. This was because most general education teachers did not have the training or skills to provide such services, and the prevailing attitude was that it was more efficient to have such students served by specialists. Children with visual or hearing impairments were frequently sent to schools for the blind or deaf, which kept them segregated from much of society. Partly due to cost considerations, and partly due to concerns regarding access, today most accommodations are provided in the general education classroom.
Assistive Technology and Other Supports Children with disabilities often use assistive technology to deal with or overcome a disability. Such technology includes wheelchairs, prosthetic limbs, walkers, Braille books, hearing aids, and assistive communication devices, all of which can assist a child with a disability to better access the curriculum. As personal computers and other electronic devices have become omnipresent, such devices
have been introduced to assist children with disabilities. These innovations include speech recognition software, screen readers, and augmentative and alternative communication systems. Speech recognition software permits a computer to create text based upon the speech of a child with a disability, permitting those with physical impairments to use computers if they are unable to use a keyboard. Screen readers allow users to access information displayed on a computer screen by having a program “read” the text and transform it into spoken word, Braille text, or some other form accessible to the child. Augmentative and alternative communication (AAC) systems help children who are unable to speak or write to communicate their thoughts to those around them. These forms of assistive technology continue to evolve and improve, granting more options to children with disabilities. Legislation has also helped children with disabilities by requiring private individuals, corporations, and government agencies to provide accessibility to them. Commercial organizations such as restaurants, retail stores, hotels, and the like must make reasonable accommodations to individuals with a physical or mental impairment. When children reach the age of 21, they are no longer covered by IDEA, so they must rely on the ADA or the Rehabilitation Act of 1973 to protect them against discrimination or inaccessible accommodations. Certain states have also passed legislation to protect those with disabilities by providing additional protections or rights. The disability rights movement refers to efforts to secure additional protections or modifications for those with disabilities. Increased accessibility and safety are the chief goals of the disability rights movement, especially in regard to buildings, transportation, and the physical environment. Families that include children with disabilities have benefited from these actions. Stephen T. Schroth Jason A.
Helfer Knox College See Also: Assisted Living; Civil Rights Movement; Education, Elementary; Education, High School; Education, Middle School; Education, Preschool; Head Start; Primary Documents 1994.


Further Readings Bickenbach, J. E. Ethics, Law, and Policy. Thousand Oaks, CA: Sage, 2012. Harwell, J. M. and R. W. Jackson. The Complete Learning Disabilities Handbook: Ready-to-Use Strategies and Activities for Teaching Students With Learning Disabilities. San Francisco: Jossey-Bass, 2008. McDonnell, J. and M. L. Hardman. Successful Transition Programs: Pathways for Students With Intellectual and Developmental Disabilities, 2nd ed. Thousand Oaks, CA: Sage, 2010.

Disability (Parents) The right to parent without outside interference is constitutionally guaranteed, but it is limited by an equal right of the state to intervene on behalf of children. Family court proceedings to determine whether or not a child should become a ward of the state, or which parent is competent to retain custody, are often unfair because the rules are differentially or unjustly applied to parents with disabilities. Adoption, both nationally and internationally, is biased against disabled people. Such individuals are also discriminated against in trying to obtain assisted reproductive technology. More importantly, many disabled individuals need help with activities of daily living (ADL) such as bathing, cooking, cleaning, and shopping; this extends to parenting duties if such individuals have children. However, the government does not provide financial assistance for disabled parents to take care of their nondisabled children because such personal assistance services programs (PAS) are for the primary recipient only, not for dependents. Other problems include finding suitable housing and paratransit systems that allow children. Disabled women are undersupported with regard to reproductive services. Overall, there is a shortage of support groups and systems. Service-providing agencies lack awareness of the disability status of their clients. Numbers In 2012, there were anywhere from 4.1 to 9 million disabled parents in the United States, out of 13.2 million disabled people total. There are also
grandparents with disabilities who have become primary caregivers for their grandchildren. Despite the obstacles, approximately 6.2 percent of all American parents of children under age 18 are disabled. For American Indian/Alaska Natives, the percentage of disabled parents is 13.9 percent. Of African American parents, 8.8 percent are disabled. The percentage of white parents who are disabled is 6 percent; the percentage of Latino/Hispanic parents who are disabled is 5.5 percent; and the percentage of Asian/Pacific Islander parents who are disabled is 3.3 percent. There are 6.6 million children with disabled parents, which accounts for 9.1 percent of all U.S. children. Limited data most likely underreports the number of parents with specific disabilities; according to one survey, 2.8 percent of parents have mobility issues; 2.3 percent have cognitive disabilities; 1.4 percent have a hearing disability; and 1.2 percent have a vision disability. The differing estimates of the number of disabled adults reflect the different definitions used by the American Community Survey (ACS), the National Health Interview Survey, and the Survey of Income and Program Participation. Another reason for the discrepancy is that some surveys do, and some do not, include parents with children who are 18 years or older, or whose children live elsewhere. Additionally, survey sophistication varies; for instance, the ACS of 2008–09 began differentiating between deaf/hearing impaired and blind/vision impaired parents, categories that earlier ACS studies and other data sources lumped together. The figure of 4.1 million disabled parents is the most recent estimate, dating to 2012. Whether a parent’s disability is intellectual, psychiatric, physical, sensory, or developmental, such parents may find that pervasive and systemic discrimination restricts their opportunities to become or remain parents. Disabled parents tend to be less educated than the U.S.
average, with 12.6 percent holding college degrees, compared to 30.8 percent of the nondisabled. With regard to high school, 76.5 percent of disabled parents have a high school diploma or higher, compared with 87.2 percent of others. Money is another issue that divides the disabled and nondisabled, with 52 percent of disabled parents on Supplemental Security Income (SSI), and a significant percentage on Social Security Disability (SSD), Supplemental Nutrition Assistance Program (SNAP), and Temporary Assistance for
Needy Families (TANF). Even this assistance is insufficient for the needs of many families with a disabled parent. Similarly, the median income for a family with a disabled parent is $35,000, compared with $65,000 for all families. Legal Issues Two-thirds of dependency statutes allow courts to use disability as the basis for a finding of parental unfitness, which is the basis for loss of parental rights. In all states, family or dependency courts use disability as a criterion for determining what sort of custody is in the best interest of the child. The theoretical requirement of linking an uncorrectable disability to actual harm to the child is disregarded in practice, and a disability’s hypothetical potential for harm is sufficient to cost parents custody. Additionally, the family law system is often systemically biased against disabled parents because state laws often discriminate against, or fail to protect, disabled parents in the face of unfounded accusations of unfitness, and because courts often lack familiarity with, let alone expertise in, the capabilities of disabled parents raising their children. In family or dependency hearings, disabled parents may encounter evidence based on inappropriate or unadapted evaluations of parenting capacity. Although the child welfare system is customarily assigned the task of dealing with parents with disabilities and their children, its solution is often to separate parents and children. This happens because many state laws list disability as a criterion for termination of parental rights. Furthermore, some provisions of the Adoption and Safe Families Act of 1997 seem to promote, perhaps mandate, separation, and some courts read the Americans with Disabilities Act as having only limited applicability, especially when termination is on the table. The bias is against disabled parents; the perception is that they are by definition unfit; and welfare workers are not trained to recognize the capabilities of disabled persons.
The eugenics movement of the first half of the 20th century led over 30 states to legalize involuntary sterilization under the argument that disabled and other allegedly socially inadequate populations should not have children because such children would become a burden to the state. The Supreme Court upheld enforced sterilization, and by 1970 over 65,000 people in the United States had been
involuntarily sterilized. Two decades after the 1990 enactment of the Americans with Disabilities Act, some states still retained laws allowing involuntary sterilization. Women with disabilities are still encouraged, perhaps coerced, to abort their fetuses or be sterilized because others regard them as unfit for motherhood. Increasingly, those with intellectual or psychiatric disabilities are under pressure to become sterilized. Historically, discrimination against disabled parents has affected those with sensory or physical limitations, but legal authorities are increasingly ruling against such disabilities as autism. This and other disabilities made visible by improved diagnostics are the disabilities of the future. As improved diagnostic tools reveal greater numbers of adults with these disabilities, the number of instances of revocation of parental rights due to disability may grow. Removal rates for parents with psychiatric disabilities can be as high as 70 to 80 percent. For intellectual disabilities, rates range from 40 to 80 percent. Physically disabled parents face discriminatory treatment in 13 percent of cases, and rates of child removal and loss of parental rights for blind or deaf parents are also quite high. Disabled parents are also more likely than others to lose custody after a divorce, have greater difficulties in adopting, and have more restricted access to reproductive health care. Self-Help The Disabled Parents Network (DPN) is a national organization with the goal of increasing acceptance of disabled parents and providing them with support and information. The DPN vision is that those who are disabled or have long-term health issues might enjoy the same family life as other Americans. To make this happen, they advocate for better health, education, and social services. When those services are not available for those who need them, the DPN offers personal support and advice, advocacy, peer support groups, and online discussion groups.
Another support group, Parents with Disabilities Online (PDO), has been in operation since 1996. Its statistics note that over 8 million families in the United States have at least one disabled parent. PDO advocates that disabled parents share their experiences, including their success stories and their failures, because this information might help other disabled parents and those who live with
them. Similarly, the Parent Empowerment Network is a free subscription e-mail forum for parents with disabilities, disabled persons with aspirations toward parenthood, and nondisabled partners. The organization’s Web site offers topics such as accessible and independent parenting, society’s attitudes, reproductive issues and pregnancy/childbirth, and general child care and rearing. Conclusion Although the incidence of disabled adults who seek to have families seems to be increasing, there is little research or information on how common this is, and what such parents need. Furthermore, funding for research is scarce, as are programs for assistance and education about the needs and goals of both parents and children. John H. Barnhill Independent Scholar See Also: Adoption Laws; Assisted Living; “Best Interests of the Child” Doctrine; Disability (Children); Genetics and Heredity; Mental Disorders. Further Readings Disabled Parents Network. http://disabledparentsnetwork.org.uk/about-us (Accessed December 2013). Kaye, H. Steven. “Current Demographics of Parents with Disabilities in the U.S.” Through the Looking Glass (2012). http://www.lookingglass.org/services/national-services/220-research/126-current-demographics-of-parents-with-disabilities-in-the-us (Accessed December 2013). National Council on Disability. “Rocking the Cradle: Ensuring the Rights of Parents With Disabilities and Their Children. Executive Summary” (2012). http://www.ncd.gov/publications/2012/Sep272012 (Accessed December 2013). Parents with Disabilities Online. “Let Your Parenting Journey Begin.” http://www.disabledparents.net (Accessed December 2013).

Discipline The controversy over disciplining children most frequently relates to the most appropriate strategies

to use. The term discipline means to teach or train, but the word has different connotations for different people. When used in terms of adult behavior with children, discipline most frequently refers to an effort to correct or eradicate undesirable behavior. Whether the guidance is corrective or preventative in nature, adults use a variety of approaches to teach those in their charge. Generally speaking, a parent’s discipline strategies depend upon his or her fundamental beliefs about the role of a disciplinarian, including the purpose behind disciplinary actions. This belief system is shaped by a number of factors such as culture, age, education, and personal experiences with discipline. Discipline Styles Discipline styles are frequently described as authoritarian, permissive, and authoritative. While the intention behind all three approaches is to influence a child’s behavior, the style used to accomplish that goal varies. For example, imagine how a parent might respond to an 8-year-old jumping on the living room couch. A parent with an authoritarian discipline style will provide the most rigid form of control, reminiscent, some might say, of a military drill sergeant. This person will provide a child with specific instruction on proper behavior and expect unquestioned compliance with stated guidelines. The authoritarian adult will use direct commands to guide a child’s behavior, with little room for negotiation. The authoritarian adult expects the child to do as he or she is told, and if the child does not comply, negative consequences are swiftly doled out. The authoritarian approach emphasizes complying with established norms, and minimal consideration is given to having the individual develop independent decision-making skills. A parent with an authoritarian discipline style may tell a child jumping on the sofa to immediately stop, or he will be spanked. If the child challenges the adult, the threatened punishment (spanking) will be swift. 
Permissive discipline, on the other hand, tends to provide a child with very little direction. An adult with a permissive discipline style may offer a child suggestions or options for behavior that may not always be consistent. Additionally, the permissive adult may state an expected behavior, but offer no consequence if the direction is not followed. For this reason, the permissive style is sometimes referred to as the “doormat” approach. Children
essentially do what they want, despite the adult’s disapproval, because consequences are few and far between or inconsistent. Typically, those who use a permissive approach are trying to avoid being too controlling. Adults demonstrating this style might explain that they do not want to force children to make the right choice through punitive measures, but prefer instead to encourage them to behave properly through suggestion. While those using this approach intend for the child to develop long-term decision-making skills, the lack of consequences often prevents this outcome because the child learns that anything he or she wants to do is acceptable. A person who is more permissive in his or her discipline approach may lean toward being more indulgent or more indifferent, but in either case tends to be concerned with the child’s happiness, rather than with compliance. A parent with a permissive discipline style may tell a child jumping on a sofa that furniture is not a toy, but may refrain from explicitly asking him to stop jumping, and offer no consequences for continuing the behavior. The authoritative discipline style is what some might call a happy medium between authoritarian and permissive approaches. Parents exhibiting this style address a child’s behavior by providing direct guidance through appropriate options and meaningful consequences. Generally, those adhering to an authoritative philosophy are in tune with a child’s individual needs, while also attending to the context around them (such as rules, others’ rights, or safety). An important facet of the authoritative approach is the emphasis on developing long-term independent decision-making skills. By setting explicit guidelines and identifying specific consequences, acceptable behavior is clearly described, and consequences related to unacceptable behavior are explained. This provides the child with guidance on how a responsible decision is made.
An adult with an authoritative approach might say to a child jumping on the sofa, “We do not treat our furniture this way; you could ruin the sofa or get hurt. Go outside if you want to jump around or you get a time out.” The noncompliant child is then swiftly placed in a time out. The categories of authoritarian, permissive, and authoritative are mostly used to discuss a person’s overall discipline style. Some people tend to make most of their discipline decisions by using an authoritarian framework; others mostly adhere
to a permissive or authoritative approach. A person who subscribes to one particular style may occasionally demonstrate behaviors that fall outside that framework. Each of the discipline styles is fundamentally rooted in the desire to teach children right from wrong, a process that varies greatly over the course of childhood and requires a variety of strategies that can be difficult to lump into one philosophical framework. Consequences There are consequences to all behaviors. When working with children, adults will frequently impose a punishment for undesirable behavior and a reward for desirable behavior. Sometimes, these other-imposed consequences are logically related to the behavior, but other times they are not. On the other hand, the consequences for a child’s behavior do not have to be other-imposed at all, while still serving as a highly effective learning opportunity. A logical consequence is one that is directly related to the child’s behavior, but is not a direct outcome of the child’s action. When an adult is teaching appropriate and/or prosocial behaviors, he or she structures the consequence of an undesired behavior by directly addressing that behavior in a way that emphasizes respect, is relevant to the offense, and is also realistic. Using logical consequences, rather than punishment, is a preferred strategy of those who want to help children learn from their mistakes. Logical consequences require the child to know rules and limits, and to use alternate behaviors to comply with them. In the event that a child’s behavior hurts someone else, reparation is also a part of the process. Consider the following situation: John’s family walks their plates to the kitchen sink using two hands after dinner. Last night, John was in a hurry, so he skipped to the sink, holding his plate with one hand. The plate hit the corner of the counter, and it shattered.
One logical consequence in this situation would be to have John purchase a replacement plate. Therefore, his parents might help him find a way to earn the money to replace it. Those who intentionally implement logical consequences do so using a nonpunitive approach. That is, the experience is designed to be a positive learning experience, rather than a punishment for the disobedience. In the example above, the family
had an established expectation of taking plates to the kitchen sink in a responsible manner. From his behavior, it appears that John did not understand that he could be hurt or the plate could break if he disobeyed the rule. The parents chose a logical consequence as a teaching tool for addressing the problem by requiring John to fix his mistake by replacing the plate. A natural consequence, on the other hand, occurs without adult involvement; it is a direct result of the child’s behavior. For example, if a child refuses to eat dinner, a natural consequence would be that the child is hungry at bedtime. Sometimes, natural consequences are sufficient as teaching opportunities. Perhaps the child who refused to eat dinner will remember that she was hungry at bedtime when she sits down to dinner the next day. However, if a parent (or other adult) interferes with a natural consequence, its effectiveness is weakened or eliminated. If, when the child complains of being hungry at bedtime, the parent provides a snack or even an alternate dinner, the natural consequence of hunger from not eating dinner with the family no longer exists. Many learning opportunities are available through natural consequences. However, adults must be careful when deciding to rely upon the natural consequence as discipline. First, the natural consequence must be one that is undesirable to the child. For example, if a child successfully steals a pack of gum from the grocery store, the natural consequence is that she has some free gum. The free gum might actually reinforce the stealing behavior. In that case, using a logical consequence, like bringing the gum back to the store and having the child apologize to the store manager and pay for it out of her allowance, might more appropriately reinforce the idea that stealing is not a desirable behavior. Also, a natural consequence might not be an acceptable outcome.
For example, the natural consequence of playing with matches is that a child could be seriously burned or cause a catastrophic fire. For most people, neither of these natural consequences is acceptable, and a logical consequence should be identified to correct this behavior. Punishment, in the simplest of terms, is a punitive consequence imposed by an authority figure that is intended to decrease undesirable behavior. Punishment occurs by either inflicting something

undesirable on, or removing something desirable from, a child in response to behavior viewed as unacceptable. Punishments may be reasonable for a given situation, but they are not typically considered logical consequences, as defined above, because the emphasis is on avoiding the punishment in the future, rather than on understanding a situation. It is generally thought that by introducing a negative experience in response to an undesired behavior, the offending behavior will cease to occur. In the United States, the terms discipline and punishment are often erroneously used synonymously.

Common Discipline Strategies
In the United States, several strategies are commonly used to teach children appropriate behaviors. Time out is a strategy wherein a child is temporarily removed from the environment in which an undesirable behavior is occurring. For example, if a child is throwing a tantrum in the kitchen because she wants a piece of candy, the mother removes the child from the kitchen and the presence of the candy. After time alone in another part of the house, the child is hopefully able to address the situation with a calmer perspective. The most effective way to conduct a time out is open to debate. Proponents who use time out as a discipline strategy feel that the isolation experienced by the child is sufficient to deter future undesired behavior. Others advocate using time out as a cool-down period for both adult and child, a strategy to defuse anger before discussing the undesired behavior. Further debate concerns the manner in which a child sits in time out, and the amount of time that a child should be required to sit quietly before being released. A child’s age and developmental stage should be taken into account when making these decisions, as should factors contributing to the misbehavior that may be beyond his or her control. Spanking and corporal punishment are common, albeit controversial, discipline strategies.
Generally, spanking refers to slapping a child on the buttocks with the open palm of a hand as a punishment for misbehavior. Striking a child on other body parts with either a hand or an object, such as a belt, is referred to as corporal punishment. Both practices are controversial, with many who believe that a good spanking is necessary for discipline,

and many who believe that it is a form of child abuse. Researchers also disagree about the overall effectiveness and appropriateness of spanking. Advocates insist that the brief pain felt on the buttocks, which causes no physical injury, is an effective deterrent to undesirable behavior and has no long-term negative impacts. Opponents argue that the intentional pain inflicted by adults is psychologically damaging and leads to anxiety or aggression problems later in life, even with minimal exposure to spanking. Many who oppose spanking express concern about the possibility of physical maltreatment that can occur when spanking gets out of hand. Many children who were spanked or otherwise physically disciplined by their parents raise their children using the same practices, believing that what worked for them will work for the next generation. Others who suffered at the hands of their parents forswear the practice as unnecessary and barbaric when it comes to their children.

Another common form of discipline is grounding. In this practice, a child is deprived of some privilege as a consequence for undesirable behavior. The method of grounding depends upon the age of the child, and can include removal of the child’s cell phone for a certain period of time, prohibition from social activities, or restriction from the use of electronics, such as video games.

Tara A. Newman
Stephen F. Austin State University

See Also: Adolescent and Teen Rebellion; Attachment Parenting; “Best Interests of the Child” Doctrine; Child Abuse; Child Care; Childrearing Practices; Curfews; Nature Versus Nurture Debate; Parent Education; Parent Effectiveness Training; Parenting; Parenting Styles.

Further Readings
Donaldson, J. M., and T. R. Vollmer. “An Evaluation and Comparison of Time-Out Procedures With and Without Release Contingencies.” Journal of Applied Behavior Analysis, v.44 (2011).
Fields, Marjorie V., and C. Boesser. Constructive Guidance and Discipline. Boston: Pearson, 2014.
Gershoff, Elizabeth Thompson. “Corporal Punishment by Parents and Associated Child Behaviors and Experiences: A Meta-Analytic and Theoretical Review.” Psychological Bulletin, v.128/4 (2002).


Disney/Disneyland/Amusement Parks

The amusement parks that cartoonist, filmmaker, and business entrepreneur Walt Disney created beginning in the 1950s are a uniquely American phenomenon, and his influence continues to be felt many decades after his death. Disney transformed the concepts of hospitality, branding, and marketing, in the process making the Mickey Mouse logo one of the most recognizable brand images in the world. Disney also transformed the family vacation by creating Disneyland in Anaheim, California, and Disney World in Orlando, Florida, both of which have become destinations that have offered generations of families memories to last a lifetime at “the happiest place on Earth.”

History
Amusement parks are destinations filled with entertainment, food, music, games, and rides that evolved from the carnivals of 16th-century Europe. The first amusement park can be traced back to performers who entertained crowds that gathered in Denmark near a natural spring believed to have healing waters. Dyrehavsbakken, as it came to be known, is still an amusement park; it now has multiple roller coasters, restaurants, and circus-style performers, and draws approximately 2.5 million visitors each year. American amusement parks grew out of the world expositions and state fairs that gained popularity in the late 1800s. Specifically, the Chicago World’s Fair in 1893 had an area devoted to rides and amusements called the Midway Plaisance; the midway continues to be a standard feature of amusement parks. The next several years heralded a golden age of amusement parks in the United States. Once electric trolley cars were established in heavily populated areas, amusement parks were built as destinations along some of the lines.
Additionally, areas such as the boardwalk in Atlantic City, New Jersey, and Coney Island in Brooklyn, New York, built multiple hotels and restaurants to accommodate visitors to the parks, thus expanding their capacity to attract millions of tourists each year. Realizing that the parks would be even more attractive if they appealed to all ages, entrepreneurs introduced the


Walt Disney World, located in Lake Buena Vista, Florida. Built on 30,000 acres of mainly swampland near Orlando and opened on October 1, 1971, the resort is the flagship of Disney’s worldwide theme park empire. It is now by far the most popular theme park resort in the world, with attendance of 52.5 million people annually.

concept of “kiddie parks” with rides and amusements aimed at children. Attendance at amusement parks declined during the Great Depression and continued its downward trend during World War II. The suburbanization of America after the war, as well as the availability of television, further reduced American families’ interest in amusement parks. They had developed a reputation for being somewhat seedy and filled with unsavory characters, no longer appropriate for family entertainment. By 1964, the last of Coney Island’s large theme parks closed down. Some amusement parks in other areas of the country continued to do well, but they were the exception. Cedar Point in Sandusky, Ohio; Six Flags in Arlington, Texas; and Kennywood near Pittsburgh, Pennsylvania, for example, drew increasing numbers of family vacationers as smaller local amusement parks went out of business.

Walt Disney
The cartoon character Mickey Mouse debuted in the 1928 film Steamboat Willie, and over the next several decades, he continued to appear on film and in print with his frequent companions Donald Duck, Pluto, Goofy, Minnie Mouse, and

Daisy Duck. The characters were the stars of Walt Disney Studios, the enormously successful movie company and animation studio founded by Disney in 1923. Fans often wrote Disney to ask if they could tour the studio, but he believed that a tour would not be very interesting to them. Instead, he began thinking about creating an eight-acre park near the studio with rides that families could enjoy together. Disney and his designers (known as “Imagineers”) began drafting a plan, while Disney bought 160 wooded acres in Anaheim, California. Securing the funding for the park proved difficult, but Disney made a deal with the ABC television network to air his new anthology series, Disneyland (later renamed Walt Disney’s Wonderful World of Color, and then The Wonderful World of Disney), on the unproven network in exchange for funding. The series debuted in 1954, the same year that construction on Disneyland began. By the time Disneyland opened to the public in 1955, Walt Disney Studios had released 26 successful movies, many featuring iconic Disney characters such as Peter Pan, Cinderella, Snow White, and Dumbo. That was also the year that The Mickey Mouse Club debuted on television. Within three months of opening, Disneyland had



welcomed its 1 millionth guest. Disney’s gamble was embraced by American families. Over the next 10 years, Disney acquired almost 30,000 acres of land near Orlando—much of it swampland—and in 1965, he announced plans to open a second, more ambitious theme park that would feature innovations for urban living. Just one year later, however, Disney died, and his brother, Roy Disney, oversaw the construction of the park, which was named Walt Disney World. The complex included the Magic Kingdom theme park, three resort hotels, and a golf course. Disney’s concept of a planned city was incorporated into the Experimental Prototype Community of Tomorrow (EPCOT), which opened in 1982. The third theme park at Walt Disney World, Disney–MGM Studios (later renamed Disney Hollywood Studios), opened in 1989 with a focus on show business. The fourth theme park, Disney’s Animal Kingdom, opened in 1998. In 2001, Disney California Adventure joined the Disneyland Resort. In 2013, the massive Walt Disney World entertainment complex housed the four theme parks, two water parks, 33 resorts and hotels, five golf courses, a racing facility, a wedding pavilion, a sports complex, and more than 300 eating establishments. With more than 66,000 employees, it is the largest single-site employer in the United States. More than 47 million people from around the world visit there annually. Disneyland Resort has two parks (each with eight themed areas) and three hotels, with approximately 22 million annual visitors. There are also Disney theme parks and resorts in Paris, Tokyo, and Hong Kong. A Disney cruise line, based in Florida, was added to the lineup in 1996.

The Disney Way
Despite the fact that the Walt Disney Company is a multibillion-dollar conglomerate and one of the six companies that control a sizable portion of the U.S. entertainment industry, it has generally maintained a reputation for providing wholesome family fare.
The “Disney way” of doing business has been analyzed by many experts in an attempt to replicate its success. One such Disney tactic that has helped keep its image pure is vigilantly protecting its trademarked characters and monitoring how they are portrayed, threatening legal action against even small businesses that use images without permission. Additionally, the Disney mystique is


maintained by the company’s commitment to innovation, the immersive experience, employee pride, and near-fanatical customer service, all of which can be traced back to Disney’s personal philosophies of service and entertainment. Disney also has the benefit of cyclical advertising, in which the television shows, movies, and theme parks are all vehicles used to promote one another. In fact, Disney was the first studio to advertise movies on television. Regardless of their jobs, Disney park employees are all considered “cast members,” supporting Disney’s notion that everyone plays an equally important role in entertaining the park’s guests. Also, with the exception of costumed characters, all Disney employees, including the CEO, wear name tags bearing their first names while at the parks. The parks are known for their cleanliness, and it is unusual to see even a stray popcorn kernel or piece of confetti on the ground for more than a moment. Toward this end, gum is not sold in any of the Disney parks. From the beginning, Disney parks charged an admission fee (it was not until the early 1980s that the fee included use of all rides and attractions), and they were built in areas that were not easily accessible by public transportation. This has caused some to speculate that the admission fee and the need for a car were designed to limit guests to primarily suburban families, theoretically reducing the number of “objectionable” individuals gaining access to the park. The Disney parks shifted the focus of amusement parks from individual rides to the broader guest experience. At Disney, guests are encouraged to have “magical” days, and the cast members work to immerse them in a world that is separate from reality. Disney parks and resorts have become a destination vacation for families, rather than a single-day excursion, and the emphasis on families and nostalgia has fostered multigenerational loyalty.
The Disney Look
In order to ensure that his park had a more wholesome feel than other amusement parks, Disney created an appearance code for all park employees that codified the “Disney look.” The Disney look has been criticized for its homogeneity and rigidity, but its goal was to present a unified, polished, and professional appearance by all cast members. The code regulates everything from fingernail length to


facial expressions (e.g., no frowning) and gestures (e.g., pointing is allowed only with two fingers or an open palm). In 2000, the policy was rewritten to allow male cast members, who for decades were forbidden to have facial hair, to have a neatly trimmed mustache. In 2010, the code allowed female cast members to forgo pantyhose when wearing skirts, and it was again amended in 2012 to allow closely trimmed beards and goatees on men. Some former employees have alleged that they were discriminated against when they were not allowed to wear articles of clothing required by their religion (such as a turban or hijab) because they did not conform to the Disney look. To further enhance the immersive experience, cast members are prohibited from being seen in their costume or uniform except in the appropriate area. Reportedly, this stemmed from the dissonance experienced by Walt Disney when he saw a cast member dressed for Tomorrowland walking through Frontierland. The dress code for guests has been largely unwritten, but in the 1960s, male visitors were sometimes refused admittance if their hair was too long, and women were refused admittance if they were wearing halter tops. Visitors are still asked to wear clothes that have no offensive words or images on them, and guests more than 10 years old are prohibited from wearing costumes. In 2012, a man resembling Santa Claus who was giving autographs to other guests was asked to leave after refusing to change his Christmas-themed clothes. For a more authentic Disney look, girls younger than 10 years old can pretend to be their favorite princess at the Bibbidi Bobbidi Boutique as they get special hair, makeup, and nail treatments for a range of fees. Boys can also get the “knight package,” which includes hair styling and a sword and shield.
While the further immersion into the illusion is compelling, this division also reinforces traditional gender role stereotypes by emphasizing appearance in girls and aggression in boys. This has been a common critique of Disney movies as well.

Safety and Lawsuits
Though there have been several injuries and even deaths at the various Disney parks, many of them have been the result of guests disregarding safety instructions, having complications of a known or unknown medical condition, or deliberately attempting to circumvent existing boundaries (e.g.,

jumping off of rides, swimming in restricted areas). Both Disneyland and Disney World have fire stations (some with Dalmatian themes or fountains in the shape of giant fire hydrants) that provide emergency medical care to guests and cast members at the Disney parks and hotels. Disney parks have been the defendants in multiple lawsuits, which are often brought by the families of individuals who have been injured or killed at one of the parks. Additionally, many individuals have sued Disney, alleging that the costumed characters groped, hit, or were otherwise inappropriate with them or their children. All lawsuits of this type have been dropped when demonstrations with the costumes showed that the alleged behaviors would be physically impossible. In 2013, two families alleged that two characters (Donald Duck and the White Rabbit) exhibited racist behavior by ignoring their African American children in favor of attending to white children. The Walt Disney Company has also been criticized for the lack of ethnic and racial diversity on its senior management team. Implicit and explicit racism has also been repeatedly raised as a concern with regard to Disney movies. For instance, many of the Disney villains have black fur (e.g., Scar in The Lion King), dark skin (e.g., Jafar in Aladdin), black clothes (e.g., Ursula in The Little Mermaid and Maleficent in Sleeping Beauty), or foreign accents (e.g., the Siamese cats in Lady and the Tramp). Conversely, Disney has been praised by LGBT groups for being one of the first major corporations to offer employee benefits to same-sex domestic partners. The parks have also been praised for creating an atmosphere of inclusion for gay and lesbian individuals, and they have allowed same-sex weddings to take place in the wedding pavilion since 2007.

Diana C. Direiter
Lesley University

See Also: Commercialization and Advertising Aimed at Children; Middle-Class Families; Same-Sex Marriages; Suburban Families; Television for Children; Vacations.

Further Readings
Bryman, Alan. “The Disneyization of Society.” Sociological Review, v.47/1 (1999).

Forbes, Bruce David. “Mickey Mouse as Icon: Taking Pop Culture Seriously.” Word & World, v.23/3 (2003). Giroux, Henry. The Mouse That Roared: Disney and the End of Innocence. Lanham, MD: Rowman & Littlefield, 2010. Weinstein, Raymond M. “Disneyland and Coney Island: Reflections on the Evolution of the Modern Amusement Park.” Journal of Popular Culture, v.26/1 (1992).

Divorce and Religion

Estimates are that 65 percent of Americans report having a religious affiliation, and about 43 percent regularly attend religious services. Religion, defined as an organized system of beliefs and rules used to worship a god or group of gods, is important for many in the United States. Americans are generally more religious and involved in more religious organizations than citizens of most other industrialized nations in the world. Religion shapes social, political, and moral attitudes and values, and has a positive association with mental health and overall life satisfaction. For many, religious affiliation is a core marker of identity. Most religions emphasize and reinforce positive values that are taught in the home, such as respect, kindness, trust, commitment, forgiveness, and compassion, which strengthen family ties and marital relationships. Given these positive spiritual behaviors, religion shapes and supports values and norms that are consistent with couples’ attitudes and beliefs about life in general, and more specifically within the family system. As a whole, married couples who adhere to a religious organization’s set of beliefs and attend religious meetings together are less likely to divorce, regardless of the specific religious affiliation. While religion is a complex, multidimensional phenomenon, simply affiliating with a religious group or denomination exposes individuals to the various tenets of that group. Religious groups have historically adopted and adhered to more conservative views on social issues, including marriage and divorce. However, denominations have become more internally heterogeneous and liberal regarding social issues since the end of World War II, and religiosity as a whole experienced a decline across


the 1980s and 1990s. Although different religious groups may adopt both conservative and liberal views across the United States, religion still influences individual and family decisions regarding household labor, childrearing, cohabitation, family formation, and family size.

Divorce as a Sacred Loss
Divorce rates in the United States sharply increased during the 1960s and 1970s, and then declined somewhat in the 1980s, before leveling off. By historical standards, however, the divorce rate is still relatively high. During the divorce process, individuals may turn to religion and churches for emotional help and financial assistance. From the perspective of religious couples, divorce is often interpreted as a desecration or sacred loss, which for some is closely linked to depressive symptoms. Spiritual conflict as a result of this loss of a union under God, such as feeling abandoned, betrayed, or punished by God, or questioning God’s power, can deepen the depression. These self-destructive spiritual experiences can disrupt the psychological well-being of a person and decrease the capacity to effectively adapt to the changes resulting from the divorce. When these symptoms are present, it is not uncommon for individuals to draw on adaptive spiritual coping methods to deal with the traumatic loss. Spiritual coping, such as praying to draw on support from God, attending church services and activities, reading holy writings, and embracing thoughts like “God will provide and help,” has been found to offer both peace and motivation to individuals as they struggle through the process of divorce. Spiritual living and religious coping strategies, such as prayer, have been specifically shown to relieve depressive symptoms and buffer the effects of divorce.

Religion as a Predictor of Divorce
Homogamy suggests that individuals are attracted to and have more fulfilling relationships with people who are similar to them.
People who share values, attitudes, and goals feel validated and report that their relationships are more rewarding. This concept largely holds true in the realm of divorce and religion as well. Couples who share similar religious beliefs and expectations and belong to the same denomination have historically reported higher levels of marital quality, and are less prone to divorce than couples who have religious differences. However, this link


has significantly weakened in recent decades among the younger generation, partly due to the relative influence of structural and secular changes related to work, gender, and family issues, and a decline in perceptions of religious authority. Same-faith marriages also provide a sense of unity, purpose, and commonality that promotes the life satisfaction associated with sharing core beliefs. The principle of homogamy is also reinforced with couples who share similar levels of religiosity or spirituality. When both partners in a marriage support each other’s spiritual standards, it can enhance their overall marital satisfaction. However, an elevated risk of divorce occurs when one partner is very religious and the other partner is not. The risk of divorce for couples who report vast differences in religiosity is even stronger than the likelihood of divorce when both spouses report that they are not religious. Another factor that may contribute to a lower risk of divorce is the religious behavior of husbands. It is believed that men who ascribe to conservative religious norms and expectations are more likely to focus on family responsibilities, be nurturing toward women and children, and fulfill household responsibilities, all actions that reduce the likelihood of divorce. There is research evidence, however, that the risk of divorce is elevated among couples in which the husband attends religious services more frequently than his wife. Furthermore, when husbands attend religious services regularly with their wives, couples report higher levels of marital quality. In addition to religious homogamy, other factors influence divorce and religion. When couples share similar knowledge and beliefs about religion, it fosters positive communication, interactions, and mutual understanding. Similar religious values and opinions shared by spouses also lead to similar behaviors and worldviews, which are mutually confirmed and supported.
Additionally, when couples’ religious views are related, it promotes joint activities, both religious and nonreligious, which can strengthen the marriage. Conversely, couples with disparate religious backgrounds and beliefs often find that these differences spill over into other areas of the relationship such as household organization, parenting, leisure, and friendships. Because religious institutions are often integrated into the broader community, participants have wider and more diverse social networks, and are more likely to give and receive social support

from religiously similar friends and family. Religious behaviors have been described as falling into two categories: intrinsic (internally motivated) and extrinsic (externally motivated) religiosity. There are benefits to extrinsic religious behaviors, through which couples and families receive additional strength and support from, as well as wholesome and fulfilling friendships within, these social networks. When marriages receive additional fortification and support through external religious activities with others who share similar beliefs (e.g., personal association and social gatherings with other religious friends outside of church worship), the relationships are more likely to be both stable and satisfying, which spills over into happier family life. This larger pool of social resources and the strong, positive social networks found within religious congregations are related to relationship quality and stability. Other reasons for the lower likelihood of divorce among highly religious couples are the higher levels of marital satisfaction that they report, and their lower levels of domestic violence. Religious couples may also be less prone to divorce because, on average, they perceive fewer attractive options outside of the marriage compared with less religious couples. More religious couples often find that their support systems are composed of fellow worshippers, and consequently they face greater social sanctions for divorcing than less religious pairs. In addition, couples who ascribe to conservative faiths may have greater intrapersonal barriers to seeking divorce, in that their own belief systems may make the costs of divorce seem particularly unappealing, and alternatives to marriage (e.g., being a single parent) especially stressful propositions to consider.

Effects of Divorce on Adults’ and Children’s Religiosity
Approximately half of all marriages today will end in divorce.
The process of divorce often brings negative emotional, social, and financial consequences for both adults and children. It is not uncommon for divorce to have additional adverse effects on one’s religious and spiritual well-being. Adolescents whose parents have divorced, but who view marriage as something sacred and divine, often experience an elevated risk of spiritual struggle that frequently raises questions about their faith that may have a long-term impact on their religious lives.



The strongest predictor of an adolescent’s religiosity is the religiosity of his or her parents. In addition, married parents are more likely to be religiously engaged than adults who are not married. Parental divorce often has an effect on adolescents’ religiosity, the extent of which depends on a variety of external and interpersonal factors. For adolescents raised in religious families, particularly those of Christian background, parental divorce increases the likelihood of switching to another religion or of apostasy. Many adolescents question and doubt their religious beliefs, doctrines, and principles, including whether God even exists. Youth who experience the divorce of their parents often report lower levels of religious involvement and church attendance, and more spiritual struggles, than youth from intact families. One reason for this is that divorced fathers are less engaged in the religious socialization of their children than married fathers. Practical barriers also play a role, such as disruptions in the continuation of religious practices because of changes in their parents’ attendance patterns, residential moves, or new schedules and routines that make religious attendance more difficult. Some children of divorce turn toward their faith as a way of coping. While religious service attendance may decline following divorce, many youth maintain their faith through religious or spiritual habits, such as prayer, and report spiritual growth, feeling strengthened through the adversity and sacred loss of divorce. Others who were not as religious prior to their parents’ divorce may find themselves turning to religion for comfort and to find meaning and spiritual healing. Many parents experience emotional barriers to religious involvement following divorce.
Congregations often hold marriage as a traditional ideal, causing some divorced parents to feel disconnected, unwelcome, or uncomfortable in their congregation, which also reduces their children’s religious involvement. Friends or others in the faith may also feel uncomfortable and unsure of what to say or how to offer assistance. Some churches offer divorce education classes and groups for parents and children, which are aimed at reducing the stress and stigma associated with divorce, and offer fellowship with others in similar circumstances. Churches also proactively strive to reduce the incidence of divorce by


offering formal relationship and marriage education classes, seminars, and other resources to strengthen marriage relationships. Some endorse “covenant marriage” (a higher order of marriage that is more difficult to dissolve) as a way to promote marriage and reduce the chances of divorce.

Dominant Religions’ Views of Divorce
Most Christian churches discourage divorce, but they differ in their toleration of it. For many religions, marriage is considered a sacrament, and is meant to be a special relationship between a man, a woman, and their God. Nearly all major religious denominations in the United States view marriage first and foremost as a sacred institution, in addition to being a legal and social institution, which provides benefits to adults, children, and the broader society. Conversely, divorce, from a biblical view and in other sacred texts, is perceived as a sin, and should be allowed only in limited cases, such as abuse or adultery. Traditional religious doctrines emphasize marriage and family as the fundamental unit of society. However, there are some differences in marital status among specific religious traditions. Hindus and Mormons are the most likely to be married. These two traditions also have the lowest rates of divorced members, and both highly discourage divorce. The Roman Catholic Church prohibits divorce and permits annulment, the grounds for which are determined by church authority, only after the civil divorce or annulment has taken place. The Eastern Orthodox Church allows divorce and remarriage in certain circumstances, though its rules are generally more restrictive than the civil divorce rules of most countries. Most Protestant churches discourage divorce, except as a last resort, but do not actually prohibit it through church doctrine. In general, the more conservative the religious tradition, the more divorce is discouraged.

David G. Schramm
University of Missouri
G. E. Kawika Allen
Brigham Young University

See Also: “Best Interests of the Child” Doctrine; Child Custody; Demographic Changes: Divorce Rates; Divorce and Separation; No-Fault Divorce; Nuclear Family; Shared Custody.


Further Readings
Denton, Melinda L. “Family Structure, Family Disruption, and Profiles of Adolescent Religiosity.” Journal for the Scientific Study of Religion, v.51 (2012).
Mahoney, Annette. “Religion in Families, 1999–2009: A Relational Spirituality Framework.” Journal of Marriage and Family, v.72 (2010).
Vaaler, Margaret L., Christopher G. Ellison, and Daniel A. Powers. “Religious Influences on the Risk of Marital Dissolution.” Journal of Marriage and Family, v.71 (2009).

Divorce and Separation

The dissolution of a marriage is a complex and often lengthy process. It begins with thoughts of ending the relationship, continues through divorce, and can extend long after the divorce has been granted. The effects of divorce often carry over to one’s life as a nonmarried person, depending on any shared responsibilities that the couple may still hold (e.g., child rearing and child support). Because divorce is governed by the legal system, it frequently requires interaction between spouses, mediated by legal mechanisms, to resolve issues regarding shared property, income, and children. These mechanisms can include forced separation periods of varying length, spousal support, child support, child custody, and child access. Separation and divorce affect both adults and children who are part of the process.

The Process of Divorce
Divorce is a process. Generally, before a physical separation or divorce occurs, a couple’s relationship breaks down to some degree. Relationships that end in divorce are often marred by hurtful interactions that can include disinterest, lying, threatening, contempt, criticism, arguing, lack of trust, lack of support, emotional withdrawal, and infidelity. Infidelity is one of the most destructive acts and often leads to divorce. The way in which a partner responds to hurtful behavior can vary depending on the behavior’s frequency, the way in which it is communicated, whether it is perceived as intentional, and the presence of surrounding circumstances that may compound

the issue. For example, when criticism is infrequent but accompanied by compassion and support, it is likely to be interpreted differently than an infrequent criticism delivered without compassion or support. Even though hurtful behaviors are experienced differently by each spouse, they may cumulatively eat away at the couple’s relationship, resulting in emotional distance. This distance can ignite more negative or aggressive responses, adding to the downward spiral that leads to marital dissolution. Marital satisfaction is one factor often studied by researchers, and it is a strong predictor of divorce. For example, a couple’s satisfaction with their relationship often declines early in marriage, and this decline often accelerates dramatically after the birth of the first child. Such declines are linked with a reduction in commitment, poorer communication between partners, and problems handling conflict. The effects of conflict on marital satisfaction vary over time, such that direct and open exchanges often have negative short-term effects while simultaneously helping to maintain or even improve long-term marital satisfaction. Suppressing or minimizing conflict often sustains marital satisfaction in the short term but can lead to frustration and resentment later on. In addition, changes in expectations are often linked with changes in satisfaction. For example, the division of household labor may unexpectedly fall along gendered stereotypes, to the wife’s displeasure. Furthermore, dissatisfaction often falls along a gender divide: men tend to be dissatisfied when the couple fails to share activities, and women tend to be dissatisfied when the couple fails to communicate. Yet, not all people who are dissatisfied with their marriage, have poor communication, high conflict, and/or frequent hurtful interactions end up divorced.
Some couples tolerate less-than-favorable elements of their relationship because they consider the option of divorce more distasteful than staying together. Reasons such couples cite for continuing a less-than-satisfying marriage include the length of the marriage, financial constraints, religious or moral beliefs that favor marriage, the presence of young children, and a perceived lack of viable alternatives.

Deciding to Separate
Before divorcing, a couple may pursue either an unofficial separation or a legal separation. Unofficial separations occur when couples agree to spend time



apart, often by having one partner move out. Some couples hope that this separation will provide more perspective so they can accurately assess whether it is worth addressing their problems and continuing the marriage or whether divorce is preferable. In some jurisdictions, legal separations are mandatory precursors to divorce, with variable requirements and lengths of required time. Financial arrangements and child-custody plans are a part of legal separations. Such arrangements can add structure to a separation, but they can also result in more emotional distance between the couple. The vast majority of women and men ages 15 to 44 who separate unofficially or legally end up divorced within five years. The transition from separation to divorce in the United States varies by race and other cultural factors. For example, Hispanic men between the ages of 15 and 44 who were born in the United States and separate from their spouses are more likely to experience divorce within five years than foreign-born Hispanic men of the same age group. Most separations that end in divorce do so within one year. Couples can decide to resume their marriage after a separation, but few do.

Changes in Divorce Over Time
Divorce in the United States reached an all-time high in 1979, with 2.28 percent of all married women 15 years of age and older reporting a divorce or annulment that year. This represented a substantial increase from the 1 percent who experienced a divorce or annulment in 1964. This drastic increase reflected the introduction of new, less stringent divorce laws, most notably no-fault divorce. Early in U.S. history, divorce was illegal in many places. Gradually, all jurisdictions allowed divorce, although those who followed through with it were often ostracized from their former social circles.
Before the introduction of no-fault divorce, divorces were granted only after blame had been assigned to one party, based on acceptable grounds. Such grounds varied by state and commonly included abandonment, adultery, abuse, neglect, and incarceration. It took a long time for all states to accept no-fault divorce; New York, the last state to do so, began granting such divorces in 2010. The ubiquity of divorce in recent decades has erased many of the negative associations it used to have. Many now advocate a


“good divorce” in the belief that it is better to dissolve a marriage than to continue a dysfunctional one that is harmful to both the spouses and any children who may be involved. Evidence shows that in highly contentious marriages, individual well-being can improve following divorce. Despite a growing acceptance of divorce and the high proportion of marriages ending in divorce, the choice to dissolve a marriage is neither ideal nor preferred by many. Apart from some fluctuation in the early 1980s, divorce rates have gradually declined. This gradual decline in divorce is mirrored by a similar decline in marriages. Using the crude rates, 3.6 out of every 1,000 Americans experienced a divorce in 2011, whereas 6.8 out of every 1,000 experienced a marriage. Both rates are declining, but the marriage rate is declining much more quickly, down from 8.2 marriages and 4.0 divorces per 1,000 in 2000.

Factors Associated With Divorce
Not all couples have the same level of risk for divorce. For example, couples who marry younger are more likely to divorce, especially if they marry before 20 years of age. Cohabiting before marriage can increase the likelihood of divorce, especially among those who have multiple cohabiting relationships before a first marriage. Also, second and subsequent marriages are more likely to end in divorce than first marriages. Level of education and income also significantly affect the likelihood of divorce, with low levels of education and income associated with a higher chance of divorce. Furthermore, culture, race, and ethnicity are linked with divorce; being African American increases the risk, whereas being Asian decreases the risk. Interestingly, Asian Americans remain unique in that their marriage rates are increasing, unlike those of other racial and ethnic groups. Lifestyle choices (e.g., religion and the distribution of household labor) can affect the chances of divorce.
A spouse’s participation in the labor force, whether it is a wife who resumes working full time after the birth of a child or a former breadwinner who remains unemployed for a significant period of time, can lead to more conflict and an increased risk of divorce. Infidelity, spending money that exceeds the couple’s means, and illicit behavior (e.g., substance


abuse) are also among a long list of reasons that individuals give for terminating a marriage. Those who have a child while they are married have a lower risk of divorce, but children whose parents are divorced have an increased risk of becoming divorced themselves when they grow up. Some evidence suggests that having children born outside the marriage, whether with a prior partner or a current partner, increases the likelihood of both separation and divorce. However, this is not always true among couples who have a first child while cohabiting and who then later marry. Those who have children with multiple partners are at heightened risk for divorce. Collectively, these factors can play a substantial role in the probability that a marriage remains intact; however, they are not all inclusive, and couples frequently defy or transcend their relationship circumstances.

Effects of Divorce on Adults
The process of marriage dissolution, whether by separation, divorce, or separation and then divorce, initiates changes in multiple facets of a person’s life. These changes can be gradual or rapid. Some evidence shows that the spouse who initiates the separation or divorce (which throughout U.S. history has usually been the wife) experiences fewer negative outcomes, at least early on, whereas other evidence shows that both parties experience substantial adjustment issues, especially as the divorce process advances. Ending a marriage results in the loss of important relationships and social networks, and this can make adjustment more challenging. One or both spouses may end up severing ties to family members and friends gained through the marriage. Just as marriage increases one’s social and economic resources through shared income and wider social networks, separation or divorce can deplete these resources. For example, the costs of living may increase after divorce because the couple must establish separate residences with two sets of associated costs (such as rent, mortgage, and utilities).
Shared economic investments are frequently not maintained, or the assets are divided. Often, one spouse relied on the other for financial assistance, especially if he or she did not work outside the home, and the end of this arrangement may be the beginning of a major burden for that individual. These financial burdens, and the costs associated with divorce, are reasons that some couples decide

to remain in an unhappy or less-than-satisfactory marriage. Costs for parents who take on primary custody (usually mothers) following a divorce can be especially burdensome because in many instances child support is not granted. Even when it is, it may not be paid in full or on time. Spousal support, formerly known as alimony, is not often awarded, especially in short-term marriages, and when awards are made, they also are not always paid in the full amount or on time. In separations or divorces involving children, the cost of transportation (both in terms of money and time) for shared parenting or visitation can become burdensome for both parents, as can an increased need for child care. Because women are likely to take on the primary child-rearing responsibilities, and often have lower-paying jobs than their husbands, women typically experience more financial insecurity following divorce. Although the economic aspects of divorce and separation can be more challenging for women, the physical and emotional effects of divorce can be more challenging for men. Men report more difficulty adjusting, as is evident in their higher reported levels of stress. Because children often reside with their mothers, noncustodial fathers are forced to redefine themselves and their relationships with their children. For example, custody arrangements often relegate men to weekend visitation schedules, a change perceived as a loss compared to their prior daily contact during the marriage. These changes force many fathers to reevaluate their role as a father and to direct greater effort toward behaviors that appear more attainable; this is why divorced nonresident fathers are more likely to emphasize providing for children over caregiving behaviors.
This emphasis is further reinforced by some custodial parents’ use of gatekeeping strategies, that is, restricting the nonresident parent’s access to his or her children, especially if that parent is not contributing financially to the children’s well-being. However, some fathers’ inability to contribute financially to their children’s well-being can complicate this issue. In dealing with these emotional and physical strains, men are more likely to turn to unhealthy coping methods such as alcohol use. Higher levels of depression and anxiety and reduced physical health show up more readily for men following a separation or divorce. Following divorce, the ability to engage in positive and effective parenting strategies becomes



more challenging. Difficulties resulting from the inability to effectively monitor, supervise, and discipline children, as well as inconsistencies between parenting styles, can be exacerbated as children move between households. Coparenting conflict following separation and divorce, which is common at least in the short term, serves as an additional barrier to effective parenting. The lack of contact between nonresident parents and children can be an additional burden for both parents and children, and the quality of these relationships generally decreases with less contact. Nonresident parents often adopt permissive parenting strategies in response to the limited time that they spend with their children; weekends are commonly assigned for visitation, and family life is often more relaxed and more recreationally driven on these days than on weekdays with the primary custodial parent. Thus, nonresident parents generally stray from direct caregiving or teaching during their visitation. Divorced parents may not be able to continue to behave in the same ways or use the same child-rearing methods that they did during the marriage, which has potentially harmful effects for both parents and children. Most experts recommend that both parents attempt to provide consistency between households, which can be established through frequent and positive communication.

Effects of Divorce on Children
Divorce requires that children adjust to new routines, schedules, rules, and lifestyles while dealing with the loss of time, availability, contact, and resources that were once provided by the now-nonresident parent. Acceptance of the new relationship dynamic between children and the nonresident parent can be difficult for children because the nonresident parent typically has fewer opportunities to interact with them. Even when opportunities exist, the parenting dynamic shifts because the parent has less time to care for his or her children.
This causes inconsistency between expectations and behavior, presenting a challenge to maintaining a healthy and supportive relationship. Additionally, the children’s opportunities to engage with their nonresident parent’s family, including grandparents, aunts, uncles, and cousins, are reduced. In marriages that eventually end in divorce, children commonly experience high levels of stress well before the relationship terminates. Common


feelings among children before, during, and following divorce include anger, sadness, isolation, and anxiety. These negative emotions typically subside some time after the divorce, as stability is established and the child gains appreciation of the new family environment. However, adjustment may take longer if parents fail to explain the process well to children, especially those who are too young to fully comprehend the situation. In the period immediately following divorce, the parents’ relationship is likely to be highly contentious and uncooperative. As time passes, it may become more stable and amicable. Even when this relationship retains tension, the coparenting relationship remains important. As much as engaging in positive communication is beneficial, avoiding negative communication is just as important. When the level of hostility and antagonism within the coparenting relationship is high, nonresident parent involvement dwindles. Out of concern for children’s adjustment, many states offer mandatory and/or voluntary parenting classes that emphasize effective coparenting or child-focused parenting as part of the divorce process. Furthermore, in cases in which the best interests of the child must be assessed (generally when parents cannot agree on a parenting plan), limiting transitions is usually emphasized. However, an environment that completely restricts children’s transitions is not realistic. Following separation or divorce, parents are more likely to move, get a new job, or change their work schedule, and to experience financial repercussions as a result. Also, the level of involvement and contact with family members and friends generally changes, and new friends and partners are introduced. A child’s parent entering a new romantic relationship represents yet another transition, and relationships formed after divorce (such as dating relationships, cohabitation, and remarriage) tend to be less stable and more susceptible to dissolution.
Children may experience one or both of their parents forming new relationships multiple times. With each new potential partner, children face the possibility of developing bonds with a person who will not stay in their life long term. Such transient relationships require children to continually adjust to change. Stability and limiting these transitions, although sometimes difficult in a post-divorce environment, can be beneficial to children’s adjustment.


Children whose parents divorce are at increased risk for various negative outcomes, many of which may begin in households where parents are engaged in conflict. Growing up in single-parent households or in a stepfamily is linked with children’s earlier engagement in sexual behavior; this is especially true for girls. Children are also more likely to exhibit higher levels of aggression, substance abuse, and other externalizing behavior problems. School performance can also suffer. Collectively, these factors can play a role in intergenerational divorce. For example, lower academic performance is correlated with a higher likelihood of having a marriage end in divorce, which perpetuates the cycle. Furthermore, if a child’s parents divorced, the parents’ relationship likely exhibited unhealthy characteristics compared with the relationships modeled in stable two-parent households. Learning from the model that parents present can normalize the process of divorce and increase the chances that children will someday experience a marriage ending in divorce. Not all children experience negative outcomes following divorce. In fact, in high-conflict or abusive marriages, children often adjust well and experience relief coupled with improved behavioral, emotional, and psychological outcomes once their parents separate. Divorce itself is not what affects children; instead, it is the process and the situations that surround it that can be problematic. When parents are able to establish clear roles and expectations (thereby reducing ambiguity), limit transitions, and engage in positive coparenting, children are more likely to thrive.

Anthony J. Ferraro
Kay Pasley
Florida State University

See Also: “Best Interests of the Child” Doctrine; Child Custody; Coparenting; Demographic Changes: Divorce Rates; No-Fault Divorce; Nuclear Family; Parenting; Responsible Fatherhood; Shared Custody.

Further Readings
Amato, P. R.
“Research on Divorce: Continuing Trends and New Developments.” Journal of Marriage and Family, v.72 (2010).
Kelly, J. B., and R. E. Emery. “Children’s Adjustment Following Divorce: Risk and Resilience Perspectives.” Family Relations, v.52 (2003).

Vangelisti, A. L. “Hurtful Interactions and the Dissolution of Intimacy.” In Handbook of Divorce and Relationship Dissolution, M. A. Fine and J. H. Harvey, eds. Mahwah, NJ: Lawrence Erlbaum, 2006.

Domestic Ideology

Domestic ideology is a system of ideas, substantiated by cultural assumptions and structural constraints, in which women’s identities are relegated to their roles as mothers, wives, and homemakers. According to this ideology, a woman’s proper focus is on her service to her family and home. In an appropriate division of labor, women should act as caregivers and household managers, entrusting their husbands with breadwinning responsibilities. Historically, domestic ideology has been corroborated by religious doctrine, culturally pervasive philosophies, and even science. The terms cult of domesticity, cult of true womanhood, separate spheres ideology, and cult of republican motherhood are, at times, used interchangeably with domestic ideology, though they are technically variations of domestic ideology rather than synonyms. While justifications for women’s place in the domestic sphere are plentiful in world history, considerable scholarly attention has been devoted to the politically and economically marginalizing domestic ideologies of Western society from the Victorian Era of the late 19th century through the 1950s. A number of social changes, including the shift from an agrarian to an industrial economy, the expansion of educational opportunities for women, and the increased potential for social mobility, precipitated a Victorian belief that women had a moral and religious duty to manage their homes. This belief, though not universally accepted, was culturally widespread. Prescriptive literature and religious instruction portrayed the ideal woman as pious, moral, pure, caring, domestic, and submissive. Home management and domestic advice manuals depicted an archetypal “angel in the home,” a diligent mother sacrificing her energies for the benefit of her family. Ministers preached of the dutiful woman, laboring persistently in her home, yet content in her domestic seclusion. According to Victorian domestic



ideology, woman was, by nature, more delicate and less aggressive and individualistic than her male counterpart, and her position was rightly in the home. In return for her unpaid domestic labor, she was endowed with superior virtue and protected from the ruthless capitalist marketplace. Victorian domestic ideology presupposed that respectable women married, had children, and assumed economically dependent roles within the family. Social stigma threatened any woman who rejected this arrangement. Women who did not marry, as well as those who worked, were labeled as unfeminine, irresponsible, and abnormal. Similarly, self-supporting women often confronted economic obstacles. Female employment was commonly regarded as temporary or supplementary to the husband’s living wage. As a consequence,

Homemaking guru Martha Stewart at the Vanity Fair party celebrating the 10th anniversary of the Tribeca Film Festival. Stewart has written numerous bestselling books and is the publisher of Martha Stewart Living, a magazine and television program focusing on the domestic arts.


women often faced unequal pay standards, professional exclusion, limited property rights, and hardship following desertion or widowhood. The model of the Victorian domestic woman was normative, yet unachievable for a sizable portion of American women. Prerequisites for ideal womanhood such as homeownership, family stability, and exemption from physical labor were inaccessible to African American, American Indian, immigrant, and working-class women as a result of enslavement, displacement, or poverty. Contradictorily, domestic ideology sought to indoctrinate these women with expectations of proper behavior while creating or perpetuating economic and social obstacles that prevented such realization. Ultimately, Victorian domestic ideology was fleeting, undermined by technologies that eased the burden of household labor, the rise of secular and nontraditional ideals, urbanization/suburbanization, and the emergence of home economics. In addition, the emergence of first-wave feminism in the late 19th and early 20th centuries raised the issue of suffrage and the confining domestic roles allotted to women. The cultural figure of the “new woman,” characterized by her sexual and economic freedom, arose around this time, lasting until the late 1920s. The Great Depression forced an end to this new image of womanhood, and in its place, traditional domestic ideology reemerged following World War II, and crystallized during the 1950s. This new domestic ideology affirmed femininity and self-actualization, rather than virtue, as compensation for a woman’s commitment to motherhood and marriage. The mass media, including television, radio, and women’s magazines, replaced religious literature in the propagation of a biologically based image of the ideal woman. The ideal woman embraced her innate and inalienable femininity, attending to the material needs of her home and the welfare of her husband and children in order to gain personal fulfillment. 
Women who lacked interest in these duties, or who were unfulfilled by performing them, were deemed shameful and pathological. This appraisal led many women to seek psychiatry and prescription tranquilizers to remedy what they believed to be their psychological abnormalities. Partly because of new reproductive freedoms, such as the development of the birth control pill, the legalization of


abortion, and the rise of second-wave feminism in the 1960s, the hegemony of the 1950s domestic ideal eventually faded. Still, domestic ideology has persisted in Western culture. The traditional homemaker and devoted mother ideal continues to resonate in the 21st century. Many Americans still believe that children are better off if their mothers, rather than fathers, remain in the home and do not hold full-time jobs; and on average, women make less money, hold fewer full-time positions in the workforce, and still do more housework than men. Additionally, domestic science and home economics remain stable interests in American culture. Women’s magazines such as Good Housekeeping and Martha Stewart Living, home decorating and home improvement television shows, cooking shows, and crafting classes enjoy continued popularity, though their intended audience is no longer exclusively female, and their aim is not necessarily family betterment. In recent decades, various media sources have reported that domestic ideology is experiencing a revival. These sources have highlighted increased interest in domestic activities, including canning, quilting, baking, pickling, jewelry making, chicken raising, and gardening, particularly among middle-class women in Western countries, but middle-class men increasingly share many of those interests as well. Additionally, these sources have popularized terms such as the Opt-Out Revolution and the New Domesticity to describe an allegedly growing category of women (and some men) who leave the workforce to primarily focus on domestic pursuits. While this media coverage has provoked some scholarly concern, most workforce and family research indicates that the widespread “revival” of domestic ideology is merely a social myth, or at least an exaggeration. 
Jacqueline Henke Arkansas State University Kelsey Henke University of Pittsburgh Monika Myers Arkansas State University See Also: Breadwinner-Homemaker Families; Cult of Domesticity; Feminism; Gender Roles; Gender Roles in Mass Media; Marital Division of Labor; Magazines, Women’s; Separate Spheres Ideology.

Further Readings
Friedan, Betty. The Feminine Mystique. New York: W. W. Norton, 1963.
Lachance-Grzela, Mylène, and Geneviève Bouchard. “Why Do Women Do the Lion’s Share of Housework? A Decade of Research.” Sex Roles, v.63/11–12 (2010).
Matchar, Emily. Homeward Bound: Why Women Are Embracing the New Domesticity. New York: Simon & Schuster, 2013.
Mintz, Steven, and Susan Kellogg. Domestic Revolutions: A Social History of American Family Life. New York: The Free Press, 1988.
U.S. Census Bureau. “Women in the Workforce.” http://www.census.gov/newsroom/pdf/women_workforce_slides.pdf (Accessed June 2013).
Welter, Barbara. “The Cult of True Womanhood: 1820–1860.” American Quarterly, v.18/2 (1966).

Domestic Masculinity

In the study of family and domestic life, researchers have determined that gender plays a dynamic and significant role. Certainly, much of the earlier dialogue about gender within the domestic sphere focused on femininity and how women rear children. Yet in recent decades, critics have repeatedly proposed that the close study of men’s activities in the domestic sphere has the potential to yield useful knowledge that may benefit a wide range of people. The phenomenon of studying men in this manner has been dubbed “domestic masculinity.” Many researchers use the term to explain a predominant version of men’s activities that functions as a key organizing principle not only within homes, but also at the community, state, and national level. Masculinity plays a substantial role within the household. Historically, scholars understood masculinity as the dominant gender within most social settings, but its role in the domestic sphere, where it has traditionally been less dominant, can still be especially meaningful. Masculinity has long been associated with leadership, power, and social importance, and people have strong beliefs about what constitutes proper masculine behavior in the domestic sphere.



Concerns, Debates and Stereotypes
The discussion about proper domestic masculinity is especially noticeable in the continuing debates over fathers who shirk their family duties. Numerous critics have lambasted men who abandon their children and their children’s mothers. This phenomenon demonstrates the importance that people attach to the ideal of proper fatherhood. It is also worth noting that the cultural ideal of being a good father appears in numerous contexts, ranging from governmental legislation to political campaigns, suggesting that a sizable number of people are concerned about how men fall short of cultural ideals. Moreover, these debates about fatherhood are not without problems. As experts have attempted to delineate the ideal forms of domestic masculinity, certain groups have been assailed by the dominant culture. In many mainstream dialogues, a substantial amount of criticism has been lobbed against men of color, especially African American men. Because of the persistent problems of racism and misinformation, some critics hold the false assumption that African American men are less concerned with being a father figure than men of other races and ethnicities. However, little compelling evidence supports the stereotype that so-called deadbeat dads are disproportionately men of color. Such forms of scapegoating undercut the goal of understanding how men deviate from the paradigm that many traditionalists hold up as the ideal. Certainly, there is an abundance of perspectives about what being a good father involves.

Expectations, Norms and Traditions
In the contemporary era, diverse people commonly expect the man of the house to carry out a set of tasks around the home, including protecting the home, making repairs, maintaining the lawn, and disciplining the children. This work-related dimension of domestic masculinity is particularly pronounced in rural, suburban, and working-class families.
Researchers assert that domestic masculinity played a substantive role in earlier time periods, when many men lived as farmers, ranchers, and pioneers, rooted in the American frontier and countryside. In these manifestations, men tended to the livestock outside the home, and although men may not have maintained the day-to-day activities in the home as women did, they remained the dominant figure.


However, not all men have held positions of authority or power within their homes throughout history. During the antebellum period in America, enslaved men were completely subjugated and forced to obey the wishes of white men, women, and children. They had no options or protections, and this denial of authority is believed to have negatively affected enslaved men in numerous ways. Similarly, Native American men were denied domestic autonomy when white Americans forced them to relocate to reservations and mandated that their children attend boarding schools away from home, effectively fracturing the domestic sphere of many communities.

The concept of domestic masculinity is also entwined with prevailing codes of gender and sexuality. In popular culture, masculinity is associated with the qualities of bravery, reliability, resilience, strength, and virility. Similarly, the dominant cultures of many societies, including the United States, hold that men should conform to inculcated norms of heterosexuality within the domestic sphere. Heterosexuality supposedly allows for the perpetuation of families and their traditions, although critics have recently pointed to many ways in which family and tradition are not reliant on heterosexual social arrangements. At the same time, heterosexuality has become tied to domestic masculinity through the institution of marriage, which has historically been entwined with agreements that extend beyond the boundaries of desire and love. In the past, heterosexual marriage was linked to business deals and other contractual arrangements, connecting families through capital, property, and business interests. Because of these factors, a failure to maintain a heterosexual relationship could lead to negative consequences, such as criticism, harassment, and social marginalization.
Men who deviated from dominant social mores risked losing their connections to their homes, particularly in conventional households where traditional values were held sacred.

Recent Developments

Although many people aim to maintain their familial connections, circumstances that threaten those connections often cannot be avoided or easily hidden from public view. Some personal circumstances can lead to people being isolated from
the domestic sphere. For instance, gay, bisexual, and transgender men often exhibit a form of gendered expression that is both masculine and domestic, but others perceive these individuals as deviating from acceptable norms. Yet numerous forward-thinking critics contend that this unconventional kind of domestic masculinity merits equal consideration because it creates unique versions of domesticity. Among these unconventional forms are butch women, bisexual husbands, gay dads, stay-at-home dads, and transgender men. These identities illustrate the great diversity of domestic masculine experience, and this plurality suggests that the familial experience of gender cannot be reduced to a simple logic. Denying the legitimacy of masculine diversity may be viewed as closed-mindedness, a shortsighted way of thinking about the changes taking place within the contemporary domestic sphere. While there is still a significant amount of continuity in domestic contexts, people are witnessing widespread social change, creating more leeway in the ways that people perpetuate domestic configurations.

Edward Chamberlain
University of Washington, Tacoma

See Also: Companionate Marriage; Deadbeat Dads; Domestic Ideology; Fathers' Rights; New Fatherhood; Separate Sphere Ideology; Transgender Marriage.

Further Readings

Harvey, Karen. The Little Republic: Masculinity and Domestic Authority in Eighteenth-Century Britain. New York: Oxford University Press, 2012.

Kilkey, Majella, Dianne Perrons, and Ania Plomien. Gender, Migration and Domestic Work: Masculinities, Male Labor and Fathering in the UK and USA. New York: Palgrave Macmillan, 2013.

Marangoly George, Rosemary. Keywords for American Cultural Studies. New York: New York University Press, 2007.

Moisio, Risto, Eric J. Arnould, and James W. Gentry.
"Productive Consumption in the Class-Mediated Construction of Domestic Masculinity: Do-It-Yourself (DIY) Home Improvement in Men's Identity Work." Journal of Consumer Research, v.40/2 (2013).

Domestic Partner Benefits

Domestic partner benefits in the United States can only be understood in the context of the history of domestic partnerships and civil unions; an examination of these policies, their histories, and their challenges explains the benefits themselves.

The American family has historically been defined using traditional parameters: a legally recognized marriage comprised one man and one woman. However, increasing attention to the prevalence of nontraditional couples (committed pairs, often of the same sex, who were unable to marry for a variety of reasons) and to their need for legal recognition has expanded the use of domestic partnerships. Some states and municipalities created this legal status as court orders and legislation elsewhere moved the rights of unmarried couples closer to same-sex marriage.

The absence of legal recognition has long separated unmarried partners, whether heterosexual or same sex, from the legal protections provided by federal and state governments within the United States. In recent decades, individual states have begun to recognize couples in civil unions and domestic partnerships. These arrangements differ from traditional marriage, and fewer benefits are guaranteed. Such partnerships are not equivalent to marriage, and they often require supplemental legal agreements, covering matters from child custody to health directives, to ensure, as much as possible, the benefits afforded legally married couples. Ultimately, regardless of how a particular state defines domestic partnerships, and however close these come to the benefits of legal marriage within the state, federal marriage benefits are not available. Historically, domestic partnerships represented a significant step in the movement toward same-sex marriage by providing limited state recognition of nontraditional committed couples. Some states have since replaced their civil union option with legalized same-sex marriage.
What Are Domestic Partnerships?

Individuals in the United States who do not qualify for legally recognized state and federal marriage have options in some states; these include civil unions and domestic partnerships. Civil unions are defined as an alternative to state-recognized
same-sex marriage while providing the same protections and responsibilities as marriage to same-sex and heterosexual couples. In 2000, following resistance to same-sex marriage, Vermont became the first state to provide civil unions, offering the same state benefits as those afforded married couples. Domestic partnerships typically provide a variety of recognitions and are offered in lieu of civil unions. While these arrangements provide some protections, legally married couples and their children enjoy more than 1,000 additional federal protections. Thus, some committed partners have no option for legal recognition, whereas others have the same protections as married couples at the state level in the absence of legal state same-sex marriage.

Domestic partnerships are typically defined as legally recognized couples in committed and close personal relationships. States' civil unions and domestic partnerships vary; there is no standard such as that of legal marriage at the state and federal levels. Some states have structured domestic partnerships to provide almost the same benefits as marriage, whereas others provide fewer rights. Domestic partnerships and civil unions are also available to heterosexual couples who choose not to marry and do not have the option of common law marriage.

In the United States, same-sex marriages, like civil unions and domestic partnerships, differ from traditional marriages in that they are not recognized when a couple moves from one state to another or across international borders. This requires couples to reregister their domestic partnership in a new state, if that option is even available. When benefits and rights are lost, couples may need to make legal arrangements to protect themselves and their children.

History of Domestic Partner Benefits

In 1978, Tom Brougham began working for the city of Berkeley, California. He realized that he could not provide medical and dental benefits to his life partner, Barry Warner.
Together, they worked to deconstruct the elements of marriage and to reconfigure it without the element that prevented them from marrying; from this came a proposal to offer domestic partnerships, a marriage equivalent for nonheterosexual couples. The policy was adopted in July 1984 but was not implemented due to fears about health care costs. The domestic partner benefits policy was the primary campaign issue in the November 1984
city council election, and all candidates who opposed the policy were defeated. Berkeley offered benefits incrementally, and by 1986, same-sex domestic partners could receive health coverage through the covered partner's policy. In 1985, West Hollywood, a newly incorporated city, legalized domestic partnerships.

Throughout the 1980s and 1990s, increasing numbers of U.S. cities began to offer domestic partnership benefits to their employees, and gay activists encouraged unions to negotiate these benefits for them. Lotus became the first publicly traded company to offer such benefits in 1992. The same year, the University of Chicago and Stanford University announced plans to offer these benefits to students, staff, and faculty. More than half of all Fortune 500 companies now offer domestic partner benefits. In 1993, Mayor David Dinkins of New York City issued an executive order requiring the city to provide a registry for unmarried gay, lesbian, and heterosexual couples; his successor, Mayor Rudolph Giuliani, signed the domestic partnership law into effect in 1997. In 1999, California became the first state to provide a statewide domestic partner registry recognizing same-sex couples. While few marriage rights were included, it was a start: the goal was to lay a foundation for providing all traditional marriage benefits, which was achieved in 2003.

What Are Domestic Partnership Benefits?

Domestic partner benefits are provided to registered couples within states offering and/or recognizing civil unions and domestic partnerships. These benefits are not guaranteed or built upon by the federal government, are not consistent from state to state, and are not equal to those provided and guaranteed by federally recognized marriages. Such benefits include rights of inheritance, child custody, claims to rental leases in a deceased partner's name, hospital visitation, family health insurance, and Social Security benefits, as well as automatic medical proxy status.
These benefits fill a gap in medical and inheritance protections; however, many gaps remain, including immigration issues. At a societal level, since it was first introduced, the concept of nontraditional legal unions, with limited benefits and recognition, has gained acceptance. J. Sammer and S. Miller found that many employers offered benefits to same-sex domestic partners who were unable to marry, but not to heterosexual couples in domestic partnerships.


Further complications arise when couples in states offering neither civil unions nor domestic partnerships seek spousal support on the grounds that they have been in long-term relationships, well beyond the nine-month marriage requirement for Social Security benefits, despite the absence of legal documentation. Money is at the heart of such disputes. Domestic partnerships and civil unions are not recognized outside of the state in which they are granted, and numerous other federal laws and regulations provide or restrict benefits, designating only legally married spouses and children as beneficiaries.

Who Is Impacted?

Individuals, couples, and their children are affected by the availability or absence of domestic partnerships in their state. The momentum of domestic partnership legislation and benefits propelled efforts to seek federal recognition of gay marriage. The Defense of Marriage Act (DOMA) was passed by Congress and signed into law in 1996 by President Bill Clinton, who stated that he did so reluctantly. This law further delayed the possibility of federally recognized gay marriages by encouraging some states to pass new legislation legally defining marriage as only between a man and a woman; other states recognized domestic partnerships and, eventually, same-sex marriages.

An article published in the New York Times in 2014 highlighted the continuing inequity of benefits accorded married couples and those in domestic partnerships. According to the article, Lawrence Schacht and Russell Frink Jr. met in 1953 and remained together for 58 years. They became domestic partners in 2004. In 2011, five months after they married in New York when same-sex marriage became legal, Schacht was widowed.
The Social Security Administration would not recognize his claim to spousal benefits because Schacht and his partner had not been married for nine months, the time period used to ensure that a marriage is not a false deathbed arrangement. As this case demonstrates, committed couples recognized only in civil unions or domestic partnerships currently cannot collect spousal benefits or settle the estate of a long-term partner, although many experts believe that these benefits will eventually become available to them.

Implications for the Future

As society continues to change the definition and requirements of marriage, domestic partnerships and civil unions may also change. As more individuals secure the right to marry, employers and others may be less likely to provide benefits, maintaining that people who choose not to marry when they have the legal right are in less stable relationships. Nonmarried couples may need to continue to use legal means to secure the same benefits as those who are legally married until their states offer civil unions and domestic partnerships, and eventually same-sex marriage.

Kim Lorber
Ramapo College of New Jersey

See Also: Child Custody; Civil Unions; Cohabitation; Common Law Marriage; Defense of Marriage Act; Gay and Lesbian Marriage Laws; Inheritance; Same-Sex Marriage; Social Security.

Further Readings

Human Rights Campaign. "Federal Laws Impacting Domestic Partner Benefits" (n.d.). http://www.hrc.org/resources/entry/federal-laws-impacting-domestic-partner-benefits (Accessed March 2014).

Lieber, Ron. "After 58 Years in a Couple, a Spouse Fights for Benefits." New York Times (March 21, 2014). http://www.nytimes.com/2014/03/22/your-money/a-same-sex-couple-together-for-58-years-but-husband-is-still-fighting-for-benefits.html?_r=0 (Accessed March 2014).

Sammer, J. and S. Miller. "The Future of Domestic Partner Benefits" (2013). https://www.shrm.org/hrdisciplines/benefits/Articles/Pages/Domestic-Partner-Benefits.aspx (Accessed March 2014).

Traiman, Leland. "A Brief History of Domestic Partnerships." Gay and Lesbian Review (July 1, 2008). http://www.glreview.org/article/article-635 (Accessed March 2014).

Domestic Violence

According to the National Institute of Justice (NIJ), nearly half of all women in the United States have experienced at least one form of psychological



aggression by an intimate partner. Furthermore, more than one in three women and more than one in four men in the United States will experience rape, physical violence, and/or stalking by an intimate partner in their lifetime. These statistics highlight one of the most serious yet preventable health issues in the world: domestic violence.

Domestic violence occurs across age, ethnic, gender, and economic lines, among persons with disabilities, and among both heterosexual and same-sex couples. It affects both men and women, although women are almost twice as likely to be assaulted by a partner as men are. In 2007, intimate partner violence (IPV) caused 2,340 deaths; 1,640 of the victims were female and 700 were male.

The facts uncovered by the NIJ indicate the ubiquity of domestic violence, which is the leading cause of female homicide. More than three women are murdered by their husbands or boyfriends every day, and 74 percent of all murder-suicides involve an intimate partner (spouse, common-law spouse, ex-spouse, or boyfriend/girlfriend); in 96 percent of these cases, women were killed by their intimate partners. In the majority of these relationships, the man abused the woman before the murder, although for 20 percent of women killed or severely injured, the incident was the first physical violence experienced from the abuser.

The cycle of violence often starts early in a woman's life; one in five female high school students reports being physically and/or sexually abused by a dating partner. These women are three times as likely as women without a history of violence to consider their mental health poor. Women with disabilities face a 40 percent greater risk of intimate partner violence, especially severe violence, than women without disabilities.
Furthermore, violence is often associated with rape; sexual assault or forced sex occurs in approximately 40 to 45 percent of battering relationships. However, experts caution that the true prevalence of domestic violence is unknown because many victims are afraid to disclose the abuse to others.

What Is Domestic Violence (Intimate Partner Violence)?

The World Health Organization (WHO) defines IPV as "any behavior within an intimate relationship
that causes physical, psychological, or sexual harm to those in that relationship. It includes acts of physical aggression (slapping, hitting, kicking, or beating), psychological abuse (intimidation, constant belittling, or humiliation), forced sexual intercourse or any other controlling behavior (isolating a person from friends and family, monitoring their movements, and restricting access to information or assistance)."

IPV is about one person in a relationship using behaviors to control the other person. The relationship does not have to be between a husband and a wife; it can be between couples who are dating, living together, separated, or divorced, and who are heterosexual or homosexual. The relationship does not even have to be sexual. Although most victims of IPV are female, about 20 percent are male.

Many people think of IPV as physical violence. Many perpetrators believe that they cannot be guilty of domestic violence if they do not touch the other person; likewise, many victims believe that if they are not physically battered, they are not victims of domestic violence. In this way, many people who are abused do not see themselves as victims, and many abusers do not see themselves as perpetrators. Yet IPV takes many forms, including psychological, emotional, and sexual abuse.

History of Domestic Violence

Historically, monogamous relationships designed to protect women from violation by men other than their spouses, and to guarantee husbands their rights as fathers, resulted in differential power between the spouses. In medieval times, husbands had the right to beat and even to murder their wives as long as it was for disciplinary purposes, for such behaviors as talking back, scolding or nagging, or miscarrying children. English common law, in the name of protecting the family, granted husbands the right to chastise their wives, but only "moderately"; killing a wife was excluded.
English law, which was brought to the American colonies, allowed husbands to retain their right to physically chastise their wives, as long as they did not use a stick larger than their thumb (the purported origin of the expression "the rule of thumb"). The subjugation of the wife to the husband's authority was reflected in the marriage contract. Through marriage, the woman had to give up her name, move
to her husband's home, and become his dependent. The marriage vow required the wife to "love, honor and obey" her husband. The various restrictions imposed on the wife through the marriage contract (such as an inability to own or manage property, enter into contracts, or sue) made her economically and legally dependent on her husband. This dependency was "justified" by the state's overriding interest in keeping the family intact.

The protection of the family was also the major reason for a de facto decriminalization of wife abuse. The sanctity of the family home and the adage that "a man's home is his castle" led to treating spouse abuse differently than assault between persons who were not intimates. Because the wife was viewed as belonging to her husband, what happened between them was regarded as a private matter and was not a concern of the criminal justice system, according to R. W. Dobash and R. Dobash.

A major change in the legal rights of married women in the United States occurred at the end of the 19th century. Many of the legal restrictions on them were lifted, and the right of the husband to chastise his wife was abolished. Even so, much of what is considered "domestic violence" today was considered acceptable, if not recommended, behavior a century ago, according to E. Pleck. In the late 19th century, lawmakers and judges were still considering whether a husband's physical assault on his wife was a criminal act, sufficient to serve as grounds for divorce, or merely an acceptable way of correcting her misbehavior, according to Dobash and Dobash. Relative to criminal justice, the belief that physical abuse in spousal relationships does not constitute a crime continued to guide police responses to domestic violence cases until the 1970s. As long as the chastising of women did not result in serious injury, the criminal justice system would not intervene.
The activities of the women’s movement in the 1970s, together with concurrent advocacy on behalf of victims of crime, particularly victims of rape and domestic violence, have been instrumental in changing the prevailing approach to domestic violence. They called attention to the plight of victims in the criminal justice system, especially to female victims of domestic violence and sexual assault, whose neglect and invisibility in the
criminal justice process was just surfacing. They transformed domestic violence from a private issue into a public concern, and redefined it as a crime warranting criminal justice intervention. The impunity of batterers and perpetrators of gender violence from criminal charges was challenged, and the message that violence against women is not a serious offense was reversed. No longer could perpetrators avoid responsibility for inflicting injuries on their female partners, and the legal distinction between violent acts treated as criminal toward strangers yet tolerated toward intimate partners, specifically female partners, began to fade away.

Yet the perception of wife abuse as different from other assaults retains some of its special status in criminal law. Long after wife battering was formally defined as a criminal offense, many states continued to define sexual assault or rape as criminal only when the complaining party was not the wife of the perpetrator. Some states even maintain this dual standard today.

The emergence of the battered women's shelter movement, together with grassroots advocacy organizations, produced legal and practical solutions for domestic violence victims. Short-term solutions, such as shelters to house abused women, were created, and long-term solutions, such as reorienting gender roles toward equality between the sexes and establishing legal reforms in the institution of marriage, were begun. In addition, various groups acting on behalf of women directed attention to the asymmetry in the power relationships underlying partner violence, and challenged barriers to women's rights and equality. They argued for greater social concern for women and children, and legitimized the needs of women and children who sank deeper into poverty because of unfair welfare practices that economically penalized them for the negligent behavior of their husbands.
Calls for reform of the criminal justice system followed, and efforts directed by activists, practitioners, and scholars to restructure the criminal justice response to domestic violence addressed its various components: the police, the prosecution and adjudication of domestic violence, and intervention programs for batterers. In 1996, the National Domestic Violence Hotline began operating, and in 2000, President Bill Clinton signed the Violence Against Women Act of 2000 into law, reauthorizing the original 1994 act.


Types of Intimate Partner Violence

According to L. E. Saltzman and colleagues, the following are the four main types of intimate partner violence:

• Physical violence: The intentional use of physical force with the potential for causing death, disability, injury, or harm. Physical violence includes, but is not limited to, scratching, pushing, shoving, throwing, grabbing, biting, choking, shaking, slapping, punching, burning, use of a weapon, and use of restraints or one's body, size, or strength against another person.

• Sexual violence: Can be divided into three categories: (1) the use of physical force to compel a person to engage in a sexual act against his or her will, whether or not the act is completed; (2) an attempted or completed sexual act involving a person who, because of illness, disability, or the influence of alcohol or other drugs, or because of intimidation or pressure, is unable to understand the nature or condition of the act, decline participation, or communicate unwillingness to engage in the act; and (3) abusive sexual contact.

• Threats of physical or sexual violence: Using words, gestures, or weapons to communicate the intent to cause death, disability, injury, or physical harm.

• Psychological/emotional violence: Traumatizes the victim through acts, threats of acts, or coercive tactics (e.g., humiliating the victim, controlling what the victim can and cannot do, withholding information, isolating the victim from friends and family, denying access to money or other basic resources). Abuse is considered psychological/emotional when the act, threat, or coercive tactic has been preceded by acts or threats of physical or sexual violence.

• Stalking: Often included among the types of IPV. Stalking generally refers to repeated harassment or threatening behavior that includes following a person, repeatedly appearing at a person's home or place of business uninvited, making multiple phone calls after being told to stop, leaving a person written messages
or objects, or vandalizing a person's property.

Risk Factors for Intimate Partner Violence

Research supported by the NIJ and others has identified some of the risk factors for intimate partner violence. Foremost among these is that a woman's attempt to leave an abuser was the precipitating factor in 45 percent of the murders of women by their intimate partners. Early parenthood is also a risk factor: women who had children by age 21 were twice as likely to be victims of intimate partner violence as women who were not mothers at that age, and men who had fathered children by age 21 were more than three times as likely to be abusers as men who were not fathers at that age.

There is also a significant relationship between problem drinking by male perpetrators and violence against their intimate female partners. More than two-thirds of the offenders who commit or attempt homicide used alcohol, drugs, or both during the incident; less than one-fourth of the victims did. Severe poverty and unemployment are associated with an increased risk for IPV; the lower the household income, the higher the reported IPV rates.

Ultimately, women who experience serious abuse face overwhelming mental and emotional distress. Almost half of the women reporting serious domestic violence also meet the criteria for major depression; 24 percent suffer from posttraumatic stress disorder (PTSD), and 31 percent from anxiety. Overall, those at highest risk of experiencing IPV are poor, have little education, are young adults, are female, live in high-poverty neighborhoods, and are dependent on drugs or alcohol.
Those at increased risk of becoming abusers have low incomes or are unemployed; had low academic achievement; exhibited aggressive behavior as youths; use drugs and alcohol heavily; are depressed, angry, and hostile; have a prior history of being physically abusive; have few friends and are socially isolated, emotionally dependent, and insecure; believe in male dominance; desire power and control in a relationship; and were themselves victims of child abuse.

The Cycle of Violence

In many relationships, the violence occurs as part of a cycle that can be as short as a few hours or may take days, weeks, or months to complete. The cycle
usually starts with tension building in the perpetrator, with the victim trying to keep the abuser calm. The victim often feels as if she is "walking on eggshells," but no matter what she does, the tension increases, and an incident of abuse (physical, psychological, sexual, or emotional) occurs. After an instance of abuse, a period of making up takes place, during which the abuser apologizes, promises it will never happen again, or in some cases, blames the victim, or denies that the abuse took place or was as bad as the victim claims. A period of calm usually follows, during which the abuser acts as if nothing happened and may give gifts to the victim, and the victim hopes that the abuse is over.

Four types of domestic violence have been identified based on the motivation of the aggressor and the overall pattern of the violence: coercive controlling violence, violent resistance, situational couple violence, and separation-instigated violence. The two most common forms are coercive control, which involves an escalating pattern of terrorism, and situational couple violence, which involves isolated conflict-based incidents. Violent resistance involves self-defense.

Batterer subtypes can be classified along three descriptive dimensions: (1) severity and frequency of marital violence, (2) generality of the violence (i.e., family-only or extrafamilial violence), and (3) the male batterer's psychopathology or personality disorders. Holtzworth-Munroe and Stuart (1994) suggested that these descriptive dimensions produce three major subtypes: (1) family only (moderately violent offenders), who exhibit little or no psychopathology; (2) dysphoric-borderline, who exhibit moderate to severe marital violence primarily toward their partners but also against others, and who display significant pathological traits such as jealousy; and (3) generally violent-antisocial, who show high levels of marital violence and often have criminal histories.
Janet Johnston proposes five typologies of IPV that she uses to understand families of divorce. These typologies are a useful way of understanding IPV relationships:

1. Ongoing episodic male battering: This is the pattern most represented in popular literature, the battering husband and battered wife. The men are often jealous, accusatory,

and have low frustration tolerance and poor impulse control. The victims do little to provoke the abuse, and are often surprised when the abuse happens. The attacks may be severe and life threatening. When confronted with their abuse, the perpetrators may deny it and blame the victim. 2. Female-initiated violence: These physical attacks are initiated by the woman, who feels rejected and experiences intolerable tension. Many of the male victims are passive or passive aggressive, which may provoke temper outbursts. 3. Male controlling interactive violence: This arises out of a disagreement between the partners that escalates out of control. Once the aggression begins, the male asserts control as if it were his right to control the “hysterical woman”; this might be moderate aggression and/or restraint. 4. Separation-engendered and post-divorce trauma: In these families, there is little violence until the time of separation, an experience that proves overwhelming and traumatic. 5. Psychotic and paranoid reactions: For a small number of families, violence emerges from disordered thinking, delusions, or drug-induced psychosis. Consequences of Intimate Partner Violence Some female victims report acute physical injuries, such as broken bones, bruises to the body or head, or black eyes. Others report more chronic symptoms such as headaches, sleep and appetite disturbances, sexual dysfunction, vaginal infections, or abdominal pain. Among victims who are still living with their perpetrators, high amounts of stress, fear, and anxiety are commonly reported. Depression is also common, because victims are made to feel guilty for “provoking” the abuse and are frequently subjected to intense criticism. It is reported that 60 percent of victims meet the diagnostic criteria for depression, either during or after termination of the relationship, and have a greatly increased risk of suicidality. 
In addition to depression, victims of domestic violence also commonly experience long-term anxiety and panic, and are likely to meet the diagnostic criteria for generalized anxiety disorder and panic
disorder. The most commonly referenced psychological effect of domestic violence is post-traumatic stress disorder (PTSD). IPV can lead to negative consequences in pregnant women, such as infant mortality, anemia, stillbirth, and even maternal mortality. Forced sexual contact and refusal to use birth control also can lead to unintended pregnancies. Children may witness the IPV; 3.3 million children witness domestic violence each year in the United States. Children exposed to domestic abuse during their upbringing suffer in their developmental and psychological welfare. During the mid-1990s, the adverse childhood experiences (ACE) study found that children who were exposed to domestic violence and other forms of abuse had a higher risk of developing mental and physical health problems. Exposure to domestic violence also affects how a child develops emotionally, socially, behaviorally, and cognitively. Emotional and behavioral problems that can result from domestic violence include increased aggressiveness, anxiety, and changes in how a child socializes with friends, family, and authorities. Depression, emotional insecurity, and mental health disorders can follow from these traumatic experiences. Problems with attitude and cognition at school can also develop, along with deficits in skills such as problem solving. A correlation has been found between experiencing abuse and neglect in childhood and perpetrating domestic violence and sexual abuse in adulthood.

Recent Trends in IPV

The National Crime Victimization Survey (NCVS) presented data on U.S. households from 1993 to 2010. Among the highlights: IPV in the United States declined 64 percent during that period. Four out of five victims of IPV were female, and those aged 18 to 34 generally experienced the highest rates of intimate partner violence.
Compared to every other age group, a smaller percentage of female victims ages 12 to 17 were previously victimized by the same offender. The rate of intimate partner violence for Hispanic females declined 78 percent, from 18.8 victimizations per 1,000 in 1994 to 4.1 per 1,000 in 2010. Additionally, females living in households consisting of one female adult with children experienced intimate partner violence at a rate more than 10 times higher
than households with married adults with children, and six times higher than households with one female only.

Neil Ribner
Jason Ribner
California School of Professional Psychology

See Also: Child Abuse; Elder Abuse; Rape; Teen Alcohol and Drug Abuse; Wife Battering.

Further Readings

Block, C. R. “How Can Practitioners Help an Abused Woman Lower Her Risk of Death?” NIJ Journal, v.250 (2003).
Dobash, R. E. and R. Dobash. Violence Against Wives. Somerset: Open Books, 1979.
Holtzworth-Munroe, A. and G. L. Stuart. “Typologies of Male Batterers: Three Subtypes and the Differences Among Them.” Psychological Bulletin, v.116/3 (1994).
Johnston, J. and V. Roseby. In the Name of the Child. New York: The Free Press, 1997.
Kelly, J. and M. Johnson. “Differentiation Among Types of Intimate Partner Violence: Research Update and Implications for Interventions.” Family Court Review, v.46/3 (2008).
Meisel, J., D. Chandler, and B. M. Rienzi. “Domestic Violence Prevalence and Effects on Employment in Two California TANF Populations.” Violence Against Women, v.9/10 (2003).
Moffitt, T. E. and A. Caspi. “Findings About Partner Violence From the Dunedin Multidisciplinary Health and Development Study.” Research in Brief. Washington, DC: U.S. Department of Justice, 1999.
Pleck, E. Domestic Tyranny: The Making of Social Policy Against Family Violence From Colonial Times to the Present. Oxford University Press, 1987.
Saltzman, L. E., J. L. Fanslow, P. M. McMahon, and G. A. Shelley. Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements. Atlanta: Centers for Disease Control and Prevention, 2002.
Sharps, P., J. C. Campbell, D. Campbell, F. Gary, and D. Webster. “Risky Mix: Drinking, Drug Use, and Homicide.” NIJ Journal, v.250 (2003).
Tjaden, P. and N. Thoennes. Stalking in America: Findings From the National Violence Against Women Survey. Washington, DC: Department of Justice, 1998.
Dowries

Dowries are a part of a marriage contract whereby a bride’s family agrees to provide money and/or other property to the groom and his family in exchange for the couple becoming man and wife. In U.S. history, dowries have had both economic and legal significance. From an economic perspective, dowries acted as an incentive for men to marry certain women, especially those whose families were wealthy enough to provide their daughters with an extensive dowry. Wealthy families frequently used dowries as a way to protect their land and real estate holdings by giving their daughters’ husbands access to property, which would then be passed down through their future children. Prospective brides who had somewhat questionable pasts might also be able to enter into socially advantageous marriages if their families were willing to pay enough money for their dowries. Large dowries often encouraged bridegrooms to overlook any scandalous transgressions on the part of their brides, such as premarital affairs or illegitimate children.

From a legal perspective, dowries gave the bride’s family a way to make certain that she would be provided for in the event of her husband’s death. Upon the death of a husband, English common law stated that a wife had a right to inherit one-third of her husband’s estate. Usually, she inherited the money and property that her family had conveyed to her husband as her dowry. Families could also stipulate that property conveyed to a groom’s family might only be inherited by a bride’s children in the event of her death. This ensured that a bride’s blood relatives, as opposed to the husband’s family, inherited the family’s property. In cases where brides predeceased their husbands without producing any living children, it was not uncommon for the dowry to revert to the bride’s family for the same reason.
American law initially followed this tradition before various reform movements worked to improve the legal rights of married women in the mid-19th century.

Purpose of Dowries

Historically, dowries were economic considerations negotiated prior to a bride and groom’s nuptials. The purpose of the dowry varied depending on the parties involved, but usually fell into one of three categories. One reason for a dowry was for the bride’s

family to provide gifts to the groom’s family as a social consideration. A second reason was for the bride’s family to compensate the groom and his family for their support of the marriage. The third, and most common, reason for dowries was for the bride’s family to pass on their wealth to their daughters and grandchildren. Dowries allowed for the establishment of new households in societies where a married female could not legally own property in her own name. Thus, a dowry was a way of passing on property that a woman had no other means of inheriting.

Types of Dowries

The types of dowries a bride’s family offered to the prospective groom’s family varied greatly depending upon social class, geographic location (urban versus rural communities), and the professions of the parties involved. Dowries might consist of money, land, slaves, domesticated animals, household goods, or some combination of these items. However, nothing predetermined what a dowry might or might not consist of in the colonies of British North America, and later in the United States.

Dowries, Brides, and Wives

Under English common law and the early legal traditions of the United States, when a woman married, she became “covered” by her husband’s legal identity. In such cases, a dowry was a way that her birth family could ensure that she inherited property and/or other miscellaneous goods from her parents. This practice differed from that of the colonies of other European empires during this period. For example, the laws of Spain stipulated that married women could receive a “partible inheritance.” The practice of partible inheritance meant that upon death, all of an individual’s heirs, regardless of gender or marital status, received an equal share of the deceased’s estate. While dowries received a special consideration in this formula, partible inheritance allowed married women who lived in the colonies of New Spain to retain control of property that they inherited.
Dowries and Widows

Under English common law and American jurisprudence, a widowed woman reverted from the “covered” status of a wife (feme covert) to the legal status that she had enjoyed before her marriage. As a feme sole, a widow could inherit

property. Whether they would inherit the dowry that their husbands had received upon their marriages depended upon whether their husbands left a valid will or died intestate. Because most men who died in the colonial and antebellum periods died without a will, their widows had to rely on the provisions that the law made for them. According to such legal traditions, widows had rights to three types of property from their husbands’ estates. First, a widow retained ownership of any personal goods, particularly clothing or household items. Second, she regained control of any property that she had owned prior to her marriage, or any items that she had inherited during the course of the marriage. Third, she inherited the “dower right,” a one-third life interest in her husband’s estate. It was thought that this inheritance would be enough for a widow to support herself after her husband’s death, so that she would not become a burden on the community in which she lived.

For a select number of women, widowhood enriched them to the point that they became some of the wealthiest members of colonial society. Rich widows, often made so because of inheritances from dead husbands, became sought after by men in every community. While many rich widows remarried, some did not; many of these women treasured the legal and financial independence granted to them by their widowhood. Upon their deaths, the widows usually dispersed their goods freely as they wished via arrangements provided in their wills. Many of these wills survive to the modern era, and provide some of the most interesting primary source documentation from the colonial and antebellum periods.

While dowries were common in the colonial period, they declined in popularity by the end of the 18th century as the nature of marriage changed. By the early 19th century, many men and women in the United States married for love instead of as a family-cum-business arrangement.
With the increasing popularity of these unions, also called companionate marriages, dowries continued to wane in popularity until they disappeared altogether.

Deborah L. Bauer
University of South Florida

See Also: Arranged Marriage; Divorce and Separation; Widowhood; Wills.
Further Readings

Alston, Lee J. and Owen Shapiro. “Inheritance Laws Across Colonies: Causes and Consequences.” Journal of Economic History, v.44 (1984).
Bossen, Laurel. “Toward a Theory of Marriage: The Economic Anthropology of Marriage Transactions.” Ethnology, v.27/2 (April 1988).
Cott, Nancy F. Public Vows: A History of Marriage and the Nation. Cambridge: Harvard University Press, 2000.
Degler, Carl N. At Odds: Women and the Family in America From the Revolution to the Present. New York: Oxford University Press, 1981.
Hirsch, Jennifer S. and Holly Wardlow, eds. Modern Loves: The Anthropology of Romantic Courtship and Companionate Marriage. New York: Macmillan, 2006.
O’Day, Rosemary. Women’s Agency in Early Modern Britain and the American Colonies: Patriarchy, Partnership, and Patronage. New York: Pearson Longman, 2007.
Rosen, Deborah A. “Women and Property Across Colonial America: A Comparison of Legal Systems in New Mexico and New York.” William and Mary Quarterly, v.60/2 (2003).
Salmon, Marylynn. Women and the Law of Property in Early America. Chapel Hill: University of North Carolina Press, 1989.
Shammas, Carole. “Re-Assessing the Married Women’s Property Acts.” Journal of Women’s History, v.6/1 (1994).

Dr. Phil

In 2013, nearly 4 million people a day watched Dr. Phil, a show dedicated to popular psychology and advice. Since launching his hit talk show in 2002, Dr. Phil McGraw has become one of the most popular hosts on daytime television, and one of the country’s most widely recognized mental health professionals. In addition to hosting Dr. Phil, McGraw has written several best-selling self-help books, and established the nonprofit Dr. Phil Foundation. McGraw has been at the center of several controversies and lawsuits, primarily involving accusations of unethical practice. According to Forbes magazine, McGraw makes around $50
million per year, and has a net worth of over $200 million.

Biography

Phillip Calvin McGraw was born September 1, 1950, in Vinita, Oklahoma. He grew up with three sisters, and spent the first part of his childhood in Vinita before moving to Overland Park, Kansas. He played football at the University of Tulsa, and completed his bachelor’s degree in psychology at Midwestern State University in 1975. He completed his graduate work at the University of North Texas, earning his master’s degree in experimental psychology in 1976, and his Ph.D. in clinical psychology in 1979. He then completed a one-year postdoctoral fellowship in forensic psychology at the Wilmington Institute. In 2006, McGraw did not renew his license to practice psychology, asserting that his current work is only for entertainment value. McGraw and his wife, Robin, have been married since 1976, and live in Los Angeles. They have two adult children, Jay and Jordan, and as of 2013, they had two grandchildren, Avery and London.

Road to Fame

After completing his academic training, McGraw joined his father’s private psychology practice in Wichita Falls, Texas. He then quit private practice to pursue other endeavors, including developing the Pathways self-motivation seminar, and Courtroom Sciences Inc., a company that helped lawyers use psychology to develop legal cases. It was through the latter that he met media magnate Oprah Winfrey, and helped her win a lawsuit in 1998, when she was sued for libel after making disparaging comments about beef on her television show. McGraw became a regular guest on Oprah, and launched his own daytime talk show, Dr. Phil, in 2002. Dr. Phil is an advice show that covers a variety of topics, including health, life strategies, money, sex, relationships, parenting, self-esteem, and weight. The success of the show led McGraw to launch Dr.
Phil House, a reality television show likened to Big Brother, in which participants lived together for a week and could be observed by McGraw at any time of day or night. Through the use of cameras mounted throughout the house, McGraw examined people’s behavior and offered counseling as needed. In 2008, McGraw and his son Jay created

and executive produced The Doctors, a daytime talk show that features a team of medical professionals who discuss a range of health-related topics. It was renewed through 2016. McGraw has authored several best-selling self-help books that have sold more than 25 million copies combined. Some of his more widely recognized books include Life Strategies: Doing What Works, Doing What Matters; Relationship Rescue: A Seven-Step Strategy for Reconnecting With Your Partner; Self Matters: Creating Your Life From the Inside Out; and Life Code: The New Rules for Winning in the Real World.

McGraw’s work has garnered recognition from a number of national organizations. In 2006, the American Psychological Association awarded him a presidential citation for his work highlighting important mental health issues. In recognition of his accurate depictions of mental health and substance abuse, the Entertainment Industries Council Inc. has awarded him five PRISM Awards. Dr. Phil has also been honored four times with a Gracie Award by the Alliance for Women in Media for celebrating women and making an exemplary contribution to the industry. In 2003, McGraw and his wife established the Dr. Phil Foundation, which supports initiatives focused on the emotional, mental, physical, and spiritual needs of children and their families. The foundation’s initiatives include Court-Appointed Special Advocates (CASA), Dr. Phil’s Million Dollar Challenge for the Children, and Little Kids Rock across America.

Controversies and Lawsuits

McGraw has been involved in a number of high-profile lawsuits throughout his professional career. In 2005, he was sued for making false statements related to the health benefits of a line of weight loss products that he endorsed, despite the fact that he is not a physician.
In 2006, Deepak and Satish Kalpoe, brothers who were one-time suspects in the disappearance of an Alabama teenager on vacation in Aruba, claimed that a segment based on their interview with McGraw was deceptively edited to make them look guilty. The Kalpoe brothers sued for invasion of privacy, fraud, deceit, defamation, emotional distress, and civil conspiracy. In 2008, McGraw visited pop singer Britney Spears in her hospital room, and attempted to facilitate an intervention. He openly discussed
his visit with the media, prompting criticism that he had violated doctor/patient confidentiality. Also in 2008, a producer of Dr. Phil paid $30,000 to bail out the ringleader of the Polk County 8, a group of teenagers who were videotaped beating a 16-year-old girl, in order to get an exclusive interview.

Jennifer S. Reinke
University of Wisconsin, Stout

See Also: Advice Columnists; Family Counseling; Family Therapy; Television, 2000s.

Further Readings

Day, Sherri. “Dr. Phil: Medicine Man.” New York Times. http://www.nytimes.com/2003/10/27/business/media-dr-phil-medicine-man.html (Accessed January 2014).
Dembling, Sophia, and Lisa Gutierrez. The Making of Dr. Phil: The Straight-Talking True Story of Everyone’s Favorite Therapist. New York: Wiley, 2005.
Peyser, Marc. “Paging Doctor Phil.” Newsweek (September 1, 2002). http://www.newsweek.com/paging-doctor-phil-144717 (Accessed January 2014).

Dr. Ruth

Dr. Ruth Westheimer is a popular sex and relationship therapist who has frequently appeared on radio and television since the 1980s. Additionally, she has authored many books and newspaper columns, and is a popular public speaker. Westheimer was born Karola Ruth Siegel in 1928 in Germany. According to her autobiography, All in a Lifetime, she and other Jewish children were sent from Germany to Switzerland prior to the outbreak of World War II, and she spent the duration of the war in an orphanage. Both of her parents died in a concentration camp (she considers Auschwitz the most likely location). She later lived in Israel and France, before moving to New York City in 1956. She married her third husband, Manfred Westheimer, in 1961. In the late 1960s, Westheimer became the program director for a Planned Parenthood clinic in Harlem while she studied at Columbia University in the evenings, focusing on family and sex counseling.
She received her Ed.D. in 1970, and became an associate professor at Lehman College in the Bronx, where she specialized in teaching sex counseling. In the late 1970s, Westheimer gave a lecture to an audience of broadcasters, which led to an opportunity to appear on local radio in New York. From there, her reputation grew.

Media Impact

Westheimer places great importance on providing accurate factual information to her listeners and readers in order to dispel entrenched myths, such as the myth that a woman cannot get pregnant the first time she has sex. Westheimer’s shows were noteworthy in the 1980s for her frank discussions and unabashed use of graphic and medical terms in discussing topics that many considered taboo at the time, such as masturbation, homosexuality, and anal and oral sex. She told the Los Angeles Times in a 1985 interview that, thanks to the work of many sex therapists and the researchers Masters and Johnson, “we can now talk about every issue of sex without question.” Westheimer’s rise to prominence via radio and television in the early 1980s coincided with the early stages of the AIDS epidemic. It is perhaps for this reason that she sometimes asked callers who had described themselves as sexually active what types of contraception and protection from sexually transmitted diseases they were using. Though Westheimer typically used clinical terms, a seeming anomaly was that she sometimes referred to AIDS as “that dreadful disease.” Between 1984 and 1991, Westheimer hosted a succession of television shows on the Lifetime cable network, with names such as “Good Sex! With Dr. Ruth Westheimer” and “The Dr. Ruth Show.” These shows, which varied in format and episode length, attracted an estimated 2 million viewers per week. She has also been a prolific writer, providing advice via the syndicated newspaper column “Ask Dr.
Ruth,” and writing or cowriting numerous books on sexuality and relationships (including First Love: A Young People’s Guide to Sexual Information, 1986; Dr. Ruth’s Guide for Married Lovers, 1992; Dr. Ruth’s Sex After 50: Revving up the Romance, Passion & Excitement!, 2005; and Sexually Speaking: What Every Woman Needs to Know About Sexual Health, 2012). Her textbook, Human Sexuality: A Psychosocial Perspective, cowritten with Sanford Lopater, is on the reading lists of
many college courses. Westheimer’s books have also branched beyond her sexuality specialty, covering topics such as grandparenting and providing care for patients with Alzheimer’s disease.

Cultural Icon

Westheimer is instantly recognizable because of her German accent, diminutive height, and ribald sense of humor. She considers the withdrawal method of birth control very ineffective, and conveys this message during speaking engagements by asking the audience how many sperm cells are needed to impregnate a woman. “One fast one!” is her typical response. On her radio shows, whenever a woman caller describes a relationship with a boyfriend or husband that Westheimer feels is untenable, she plays a clip from the song “I’m Gonna Wash That Man Right Outa My Hair.” Westheimer’s colorful personality makes her a frequent target of parody sketches.

Westheimer has continued making guest television appearances well into her 80s, including on The Today Show and The Doctors. She maintains a YouTube channel, on which 30-second videos are posted of her answering questions submitted via her Web site. During her 2012 Today Show appearance, Westheimer addressed the beneficial aspects she saw resulting from increased public discourse on sexuality over the previous 30 years. Said Westheimer, “We are better off. . . . We can prove that there are [fewer] women who haven’t heard the message . . . that they have to take responsibility for their own sexual satisfaction.”

From 1990 to 2009, several positive trends emerged among teenagers that indicated a marked reduction in sexual risk taking. These include a decline in teen pregnancy, increased condom use, and a decreased prevalence of having multiple sex partners. It is extremely difficult to determine the reasons for these changes, but given the millions of people exposed to Dr. Ruth Westheimer’s sex advice, one cannot rule out the possibility that she and other media-based sex educators have contributed to these positive developments.
Alan Reifman
Texas Tech University

See Also: Advice Columnists; Radio, 1971 to 2013; Mothers in the Workforce; Television, 1980s.

Further Readings

Dullea, Georgia. “Therapist to Therapist: Analyzing Dr. Ruth.” New York Times (October 26, 1987). http://www.nytimes.com/1987/10/26/style/therapist-to-therapist-analyzing-dr-ruth.html (Accessed January 2014).
Gudelunas, David. Confidential to America. New Brunswick, NJ: Transaction, 2008.
Kogan, Rick. America’s Mom: The Life, Lessons, and Legacy of Ann Landers. New York: HarperCollins, 2003.
Melody, M. E. and L. M. Peterson. Teaching America About Sex: Marriage Guides and Sex Manuals From the Late Victorians to Dr. Ruth. New York: New York University Press, 1999.
Morris, Bob. “At Home With: Dr. Ruth Westheimer; The Bible as Sex Manual?” New York Times (December 21, 1995). http://www.nytimes.com/1995/12/21/garden/at-home-with-dr-ruth-westheimer-the-bible-as-sex-manual.html (Accessed January 2014).

DREAM Act

The Development, Relief, and Education for Alien Minors (DREAM) Act is a bipartisan legislative proposal that aims to provide a path to legal permanent residency for eligible undocumented individuals. It targets immigrants who moved to the United States with their families when they were children. It was first introduced in Congress on August 1, 2001, by Representatives Howard Berman (D-CA) and Christopher Cannon (R-UT) in the House of Representatives, and by Senators Orrin Hatch (R-UT) and Richard Durbin (D-IL) in the Senate. Several bills have been introduced since 2001, but none had become law as of early 2014. A recent version of the DREAM Act was introduced in the House and Senate on May 11, 2011.

Legislative Background

Although comprehensive immigration reform has been considered several times in recent decades, no proposed legislation has been signed into law. This persistent legislative deadlock has pushed lawmakers to try to pass narrower reform. To this end, the DREAM Act is designed to regularize the situation of an estimated 2.1 million unauthorized
individuals who are currently students or have recently graduated. The proposal is considered necessary in order to continue providing opportunities for undocumented students after they graduate from high school. Since the 1982 Plyler v. Doe Supreme Court decision, states have been required to provide free public education through high school to all students, including undocumented immigrants. However, under Section 505 of the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996, these students are not eligible for in-state tuition to attend college, and cannot receive federal student loans. Thus, a majority of the approximately 65,000 undocumented students who graduate from high school in the United States each year are unable to pursue higher education. They are also not allowed to join the military, and many jobs are off limits to them due to their immigration status. The DREAM Act seeks to integrate these individuals into society.

Eligibility Criteria and Process

The 2011 versions of the Senate and House DREAM Act
This includes not having been convicted of a felony offense or significant misdemeanor offense, and not being considered a security threat. Finally, there is an age requirement that these individuals need to fulfill. Under the Senate version of the DREAM Act, they would need to be under the age of 36 when the bill becomes law, and under the House version they would need to be under the age of 33.

At the end of the conditional period, these individuals may be granted unrestricted lawful permanent resident (LPR) status if they have continuously lived in the United States, maintained good moral character, and have either graduated from a two-year higher education institution, studied toward a bachelor’s degree, or served in the armed forces for at least two years.

Support and Controversy

Proponents of the DREAM Act argue that it will allow undocumented youth to pursue higher education, and thus become economically valuable members of the communities in which they already live. Supporters believe that these young people should not be penalized for their parents’ decision to immigrate illegally. Furthermore, most of these students have weak ties to their country of origin and often consider the United States their home. Deporting these individuals to a country that they barely know is a harsh punishment for a crime that they did not personally commit. Finally, some proponents argue that the act would reduce the deficit and increase revenue, thus serving as good economic policy.

Opponents of the DREAM Act criticize it as an amnesty program. They contend that allowing unauthorized individuals to legalize their situation rewards law-breaking behavior and will encourage more undocumented immigration to the United States. They also argue that repealing Section 505 of the IIRIRA and reducing the cost of higher education for these students would represent an unnecessary burden on U.S. taxpayers.

In June 2012, after several failed attempts at passing the DREAM Act, the Obama administration announced that it would implement the Deferred Action for Childhood Arrivals (DACA) initiative, which would allow temporary relief from deportation for most of the potential beneficiaries of the DREAM Act.
The eligibility requirements to benefit from this two-year renewable initiative are similar to those established under the DREAM Act, albeit somewhat more restrictive because they include only individuals between the ages of 15 and 30. Under this initiative, the "DREAMers," as they are called, could also apply for employment authorization. In essence, the DREAM Act epitomizes the debate over immigration reform in the United States and the difficulty of obtaining consensus on immigration issues. The DREAMers, and all potential beneficiaries of the DREAM Act, belong to a 1.5 generation that is neither fully American nor fully anything else. The DREAM Act is an attempt to integrate this generation into American society.

Marie L. Mallet
Harvard University

See Also: Immigrant Children; Immigrant Families; Immigration Policy; Latino Families.

Further Readings
Bruno, Andorra. Unauthorized Alien Students: Issues and "Dream Act" Legislation. Congressional Research Service (June 19, 2012).
Carrasco, Stacy. The D.R.E.A.M. Act, Is It Just a Dream?: Latino Challenges in Public Policy. San Francisco: San Francisco State University, 2006.
Kim, Caleb. "Lost American DREAM of Undocumented Students: Understanding the DREAM (Development, Relief, and Education for Alien Minors) Act." Children & Schools, v.35/1 (2012).

Dreikurs, Rudolf

Rudolf Dreikurs was an American child psychiatrist and educator who is best known for his work related to child rearing and children's behavior. Dreikurs was born on February 8, 1897, in Vienna, Austria. Upon graduating from the University of Vienna medical school, he spent five years as an intern and resident in social psychiatry. His research led him to organize the first Mental Hygiene Committee in Austria, and to become interested in the work of Alfred Adler, a pioneering social psychologist who believed that the primary goal of all human beings was to achieve belonging and acceptance by others. As the director of a child guidance center in Vienna, Dreikurs employed Adler's methods within both the family and classroom setting. Following Adler, Dreikurs immigrated to the United States in 1937 to avoid Nazi persecution, and two years later he settled in Chicago, where he spent the remainder of his career developing Adler's theory into a

practical approach for understanding children's misbehavior. Dreikurs died on May 25, 1972, in Chicago. During his career in Chicago, Dreikurs served as a professor emeritus of psychiatry at the Chicago Medical School, and founded and served as the director of the Alfred Adler Institute of Chicago. Dreikurs also served as the editor of the Journal of Individual Psychology, and founded the North American Society of Adlerian Psychology (NASAP), which remains active in the 21st century. Dreikurs's professional accomplishments also include authoring or coauthoring over a dozen books on child rearing, most notably Children: The Challenge; Discipline Without Tears; and New Approach to Discipline: Logical Consequences. One of his most notable concepts is "the courage to be imperfect."

Mistaken Goals

Based on Adler's work, Dreikurs suggested that a human's primary motivation is a feeling of belonging. When a child feels a lack of belonging, he or she may express this through misbehaving. Dreikurs described a child's misbehavior as having one of four purposes: undue attention, struggle for power, retaliation and revenge, or complete inadequacy. These are known as "mistaken goals." Recognizing these four mistaken goals, and knowing ways to effectively address them, became one of the mainstays of Dreikurs's work. Dreikurs suggested that children are not consciously aware of these goals, and that they are "mistaken" because they result in uncooperative and troublemaking behaviors that are ineffective strategies for the child to get his or her needs met. The first mistaken goal, undue attention, refers to children who believe that their self-worth is tied to being the center of attention. Children may try to affirm their status and gain attention by engaging in inappropriate behaviors such as making silly noises, interrupting others, or constantly talking.
The second mistaken goal, the struggle for power, refers to children who believe that complying with others' requests is tantamount to submission, which will cause them to lose their sense of personal value. Children may try to gain power by engaging in behaviors such as refusing to exhibit a desired behavior, like eating dinner or going to bed when told.




The third mistaken goal is retaliation and revenge. Children seeking retaliation and revenge believe that intensifying the power struggle and hurting others as much as they feel others have hurt them will lead to feelings of significance and importance. Dreikurs suggested that children with this mistaken belief are those who need the most encouragement but who receive the least. The fourth mistaken goal is complete inadequacy. Children who entirely give up believe that becoming helpless and using their helplessness to avoid tasks will help them avoid real failure, which may be even more embarrassing. Children who feel they have exhausted their efforts in trying to belong may resort to withdrawal from the group by engaging in behaviors such as making excuses, avoiding trying, or wasting time. Dreikurs suggested that all actions that children exhibit are grounded in the idea that they seek belonging and acceptance in the group. Though well-adjusted children are able to make appropriate and positive contributions to the group, children who misbehave attain and maintain social status by defying the group's norms and expectations. For example, if a child does not feel a sense of belonging with his or her peer group at school, he or she may begin to act as a class clown (undue attention) or bully (retaliation and revenge, assuming that the child felt hurt or embarrassed by the peer) in order to attain the attention and liking of his or her peers. Dreikurs postulated that the undesirable behaviors reflective of mistaken goals are signs of a child's discouragement, and underscored the importance of encouraging children. In fact, Dreikurs believed that encouragement is more important than any other aspect of child rearing.

Jennifer S. Reinke
University of Wisconsin, Stout

See Also: Adler, Alfred; Discipline; Parent Education; Parenting; Psychoanalytic Theories.

Further Readings
Dreikurs, Rudolf, Pearl Cassell, and Eva Dreikurs Ferguson. Discipline Without Tears: How to Reduce Conflict and Establish Cooperation in the Classroom. New York: Wiley, 1972.
Dreikurs, Rudolf and Loren Grey. New Approach to Discipline: Logical Consequences. Portsmouth, NH: Hawthorn, 1968.


Terner, Janet and W. L. Pew. The Courage to Be Imperfect: The Life and Work of Rudolf Dreikurs. Portsmouth, NH: Hawthorn, 1978.

Drive-Ins

Drive-in restaurants and drive-in movie theaters originated before World War II, when widespread automobile ownership, new and improved road systems, and mass travel were becoming the norm. They continued to flourish after the war, becoming part of mid-century America's car, youth, and leisure cultures. These automobile-focused businesses operated as an economic, architectural, and societal reflection of the public's increasing mobility, informality, and desire for convenience. At such casual, affordable sites, entrepreneurs and young workers made a living by serving families, dating couples, and cruising teens hanging out. Although drive-in restaurants and movie theaters began to decline in the 1960s, drive-ins both new and old continue to attract customers today.

The first purpose-built drive-in restaurant was the 1921 Pig Stand in Dallas, which became a chain. It was followed by A&W, Steak 'N Shake, Sonic, and other chains—although most drive-ins were independently operated. Drive-in restaurants primarily opened along major thoroughfares, with elaborate designs, signage, and neon lighting attracting passersby. Buildings sometimes offered indoor dining as well, and typically featured a large outdoor canopy for weather protection. From the comfort of their cars, customers ordered simple menu items from (sometimes roller-skating) waiters or waitresses, called carhops, or through electronic speaker systems. Carhops brought patrons' food on trays that hooked to the car windows.

The first drive-in movie theater was patented and built in 1933 by Richard Hollingshead in Camden, New Jersey, as a family-friendly, come-as-you-are alternative to formal indoor theaters. Generally built on main roads on the outskirts of town, drive-in theaters promoted themselves through neon marquees and large screens (often taller than nearby structures).
Inside fenced parking lots, car occupants watched films from angled ramps, listening through speaker poles or car-radio transmissions while eating snacks from the concession stand. Movies were shown only after dark, and were usually inexpensive double features of low-budget, independent, or second-run movies. (Most non-chain drive-ins could not get first-run major studio films.)

Drive-ins of both types proved appealing to families because parents did not have to leave their children with a babysitter while enjoying an evening out. At drive-ins, in the relative privacy of a car, children did not need to be dressed up or remain on their best behavior, and babies could cry without disturbing other patrons. Business owners welcomed the young presence. Drive-in restaurants often offered lower-priced, smaller-portioned kids' meals, while drive-in theaters gave children admission discounts. Most drive-in theaters provided a screen-front playground, and some went even further in attracting a family audience, providing everything from pony rides and kiddie trains to miniature golf.

Drive-in theaters peaked in 1958, with 4,063 nationwide, while drive-in restaurants peaked in 1964, with 35,000 across the United States. Drive-ins of both types quickly declined, though, due to multiple factors. As teen cruising intensified, concerns about fighting, drinking, rowdiness, noise, and traffic turned public opinion against drive-ins as appropriate places for teenagers and families to gather. To combat the problems, some communities enacted curfews and anti-cruising ordinances aimed at keeping youth inside. Competition arose—including drive-thru fast-food chains, multiplex movie theaters, and indoor shopping malls—and became the norm in American life. New, popular compact cars lacked room for large groups of passengers and were uncomfortable to sit in for long periods. Interstate highways often bypassed the major roads where drive-ins stood.
Suburban sprawl brought higher property taxes to some drive-ins, and they became targets of land developers. Selling was especially appealing to owners because most drive-ins operated seasonally (except in warm, dry areas like the Sun Belt), leaving the properties unused and unprofitable much of the year. Drive-ins tried various survival tactics. Many drive-in restaurants with interior seating eliminated carhops. Drive-in theaters often placed additional screens around their perimeters. Some attracted a new audience with X-rated films; others added daytime swap meets. Nevertheless, both drive-in types closed en masse across the United States. Many sat vacant and decayed, while others were demolished for new

development. Between 1978 and 1988 alone, over 1,000 drive-in screens went dark. By 2013, only 355 drive-in theaters remained nationally—less than one-tenth of their peak number. However, a nostalgia-fueled comeback has occurred. Since 1990, 42 new drive-in theaters have been built, and 63 closed drive-ins have reopened. A number of vacant drive-in restaurants have reopened, and new ones have come into existence (spearheaded by Sonic, the nation's largest drive-in chain). Classic car clubs host drive-in cruise nights, young hipsters champion drive-ins' retro nature, and baby boomers bring children and grandchildren to enjoy the moviegoing and dining experiences of their youth.

Appreciation for drive-ins as icons of mid-20th-century American history, culture, and design has led to preservation efforts. Museums host drive-in signs and speakers, and a number of demolished drive-ins' marquees and signs have been restored onsite. Campaigns to save endangered drive-ins are common, as are fundraising projects for restorations and equipment upgrades. Some drive-ins have even received landmark designations.

Kelli Shapiro
Texas State University

See Also: Automobiles; Baby Boom Generation; Date Nights; Leisure Time; Suburban Families.

Further Readings
Heimann, Jim. Car Hops and Curb Service: A History of American Drive-in Restaurants 1920–1960. San Francisco: Chronicle, 1996.
McKeon, Elizabeth and Linda Everett. Cinema Under the Stars: America's Love Affair With Drive-In Movie Theaters. Nashville, TN: Cumberland House, 1998.
Segrave, Kerry. Drive-in Theaters: A History From Their Inception in 1933. Jefferson, NC: McFarland, 1992.

Dual-Income Couples/Dual-Earner Families

Dual-income families are those in which both parents are employed and share breadwinning responsibilities. Since the 1970s, when women began entering the workforce in increasing numbers, dual-income couples and families have changed family life in innumerable ways. In 1970, about 30 percent of U.S. families considered themselves dual income, and as of 2014, this figure has risen to about 70 percent. Dual-income families with children are the most common family structure today, and working women typically account for about 40 percent of a family's total income in these families.

The increase of women in the workforce and the effects of this trend on families have garnered much attention. However, women have always participated in the workforce to varying degrees throughout history. Working-class women have served as housekeepers and nannies, and have historically put in long hours for little pay. Middle-class women worked as teachers, shopkeepers, nurses, and secretaries for generations. Additionally, women took up "men's work" during wartime, and prior to the Industrial Revolution, women farmed alongside the other members of their families. Thus, while women have a long history of participating in the paid workforce, some significant changes in their motivations for taking up employed positions have occurred in recent decades. Inflation and cost-of-living increases, especially during the 1970s and 1980s, required an additional income for families to maintain their standard of living. In 2010, one-third of married couples required two incomes to earn a living wage of $25,000 to $50,000.

Despite the dominant discourse about the negative social impacts of dual-income families, studies repeatedly suggest that most dual-income families thrive. The dual-income family structure produces a number of benefits for families. First, although some media discourses depict the children of working mothers as neglected, most children are not negatively impacted by their mother's participation in the paid workforce, and they may even benefit from the dual-income family structure. Dual-income children report the same level of closeness with their parents as children with stay-at-home mothers. Three-quarters of children growing up in dual-income families hope to have a dual-income family someday, and nine out of 10 children from single-income homes also want a dual-income family later in life. Thus, most 21st-century children want to have the best of both worlds, and largely plan to work and raise children together with their future partners. However, the desire for a stay-at-home mother persists for 33 percent of men and 15 percent of women. In general, those seeking this style of family life do not oppose dual-income families, and many believe they may have a dual-income family, even if they would ultimately prefer a traditional arrangement. Second, women's wages increase the family income, which staves off poverty for some and creates opportunities for enriching experiences for others. Women's earnings are the primary reason that contemporary households have maintained a standard of living similar to that of families of the past.

Misconceptions About Dual-Income Families

A number of myths and stereotypes exist about dual-income families. The media frequently portray working mothers as harried and hurried, less committed to their careers, or unable to properly attend to their partners and children. Many traditionalists believe that working mothers harm their young children by placing them in daycare centers and by not providing the kind of consistent care and availability that young children require. This view adds pressure to the lives of already-busy working women. Working mothers are sometimes portrayed as selfish, materialistic, or overly ambitious, and this causes many to experience anxiety, guilt, and pressure to overcompensate when they spend time with their children.

Finding Time

Family time in dual-earner households is frequently depicted as rushed and in short supply. Pressure for intensive parenting does make many working parents feel as if they do not have enough time for their children. This is especially true for working mothers, who remain disproportionately responsible for much of the day-to-day parenting that children require. Working women also claim that they do not get to spend enough time with their spouses. However, the children and spouses of working women do believe that they get enough time together. Thus, time and how it is perceived is contextual and subjective. However it is perceived, family time is at a premium for dual-income families, which require strategic scheduling and creative time management to provide quality childcare while parents maintain their jobs. Although working mothers do much less housework than stay-at-home mothers, they do significantly more than their employed husbands and partners. Dual-income couples typically work about 80 hours a week combined, though many couples, particularly high earners, may work many more hours than this.

On the surface, it would seem that a stay-at-home parent would spend more time with his or her children, but data suggest that working and stay-at-home parents spend about the same amount of time with their children. Working parents gain time by giving up other activities, such as sleep, housework, socializing, volunteering, and hobbies, in order to spend time parenting. Additionally, U.S. parents take less vacation and work longer hours than parents in any other country. In addition to giving up extracurricular life activities, dual-income couples make up time by working together to care for children. In most traditional families, fathers do much less child care than mothers; however, fathers with working partners contribute more hours to child care and housework. Thus, ironically, children of dual-income parents actually get more time with their parents. Dual-income parents also share their burdens with outside helpers. For example, they tend to use cleaning services more often than traditional families, and eat meals out more frequently.

All mothers spend more time with their children now than in previous decades, as a result of the movement toward intensive parenting. This trend demands that parents focus their time on their children by enrolling them in sports, music lessons, and dance classes. Fathers' time spent with children has increased as well, growing steadily from 1985 to the present. Part of this time increase is attributed to family planning. Many couples wait to become parents until after they have established their careers and become financially stable.
This trend opens up opportunities for working parents to have more flexibility in when and how they work. A second way that family planning affects family time is through family size. Families are significantly smaller than in previous generations, so parents have more time to devote to each child.

Sociologist Arlie Hochschild described working women's time spent on housework and child care as the "second shift." The second shift begins when working mothers come home and begin their unpaid work, such as cooking, laundry, and child care. Employed women spend about 62 hours per week on household tasks, compared to only 21 hours per week for employed men. Married women with young children continue to take on a full two-thirds of all household chores, while their male counterparts assume the remaining one-third. For dual-income couples, women perform about twice as much child care and housework as men, but men put in more paid labor hours. Total hours worked, including paid employment, housework, and child care, are about the same for both partners in dual-income couples in the United States because women average fewer hours in paid employment.

Female Breadwinners

Women who make more money than their husbands are a point of interest and controversy. Throughout history, most women in the workforce have been considered secondary earners, not primary breadwinners, whose income simply "helps out" with family expenses. But in recent decades, many households have adopted a female-breadwinner framework, in which the mother works while the father assumes the role of primary child care provider. Popular media suggest that this family structure can destroy male egos and marriages, and might adversely affect children. While research suggests that most couples who adopt this model are happy with their decisions, there are differences in the balance of power in these marital relationships (although the breadwinning mother is still more likely to do more housework). Still, these changes in power are slight, and men retain the majority of power in female-breadwinner homes, as in dual-income and traditional family structures.

Dual Income, No Kids

One in five contemporary couples is a dual-income couple without children. "DINKs" (Dual Income, No Kids) or "DINKYs" (Dual Income, No Kids Yet) are childless (sometimes referred to as childfree) couples.
DINKs tend to be highly educated, have high incomes, and often lead high-consumption lifestyles. Couples choosing to remain childless may have political or medical reasons for their lifestyle. For example, many childfree couples believe that the world is overpopulated or inherently immoral, or that there are too many unwanted children in the world to purposefully have more. Medical reasons might include genetic disorders or irrational fears about pregnancy and childbirth. Still other DINKs believe that they are more suited for work than parenting, dislike children, or believe that they are too old or not economically stable enough for children. Some are happy with their lives and do not wish to change their lifestyles, or are already serving as caregivers for their parents or siblings. While the reasons for remaining intentionally childless vary greatly, not all DINKs are childless by choice or are permanently childless. Some couples are unable to have children, and many couples put off having children until they feel economically comfortable.

Challenges

Dual-income families face a number of challenges. Some frequently cited issues include power struggles and jealousy over spouses' career success, economic uncertainty, spousal neglect, and financial decisions. However, the most readily discussed challenge for dual-income couples is work–life balance. Managing work and family commitments is particularly difficult, and requires that both partners negotiate workable arrangements with each other. Scheduling conflicts are continual points of stress for dual-income couples, as is unreliable, unavailable, or unaffordable childcare. Tension may arise over expectations about how housework and childcare will be divided, and how to make these life responsibilities more equal. Finally, working mothers and fathers face incredible pressure to "have it all" and to "do it all." Expectations for intensive parenting, paired with demanding career commitments, make work–life balance a daily challenge for dual-income parents.


Dual-income parents frequently use multiple strategies to manage their work–life issues. They also make decisions about how to arrange their schedules and responsibilities through trial and error, experimenting with different versions of scheduling and task roles. Notably, these families may not always be dual income. Sometimes, parents take turns working, focusing on one career and then the other over a period of years. Thus, the trial-and-error experiments of dual-income couples may involve slight changes in who leaves for work first, or they may involve deciding which career will continue while the other temporarily ends. Striking a balance between paid employment and quality family time is an ongoing pursuit for dual-income couples.

Sarah Jane Blithe
University of Nevada, Reno

See Also: Egalitarian Marriages; Family Consumption; Gender Roles; Living Wage; Marital Division of Labor.

Further Readings
Bianchi, S., J. Robinson, and M. Milkie. Changing Rhythms of American Family Life. New York: Russell Sage Foundation, 2006.
Hochschild, Arlie. The Second Shift: Working Parents and the Revolution at Home. New York: Penguin, 1989.
Moe, K. and D. Shandy. Glass Ceilings & 100-Hour Couples: What the Opt-Out Phenomenon Can Teach Us About Work and Family. Athens: University of Georgia Press, 2010.
Tichenor, V. Earning More and Getting Less: Why Successful Wives Can't Buy Equality. New Brunswick, NJ: Rutgers University Press, 2005.

E

Earned Income Tax Credit

The Earned Income Tax Credit (EITC) is a federal tax credit for low- to moderate-income individuals and families. The amount of the credit depends on income and family size. The credit is refundable, which means that it can not only reduce or eliminate a family's tax liability but also produce a refund paid to the family when the credit exceeds the taxes owed. The EITC is the largest antipoverty program in the United States, as well as the largest cash transfer tool for low-income working families with children. The credit reduces the overall poverty rate by about 10 percent, and has a greater effect on the poverty rate than any other antipoverty approach. The "plateau" structure of how the credit is paid generally incentivizes work for low-income families; consequently, the program has fairly consistently received support from across the political spectrum. Although calls for reform posit that the yearly "windfall" payment of the EITC, as opposed to payment being spread out throughout the year, can leave families vulnerable to economic shocks and instability, the data largely show positive benefits associated with the program. EITC payments have been correlated with better health outcomes, school performance, and work trajectories for the families receiving them.

History of the Credit

The EITC was created in 1975 as a modest credit intended to offset Social Security taxes paid by low-income workers and incentivize work, and in 1978, the credit was made a permanent part of the U.S. tax code. Legislation in 1984 and 1986 significantly expanded the credit and indexed it for inflation starting in 1987. In the 1990s, lawmakers continued to favor work-based income transfers (as opposed to entitlement-based income transfers), and in 1996, welfare reform legislation replaced Aid to Families with Dependent Children (AFDC) with Temporary Assistance for Needy Families (TANF), which dramatically reduced the scope of welfare and strengthened a welfare-to-work approach. Against this backdrop, politicians praised the EITC's incentivizing of work, and the credit was significantly expanded during this time, eventually coming to replace traditional welfare as an antipoverty method. Between 1990 and 1997, the credit was expanded three times, with eligibility added for certain childless workers and a supplemental credit allowed for families with two or more children. Studies of this time period suggested that the expansion of the EITC positively affected employment rates among single mothers, more than welfare reforms or gains in the economy. Although many agreed that the credit provided incentives for employment, critics noted a disincentive toward marriage, in that single workers with kids could face a lower credit upon marriage, especially to a partner who also had children. Revisions to the credit in 2001 lessened the marriage penalty and improved compliance measures. In 2009, the marriage penalty was further lessened by creating a third tier of the credit for families with three or more children. A strict interpretation of economic theory may suggest that enlarging the credit for families with still more children may incentivize childbirth among low- to moderate-income families; however, studies have not shown a correlation between the expanding EITC and increased fertility. Three decades of expanding the credit, partly in response to the political will to replace welfare as it had come to be known, saw the number of individuals and families receiving the credit grow considerably, with nearly 60 million recipients benefiting from the credit by the end of 2010. The credit, particularly its refundable nature, can provide a significant increase in earnings for lower-income families. For example, an unmarried minimum-wage worker with two children can see a 40 percent increase in her yearly earnings because of the EITC.

How the Credit Works

To receive the EITC, an individual or married couple must file a federal income tax return that meets several criteria. As the name implies, filers need to have had earned income for the year in question. Earned income includes all salary, wages, and tips; self-employment income; certain military pay; certain strike benefits; some long-term disability benefits; and certain savings-plan contributions made by filers' employers. Benefits such as food stamps or Social Security income do not affect EITC eligibility. Filers also need to have low to moderate income, and there are limits on how much investment income filers may have. The yearly income cutoffs depend on how big a household is, with higher income cutoffs for bigger families.
Income cutoffs change yearly, but are generally around 125 percent of the federal poverty level (FPL) for individuals or married couples without children, and around 225 percent of the FPL for individuals and families with children. Families with children receive a higher credit than families without children. Qualifying children include sons, daughters, stepchildren, grandchildren, and adopted children, as long as they lived

with the filer for more than half the year. Brothers, sisters, stepbrothers, or stepsisters—as well as descendants of such relatives—can be claimed as foster children if they lived with the taxpayer more than half the year and were cared for as members of the family. Other children may qualify as foster children, but only if they are placed with the worker by an authorized placement agency. A qualifying child must be either less than 19 years old, a full-time student under 24 years old, or totally and permanently disabled (no matter the age). If a lower-income individual or married couple with earned income does not have children, they may still be eligible for the credit, provided they are between 25 and 65 years of age, have lived in the United States for more than half the year, and cannot be claimed as a dependent on someone else's tax return. Also, married couples must file jointly to receive the credit.

The amount of the credit depends on income and family size. The amount of the credit increases as earned income rises (i.e., is phased in) until it reaches a maximum level (i.e., plateaus), and then gradually decreases at higher income levels until the income threshold is reached (i.e., is phased out). Marriage and the presence and number of children increase the size of the credit (up to three children, after which the credit amount does not change). For example, for tax year 2013, the lowest maximum credit was $487 for a single person with no qualifying children, whereas the highest was $6,044 for a married couple with three or more children. EITC refunds are disbursed during tax time in the same manner as other refunds (e.g., direct deposit or paper check). Previously, recipients could receive advance payments of their expected EITC refund throughout the year via their paychecks, but 2010 legislation eliminated the advance payment option. Generally, the EITC does not affect other benefit eligibility.

Graham McCaulley
University of Missouri Extension

See Also: Living Wage; Poverty and Poor Families; TANF; War on Poverty; Welfare; Welfare Reform; Working-Class Families/Working Poor.

Further Readings
Athreya, Kartik, Devin Reilly, and Nicole Simpson. “Earned Income Tax Credit Recipients: Income, Marginal Tax Rates, Wealth, and Credit Constraints.” Economic Quarterly, v.96 (2010).
Hansen, Drew. “The American Invention of Child Support: Dependency and Punishment in Early American Child Support Law.” Yale Law Journal, v.108 (1999).
Meyer, Bruce D. “The Effects of the Earned Income Tax Credit and Recent Reforms.” Tax Policy and the Economy, v.24 (2010).

Easter

Easter, like Christmas, juxtaposes religious and secular rituals in ways that are no longer easy to tease apart. Easter is considered the most important religious holiday for Christians, symbolizing the resurrection of Jesus Christ from the dead, but the celebration of Easter long preceded the Christian religion. It is called a movable feast because, unlike holidays such as Christmas and Valentine’s Day, it has no fixed date; it falls somewhere between March 22 and April 25 each year.

Some believe that the word Easter is derived from Eostre, the pagan goddess of spring, who symbolized rebirth; the word estrogen shares the same root. The pagan spring rituals were related to fertility, and the eggs and rabbits commonly associated with Easter were considered symbols of fertility, although this meaning is often lost in Christian and secular rituals. The mingling of religious and pagan rituals has been common throughout history, which some believe originally served to make pagans more open to accepting Christianity, because pagans did not want to give up their rituals, such as the exchange of eggs. Colored eggs were exchanged among Egyptians and Persians, long before Easter was celebrated as a religious holiday, as a means of commemorating the coming of spring. Rabbits in ancient Egypt were likewise symbols of rebirth and of the coming of spring.

Religious Rituals
Christians celebrate Easter as the day that Christ rose from the dead, on the third day after his crucifixion, and many participate in rituals to prepare for this day. Mardi Gras, sometimes called Fat Tuesday, occurs the day before Lent, and is celebrated as a time of feasting and partying prior to the seriousness of the Lenten season. The celebration of Mardi Gras in the United States is most famously connected to New Orleans, but other communities also celebrate it in less flamboyant ways. Ash Wednesday marks the beginning of the Lenten season, the 40-day period leading up to Easter. Lent is considered a period of fasting, reflection, and penance, representing the 40 days that Jesus spent in the wilderness before starting his ministry, during which it is believed that he was tempted by the Devil; thus, Christians often “give up” something for Lent (e.g., chocolate or ice cream), and try to resist that temptation throughout the season.

The week before Easter is often referred to as Holy Week. Palm Sunday, celebrated one week before Easter Sunday, marks the beginning of Holy Week, leading up to the celebration of Jesus’s death and resurrection. Some Christians may only attend Easter services at the culmination of Holy Week. For others, Holy Week is a time of remembrance, marked with contemplative rituals honoring the sacrifice of Jesus. After Palm Sunday, the first of the remembrances of Holy Week occurs on Maundy Thursday, the Thursday before Easter, which commemorates the last supper that Jesus had with 12 of his disciples. Because the last supper occurred during the Jewish feast of Passover, some Orthodox Christians also hold traditional Jewish Seder meals, commemorating the food eaten at the Passover meal recorded in the Old Testament. The Friday following Maundy Thursday is termed Good Friday, and marks the day that Jesus was crucified. Many churches hold solemn services commemorating the death of Jesus, and normally end these services in silence. Easter Sunday celebrates the resurrection of Jesus Christ: on the third day after Good Friday, Jesus is believed to have arisen from the dead.
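The movable date mentioned above can be computed with the anonymous Gregorian computus, commonly known as the Meeus/Jones/Butcher algorithm, a standard published method. This sketch returns the month and day of Easter for any year in the Gregorian calendar; the result always falls between March 22 and April 25.

```python
def easter_date(year):
    """Month and day of Gregorian Easter (Meeus/Jones/Butcher algorithm)."""
    a = year % 19                         # year's place in the 19-year lunar cycle
    b, c = divmod(year, 100)              # century and year within the century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # ecclesiastical full moon correction
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # weekday correction
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter_date(2013))  # (3, 31): March 31, 2013
print(easter_date(2011))  # (4, 24): April 24, 2011
```

The algorithm is pure integer arithmetic, which is why the date can drift across the March 22–April 25 window from year to year without ever leaving it.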
The cross is a symbol of the crucifixion and resurrection of Jesus. Many families serve hot cross buns (usually a small cake or biscuit with a cross cut into the top, or topped with a cross made from icing). Other typical family dinners at Easter include roasted lamb (Jesus is referred to in scripture as the Lamb of God), but more often the meat of choice is ham. It is likely that ham became associated with Easter because it was preserved in the fall and was meant to last families through the winter; part of the celebration of spring was to use up the last of the cured meats. Most importantly, while the season leading up to Easter is solemn, Easter itself is a celebration of the resurrection of Jesus. Families often spend time together after Easter services, sharing a meal and participating in Easter traditions.

Secular Rituals
While Easter is not a U.S. federal holiday, it is still regarded as a major family holiday. Many families have traditions surrounding the Easter Bunny, a fictitious character that reportedly comes the night before Easter and, much like Santa Claus, leaves baskets of candy for good boys and girls. The Easter Bunny tradition originated in Germany and was brought to the United States by German immigrants in the 18th century. To support this tradition, parents of young children place candy in baskets while the children are sleeping, so that children believe the Easter Bunny has visited during the night. In recent decades, the Easter basket has evolved into a bonanza for merchants as parents have added toys and other gifts to the candy in the baskets. Some families participate in Easter egg hunts, hiding eggs filled with candy in homes, yards, or at a local park for children to discover. Chocolate bunnies and eggs, pastel marshmallow chicks, jelly beans, and other candies evolved from pagan ways of celebrating the goddess of fertility and the spring season. Today, Easter is second only to Halloween in candy consumption.

Stephanie E. Armes
Jason Hans
University of Kentucky

See Also: Christmas; Christianity; Judaism.

Further Readings
Carvalhaes, C. and P. Galbreath. “The Season of Easter: Imaginative Figurings for the Body of Christ.” Interpretation, v.65 (2011).
Hoffman, Lawrence A. and Paul F. Bradshaw. Passover and Easter: Origin and History to Modern Times. Notre Dame, IN: University of Notre Dame Press, 2000.
Lewis, E. G. In Three Days: The History and Traditions of Lent and Easter. Charleston, OR: Cape Arago Press, 2011.

Smit, R. “Maintaining Family Memories Through Symbolic Action: Young Adults’ Perceptions of Family Rituals in Their Families of Origin.” Journal of Comparative Family Studies, v.42 (2011). Wills, David W. Christianity in the United States: A Historical Survey and Interpretation. Notre Dame, IN: University of Notre Dame Press, 2005.

eBay

EBay is a worldwide online commerce Web site that facilitates consumer-to-consumer purchasing via an electronic proxy bidding system. Since eBay’s inception in 1995, the company has become a trailblazer in online commerce and has developed into one of the world’s fastest-growing and most frequently visited online shopping venues. EBay has also captured the attention of economists, marketing strategists, computer scientists, and others who have studied the company’s methods, growth strategies, and data management practices. EBay has incorporated revolutionary online marketing strategies to expand its business, including shifting from a traditional marketing approach of selling products to the public to a vision in which the company’s primary objective is to connect people who share common interests. The company’s efforts have created an international online community in which both individuals and businesses of any size can conveniently purchase and sell goods.

History
Originally founded as AuctionWeb in San Jose, California, in September 1995 by the French-born Iranian American programmer Pierre Omidyar, the company was renamed eBay in 1997, when Omidyar discovered that the URL he wanted for his consulting firm, Echo Bay Technology Group, was registered to another company. Shortly after launching his new endeavor, the site experienced a steady first-year increase in traffic that ultimately required Omidyar to implement user fees to help offset increasing operating expenses. This incorporation of fees proved an essential component of the online auction system, and remains a primary source of revenue for eBay today.

By 1996, the company had hired its first president, Jeffrey Skoll, and by November 1996, it had entered into its first third-party licensing deal, with Electronic Travel Auction. This partnership was an immediate success, and in January 1997, the site hosted more than seven times the number of auctions that it had hosted during the entire previous year. The significant increase in activity caught the interest of the venture capital firm Benchmark Capital, which made an investment of $6.7 million in 1997. Fueled by rapid growth and an influx of resources, within a year eBay had 30 employees, more than half a million users, and revenue in excess of $4.7 million. On September 21, 1998, eBay went public with an extraordinary market response that made its founders, Omidyar and Skoll, instant billionaires. Since 1998, the company has continued to expand both geographically and in services offered via strategic holdings and partnerships, including the acquisitions of GSI Commerce, Skype Limited, PayPal, Shopping.com, Half.com, and Bill Me Later. EBay now operates in 30 countries, serves hundreds of millions of registered users, has more than 31,000 employees, and in 2012 reported annual revenue of more than $14 billion.

In 2008, eBay announced its newest building on the company’s North Campus at its corporate headquarters in San Jose, California. The green building uses an array of 3,248 solar panels spanning 60,000 square feet and providing 650 kilowatts of power to eBay’s campus. Some of eBay’s office locations offer perks such as dry cleaning pickup and delivery, and oil changes while employees are working.

EBay and the American Family
One institution that has found the services offered through eBay particularly useful is the American family. The modern American family is a multifarious unit that is becoming both increasingly mobile and technologically savvy. The eBay online platform provides an optimal venue through which this evolving unit can purchase and sell goods without straining lifestyles or time constraints. Throughout history, American families have sought ways to raise funds by selling used goods that were no longer needed; however, this generally required substantial time commitments and a willingness to host sales or seek out interested purchasers within a particular geographic area. Additionally, as the need for products such as toys, books, and sporting gear arose, families were often confined to either purchasing these items new at higher prices, or embarking on the laborious task of seeking out deals for used items locally. By making it possible to efficiently buy and sell new and used items online, eBay has enabled families to more effectively satisfy product needs by completing transactions with buyers and sellers across the globe. In some cases, families have even found ways to make the revenue generated through eBay a primary source of family income. Not only has this helped alleviate conflicts between financial and time constraints, but it has also opened new opportunities for unemployed family members to contribute financially without sacrificing time from other needs or taking on the financial risks commonly associated with starting a business.

Conclusion

As eBay continues to grow as a global, multibillion-dollar company, more people are discovering new ways to incorporate the company’s online merchant platform into their daily lives. The American family has found much use for this platform because it can be easily adapted to various family circumstances. Through its shift from a traditional approach of selling products to the public toward connecting people who share common interests, eBay has created an international online community in which individuals and institutions of any size can conveniently purchase and sell goods.

Zach Valdes
Sam Houston State University

See Also: Digital Divide; Family Businesses; Information Age; Internet; Online Shopping; Personal Computers; Technology.

Further Readings
Cohen, Adam. The Perfect Store: Inside eBay. Boston: Back Bay Books, 2003.

Brown, Jennifer and John Morgan. “Reputation in Online Auctions: The Market for Trust.” California Management Review, v.49/1 (2006).
Resnick, Paul, et al. “The Value of Reputation on eBay: A Controlled Experiment.” Experimental Economics, v.9/2 (2006).

Ecological Theory

Frustrated by the lack of scientific research, developmental psychologist Urie Bronfenbrenner developed the ecological systems theory of child development in the 1970s to explain how the environment affects children’s development and growth. He later extended the model to include more far-ranging influences, such as cultural context and biological processes, and renamed it bioecological theory. Although initially most commonly applied to children, the theory can also be used to explain adults’ experiences; it is therefore now considered a lifespan approach to development.

The theory asserts that the complex reciprocal interactions that humans engage in influence development when consistently experienced over time. The characteristics of the individuals who engage in those activities, the nature of the setting where the activities take place, and the cultural and temporal contexts in which the activities occur influence development differently according to a person’s individual characteristics. Those who subscribe to an ecological model believe that from birth, humans are influenced by the world around them, and likewise influence the world in return because of the characteristics that they bring to any situation. In other words, exposure to the individuals, objects, and symbols among which development occurs is important to consider, but the context extends beyond these factors.

Bronfenbrenner described individuals’ interactive experience as a set of nested structures, or systems, that interact and affect one’s development. The original systems in his theory were the microsystem, mesosystem, exosystem, and macrosystem; he later added the chronosystem to the expanded theory. The microsystem is considered the most influential of the systems, and consists of the immediate relationships or organizations with which the child interacts.

Components of the microsystem include family, caregivers, schools, or daycare. The mesosystem describes how different parts of the microsystem work together; it connects the relationships within the microsystem to the individual by examining how those relationships impact the individual. One can say that the mesosystem is actually a system of microsystems. For example, the degree to which a family and a school interact would be an element of the mesosystem. The exosystem refers to people and places that impact an individual without significant direct interaction, and can include a parent’s workplace, extended family members, or neighbors. The macrosystem consists of the largest and most distant set of people and things that still have a great influence over an individual, including laws, cultural values, the economy, or wars. Finally, the chronosystem is based on the idea that cultures and societies are always undergoing change; it examines the events of an individual’s life, as well as the sociohistorical conditions in which development occurs. Consider how issues deemed controversial in the 1950s (such as interracial marriage) may impact one’s development differently in the 21st century, when this stigma has significantly decreased. Likewise, the impact of divorce changes over time, causing a different type of effect further into an individual’s life than it did at the beginning.

Bronfenbrenner relied heavily on his theory as he helped develop Head Start, a federal early childhood program targeting low-income children and families in the United States in the 1960s. The program has expanded substantially since its inception and continues to be a longitudinal application of the ecological theory.

Tara Newman
Stephen F. Austin State University

See Also: Bronfenbrenner, Urie; Head Start; Nature Versus Nurture Debate.

Further Readings
Bronfenbrenner, Urie. The Ecology of Human Development: Experiments by Nature and Design. Cambridge, MA: Harvard University Press, 1979.
Bronfenbrenner, Urie. Ecological Systems Theory. London: Jessica Kingsley Publishers, 1992.
Bronfenbrenner, Urie and Stephen Ceci. “Nature–Nurture Reconceptualized in Developmental Perspective: A Bioecological Model.” Psychological Review, v.101/4 (1994).
Bronfenbrenner, Urie, ed. Making Human Beings Human: Bioecological Perspectives on Human Development. Thousand Oaks, CA: Sage, 2004.

Education, College/University

Higher education began during antiquity with the formation of the Academy in Athens, Greece, founded by Plato, where students such as Aristotle learned philosophy and ethics. In the Middle Ages, this tradition of intense study spread across Europe, where cathedral schools, such as the University of Paris, and monasteries taught generations of religious scholars. In the early medieval period, schools also arose outside cathedral and monastic settings, such as the University of Bologna, founded in 1088, and Cambridge University, founded in 1231. During the Reformation, European universities were oriented toward training clergy and producing theologians, lawyers, doctors, and teachers. In addition to religion, schools of the medieval period taught the trivium, comprising grammar, logic (dialectic), and rhetoric, and the quadrivium, comprising arithmetic, geometry, music, and astronomy.

A college is an institution of higher learning, which may be an individual degree-awarding institution or part of a university that contains many colleges. Sometimes the terms college and university are used interchangeably, although “university” generally refers to an institution that provides both undergraduate and graduate programs. Junior colleges or community colleges generally offer a two-year associate’s degree, whereas four-year colleges offer a bachelor’s degree. Schools that emphasize an undergraduate liberal arts curriculum are commonly known as liberal arts colleges. Universities are further distinguished by their dual emphasis on teaching graduate classes and conducting research. Some older institutions retain the term college in their name out of respect for their history, such as Boston College, Dartmouth College, and the College of William and Mary; all of these schools are large institutions with graduate and research programs.

In the United States, higher education began in the pre-Revolutionary era with the founding of Harvard University in Cambridge, Massachusetts, in 1636, as an institution to train members of the clergy. This was followed by the founding of the College of William and Mary in Williamsburg, Virginia, in 1693, and Yale University in New Haven, Connecticut, in 1701. For many generations, only men of the highest echelons of society attended college. For example, Thomas Jefferson studied at the College of William and Mary, and later founded the University of Virginia. Most people were taught their trade through apprenticeships, and formal schooling was uncommon beyond learning the basics of reading and writing. Large academic libraries, now an indispensable part of an institution, were slow to evolve, and only became common after Jefferson founded the first such library at the University of Virginia.

The most prestigious private institutions of higher education in the United States comprise what is known as the Ivy League: Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, Princeton University, the University of Pennsylvania, and Yale University. These northeastern institutions have a reputation for highly selective admissions and academic excellence. Benjamin Franklin founded the Academy of Philadelphia in 1749, which was renamed the University of Pennsylvania in 1791, and was one of the first universities on American soil not formed for the purpose of religious scholarship. Students in these early universities often studied Greek, Latin, geometry, ancient history, logic, ethics, and rhetoric. Professions such as law and medicine were not yet taught in the university setting; instead, those wishing to become lawyers or physicians commonly learned their trade through apprenticeships.
Women’s Education
During the colonial era and continuing well into the 20th century, women’s identities were largely derived from their roles as wives and mothers, and educational opportunities for women were extremely limited. However, beginning around 1820, female seminaries became more common, as did normal schools, where young women trained to be teachers. The term seminary had no religious connotation; rather, it was akin to high school, and such institutions trained women in reading, writing, and various intellectual pursuits considered compatible with their future roles as wives and mothers. Some of these seminaries gradually developed into four-year colleges. The Girls School of the Single Sisters House, founded in 1772 in Winston-Salem, North Carolina, and the oldest surviving women’s educational institution, eventually became Salem College.

In the 21st century, only 47 women’s colleges still operate in the United States. In the second half of the 20th century, the Supreme Court determined that public single-sex universities violated the equal protection clause of the U.S. Constitution, and following this decision, many women’s colleges began to accept men. Today, an association of seven liberal arts colleges in the northeastern United States known as the “Seven Sisters” retains a student body that remains primarily female: Barnard College, Bryn Mawr College, Mount Holyoke College, Radcliffe College, Smith College, Vassar College, and Wellesley College. All were founded between 1837 and 1889.

The Rise of the Research Institute
Many large state universities were founded under the Morrill Land Grant Colleges Act of 1862. These institutions were partly funded by the government as a way to advance the nation’s agricultural and engineering technology, and to provide a practical form of higher education to a larger percentage of the general public. Many land grant universities were oriented toward biological and agricultural sciences, whereas technical institutes were oriented toward industrial research. Most of the colleges established under the act have become full universities, and many are considered top U.S. universities, including MIT and the University of California, Berkeley. During this transformation, the U.S. system introduced many features that became influential.
For example, the increasing focus on science led to the cultivation of specialized research fields, which was also a hallmark of influential European universities, including Oxford and Cambridge. The rise of the modern university in the United States is associated with the astounding growth of science and technology since the late 19th century. Academia shifted from the preservation and transmission of accepted knowledge to an emphasis on the discovery and advancement of new ideas. This included a shift from the primacy of theology and philosophy to humanism and science. The defining characteristics of the modern research university are the specialization of institutions, a shift away from overarching control by the church, the inclusion of technical studies, more centralized organization, and the formation of distinct disciplinary fields.

In 1876, Johns Hopkins University in Baltimore was founded with the intention of “[encouraging] research . . . and the advancement of individual scholars, who by their excellence will advance the sciences they pursue and the society where they dwell.” The new board found the existing models of higher education unacceptable, and decided to develop a new model. Thus, Johns Hopkins was one of the first universities in the United States to apply elements of the German university model developed by Wilhelm von Humboldt, the Prussian minister of education who founded the Humboldt-Universität in 1810 and based education on lectures in the areas of law, medicine, theology, and philosophy. This and other innovations, such as the elective system of classes, led the institution to become the country’s preeminent research university. Thirty-seven Nobel Prize winners were associated with Johns Hopkins University through 2011.

Minorities and Universities
In 1837, Richard Humphreys, a Quaker philanthropist, founded the Institute for Colored Youth, which trained freed blacks to become teachers. By 1902, 85 such schools had been set up by white philanthropists and churches to educate former slaves. Many of these institutions have become known as historically black colleges and universities (HBCUs), including Fisk University in Nashville, Florida A&M in Tallahassee, Howard University in Washington, D.C., Morehouse College and Spelman College in Atlanta, and Tuskegee University in Alabama. Prior to the 1954 Brown v. Board of Education Supreme Court decision, HBCUs were the primary option available for black students interested in attending college. Many of the HBCUs began with the task of training black teachers to work in a segregated school system, although this aim has since changed. The Higher Education Act of 1965 defines an HBCU as: “any historically black college or university that was established prior to 1964, whose principal mission was, and is, the education of black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary [of Education] to be a reliable authority as to the quality of training offered or is, according to such an agency or association, making reasonable progress toward accreditation.” In 2014, there were 106 recognized HBCUs in the United States, including public and private institutions, two-year and four-year institutions, medical schools, and community colleges.

Colleges where at least 25 percent of students are Hispanic are considered Hispanic-serving institutions (HSIs) under the Higher Education Act of 1965. Some use the term minority-serving institutions (MSIs) to signify educational establishments federally recognized under the Higher Education Act. Many HSIs are located in states with large Hispanic populations, such as California and New Mexico. The College Cost Reduction and Access Act of 2007 also acknowledges Asian American and Pacific Islander–serving institutions. American Indian–serving institutions are often tribal colleges and universities (TCUs), or institutions in which American Indian/Alaska Native students constitute at least 25 percent of total undergraduate enrollment. The first tribal college, Navajo Community College (now Diné College), was founded in 1968, and tribal colleges gained federal land-grant status in 1994; most are controlled and operated by Native American tribes. Many of these institutions are small in comparison to the land grant universities; for example, the Comanche Nation College in Lawton, Oklahoma, established in 2002, is a community college with roughly 500 students, and the Haskell Indian Nations University in Lawrence, Kansas, has 1,000 students. Many tribal colleges are located on reservations, where they provide access to postsecondary education.

The gap in college enrollment rates between black and Hispanic high school graduates and white high school graduates narrowed between 2001 and 2011. However, the enrollment gap between low-income and middle-income families increased from 42 percent in 1992 to 50 percent in 2002, and to 52 percent in 2012. The rate of middle-income students enrolling in higher education increased from 53 to 55 to 65 percent over the last three decades, whereas the rate for the highest-income high school graduates increased from 78 percent in 2002 to 82 percent in 2012. Low-income students are also over-represented in for-profit and two-year postsecondary institutions, and under-represented in four-year public and private nonprofit institutions.

Contemporary University Structure
Public universities are generally run by a president who answers to a board of regents appointed by the state government, but since the 1920s, faculty have claimed a greater role in the direction of the university. Other factors today, such as market forces and student enrollment, also shape the direction and management of these institutions. Perhaps the most sustained challenge that universities face in the 21st century is diminished public funding for higher education. This has led to greater reliance on tuition and other forms of external funding; higher tuition rates and the resulting need for students to take out loans severely affect who can afford college, as well as graduates’ financial stability. Colleges and universities today seek more income from private donors, sales of services, patents, and donations by alumni, and have also looked to industry to help fund research. Additional challenges include the massive growth of online education, student loans, and competency-based education.

A college education today has significant individual benefits. According to multiple studies over a span of 40 years, the income of an individual with a bachelor’s degree is 65 percent higher than the median earnings of those with only a high school diploma. The average person who enters college at age 18 and graduates in four years will earn enough by age 36 to compensate for being out of the labor force for four years and for borrowing the full amount of tuition. Median annual earnings for an individual with a bachelor’s degree working full time in 2011 were roughly $56,500, about $21,100 more than the median earnings of high school graduates.
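The earnings comparison above lends itself to a back-of-envelope check. The sketch below uses the article’s 2011 medians together with an assumed $40,000 of borrowed tuition (a hypothetical figure, not from the text), and ignores taxes, loan interest, and wage growth; these simplifications are why its break-even age comes out earlier than the age-36 figure cited above.

```python
# Simplified break-even for a bachelor's degree, using the article's 2011
# medians. The $40,000 of borrowed tuition is an assumed, illustrative figure.
bachelors_earnings = 56_500            # median full-time earnings, bachelor's (2011)
hs_earnings = 56_500 - 21_100          # median earnings, high school diploma
tuition_borrowed = 40_000              # assumption, not from the article

forgone_wages = 4 * hs_earnings        # wages missed during four years of study
annual_premium = bachelors_earnings - hs_earnings  # extra earnings per year

years_to_recoup = (forgone_wages + tuition_borrowed) / annual_premium
break_even_age = 18 + 4 + years_to_recoup
print(round(break_even_age, 1))        # 30.6 under these simplified assumptions
```

Adding income taxes, loan interest, and the fact that the earnings premium grows over a career moves the break-even later, toward the article’s estimate.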
Furthermore, individuals with a college education are reported to benefit from better working conditions, are more likely to enjoy employer-provided health and pension benefits, and are more satisfied with their work than those with less education. College graduates are also less likely to face unemployment.

Enrollment in colleges and universities has steadily increased in recent years. Between 1990 and 2000, enrollment in degree-granting institutions increased by 11 percent. Between 2000 and 2010, enrollment increased 37 percent, from 15.3 million to 21 million, according to the National Center for Education Statistics. Much of this growth was in full-time enrollment. During this time, the number of females enrolling in college increased 39 percent, whereas the number of males rose 35 percent. These increases reflected both population growth and overall rising rates of enrollment. Colleges and universities have also seen an increase in the enrollment of students age 25 and over in recent years.

A college education is also beneficial to society. Individuals with a college education contribute more through higher tax payments and increased civic involvement. College graduates are twice as likely to volunteer for an organization, and are more likely to vote. They are also less likely to require public assistance programs: those without a college education are six times more likely to receive benefits from the Supplemental Nutrition Assistance Program (formerly known as food stamps) and the free and reduced-price school lunch program. A 2009 study by the RAND Corporation estimated that an individual with some college through a bachelor’s degree saves taxpayers $9,000 to $32,000 over a lifetime. In spite of these benefits, institutions of higher education in the United States rely more on private funding, and receive less public funding, than those in most other developed countries.

Research indicates that a college education enhances families and family interaction. Employed mothers who earned a four-year college degree report spending roughly 51 percent more time on their children’s activities than mothers without a college degree; among mothers who are not employed, the difference is about 42 percent. In 2012, researchers found that highly educated mothers spend more time involved in active child care than mothers with less education.
In addition, highly educated mothers are said to alter the composition of time spent with their children in a manner more suitable to the child's developmental needs. Parents with a college education are also more likely to provide their children with benefits that improve their future prospects. For example, college-educated parents are more likely to be involved in their child's educational activities.

David J. Roof
Ball State University

See Also: Education, High School; Education, Postgrad; Education/Play Balance.

Further Readings
Geiger, Roger L. To Advance Knowledge: The Growth of American Research Universities, 1900–1940. New Brunswick, NJ: Transaction Publishers, 2004.
Kerr, Clark. The Uses of the University. 5th ed. Cambridge, MA: Harvard University Press, 2001.
Lovett, Bobby L. America's Historically Black Colleges: A Narrative History, 1837–2009. Macon, GA: Mercer University Press, 2011.
Rudolph, Frederick, and John R. Thelin. The American College and University: A History. Athens: University of Georgia Press, 1962.
Solomon, Barbara Miller. In the Company of Educated Women: A History of Women and Higher Education in America. New Haven, CT: Yale University Press, 1986.
Wilder, Craig Steven. Ebony and Ivy: Race, Slavery, and the Troubled History of America's Universities. New York: Bloomsbury, 2013.

Education, Elementary

Elementary education and the family unit have been intertwined since the founding of the United States. Traditionally, families have had a say in the education of their children, both with regard to the content and the manner in which it is taught. The ways in which children are schooled affect the way that they eventually interact with the world. Elementary education in the United States has largely been concerned with the needs of the individual, as well as with the skills and functions that children will one day need to become productive members of society.

In the colonial era, many families in rural and farming areas did not have access to schools. Even if they did, children were required to help with the tasks of running the farm, which prevented many from attending school on a regular basis. Similarly, the low household incomes earned by most families precluded them from paying even the most modest of fees for education. As a result, few children during the colonial era received a formal education, although many received a year or two of schooling, especially in New England, just enough to learn how to read and write. Most children were educated in the home,
through church, and during apprenticeships. As the United States grew, however, so did the demand for the education of children. Throughout this process, the question of what children must learn to function as productive members of a family and as members of the larger society informed the style and content of elementary education in the United States.

The Common School Movement
The common school movement marked the beginning of the United States' transition into the education system that exists today. Prior to the mid-1800s, school was primarily a private affair, open only to those who could afford it. Families sometimes hired tutors to instruct children, but even then, it was often only the male children who were taught. However, changes in society, including immigration, urbanization, and widening socioeconomic class divisions, threatened the structure and stability of the agrarian-centered family unit. In particular, the prevalence of crime, poverty, and alcohol and child abuse associated with mass migration from rural homesteads to hastily constructed urban areas called into question the moral standing of society. As a result, activists of the era began concerning themselves with the refortification of traditional family life. However, instead of looking at the economic reasons behind changing mores, reformers such as Horace Mann believed that a free, common education for all would provide a context for uniformly educating citizens, both academically and, more importantly, morally. Schooling thus imparted ideals about family structure, such as the roles that men and women should fill; codes of living concerning sobriety, selflessness, and abstinence before marriage made their way into the value systems of many communities. These values were instilled through poems, prose passages, Bible passages, and songs that students memorized and recited in the classroom.
By providing children with a uniform foundation of educational and moral principles, early leaders in education reform hoped to provide posterity with the knowledge necessary to bridge the divide between classes, integrate immigrants, and strengthen the United States on an individual and family basis.

The Progressive Education Movement
While the common school movement was powerful and shapes some behaviors of schools to this day,
other groups tried to define the public school experience differently. The progressive education movement, which arose toward the end of the 19th century, sought to transform education from a system focused on uniformity into one that provided children with the tools that they needed to participate in an evolving democratic society. These changes in education were necessitated by the shifting structure of daily life resulting from continued immigration, a developing economy, and evolving gender roles. Sometimes referred to as child-centered education, progressive education has its roots in European philosophy; supporters of this movement included such influential figures as John Dewey, who believed that children's innate needs, interests, and abilities should be cultivated through education in order to foster intellectual and moral advancement. Part of what children needed, progressive education advocates insisted, were nurturing experiences rather than rote memorization. The progressives' conception of how schools should work thus conflicted with the pedagogy of the common school movement. To this day, this dichotomy is the source of many of the conflicting opinions about how best to educate children. Many parents were wary and believed that reformers wanted to experiment on their children, resulting in resistance from families, especially those in the working and lower classes. Other parents became defensive and felt that reformers were challenging and undermining the quality of the education that they had received in their youth. Additionally, many families felt that progressive ideals regarding the need for public schooling challenged their values and beliefs. As a result of this conflict, teachers faced the need to balance parents' beliefs and desires with their own views of what children should receive through formal education.
In general, the progressive education movement was plagued by a somewhat idealistic impracticality, which was heightened by the fact that the main theorists behind its principles were not practicing teachers. However, progressive education contributed to the evolving form of schooling by providing an alternative to the read-and-recite strategies employed by the common schools.

The Great Society movement, initiated by President Lyndon Johnson, sought to improve the standard of living in the United States. President

John Dewey was an American philosopher and psychologist whose ideas were influential in education and social reform. A well-known public intellectual, he was also a major voice of progressive education and liberalism.

Johnson sought to fight poverty, reduce crime, beautify the country, and improve education. Like many before him, he saw education as a key part of the solution to many of the nation's greatest problems; thus, in addition to funding low-income housing and creating Medicare, he founded the Head Start program. Head Start represented a comprehensive approach to supporting the early childhood education of disadvantaged children. The program was designed to meet all the needs of preschool-aged children in a culturally sensitive manner so that they would be better prepared for, and more successful in, the rest of their education. The ultimate goal was to give children the tools needed to break the cycle of poverty.

The Elementary and Secondary Education Act
Additionally, Johnson's Elementary and Secondary Education Act provided funding that affected
public education on a national level for the first time. The legislation set up benchmarks to measure student achievement, and sought to hold schools accountable for making sure that students met them. By requiring an increasing number of students to meet the benchmarks, lawmakers hoped to close the achievement gap between high-achieving students, who were typically from white middle- or upper-class families, and struggling students, who were often from low-income or minority families. In addition to providing structure and expectations for student performance, the Elementary and Secondary Education Act provided funding for instructional materials, research into better instructional methods, and assistance for students requiring it. Moreover, for the first time, the government established regulated training, qualifications, and credentials for individuals serving as teachers in the elementary education setting. Ultimately, both the Head Start program and the Elementary and Secondary Education Act focused on improving the odds of educational success for children from low-income and minority families, while the Elementary and Secondary Education Act brought new standards to education that would affect future generations.

No Child Left Behind
In 2001, President George W. Bush reauthorized the Elementary and Secondary Education Act as the No Child Left Behind Act. While much of the act remained the same, new accountability requirements were added. Specifically, high-stakes standardized testing—used to determine adequate yearly progress (AYP)—was implemented to establish how well schools were meeting benchmarks, and schools were penalized if they failed to make advancement toward these benchmarks. As a result, there was new and sudden pressure for students to do well on a small group of standardized tests, and schools and teachers found themselves struggling to prepare students to test well.
Furthermore, if a school failed, it faced repercussions that could drastically cut its funding and further impact the overall structure of the institution in question, often to its detriment. Yet another effect of failing to make AYP goals was that parents would have the choice to use a voucher system to send their children to a better-performing school. In such a situation, each student’s family is
offered a voucher equivalent to the funding that the school received by having that child in attendance. The family may then choose to send their child to another school, which then gains the funding by collecting the voucher. This control handed a new capacity and responsibility to individual families to decide whether or not their child should remain at a given school. Moreover, it required parents to have a greater understanding of, and involvement in, their children's education to determine whether their academic needs were being fulfilled. Additionally, the No Child Left Behind Act encouraged schools to involve parents in the school community by providing literacy training for parents, working around parents' schedules to organize meetings, forming parent councils, and connecting with existing community organizations. Even if parents elected not to become an active part of the school community, the No Child Left Behind Act still guaranteed their right to know about the inner workings of the school, the services that their child would be eligible for or receiving, and their child's academic progress. While families had previously had some say in the workings of the school, it was more common to send students off to school and trust that they would come back more knowledgeable. Now, not only were parents guaranteed the right to be informed, they were also encouraged to be involved.

Conclusion
One problem throughout the history of elementary education in the United States as it relates to the family stems from the American dream. The idea that in the United States one can work toward a better life than one's parents had, and build an even better one for one's children, means that parents will be unfamiliar with some of the things that their children are encountering, and less able to support them in those areas. This can easily happen in schooling.
Parents who were unable to complete their education, or who were schooled differently, may not be knowledgeable enough in a given subject to help their children understand it in the manner in which it is being taught. Additionally, parents may feel confused by the school structure and community, or may be too busy to become involved. However, those who want to, especially young or low-income parents, can foster learning from a young age by reading to their children daily and engaging them in conversations that use varied, positive language. Some schools put a priority on
establishing relationships between teachers and families, and working around language barriers and busy schedules to maintain an open path of communication. Other schools offer programs to help parents increase their own literacy so that they may support their children's literacy at home. All of these kinds of support make it easier for families to be more involved in their child's education, and therefore increase the likelihood of academic success.

Stephen T. Schroth
Jason A. Helfer
Hannah B. Bloyd-Peshkin
Knox College

See Also: Brown v. Board of Education; Childhood in America; Education, High School; Education, Middle School; Education, Preschool; Kindergarten; Segregation.

Further Readings
Ravitch, D. Left Back: A Century of Battles Over School Reform. New York: Touchstone, 2000.
Ravitch, D. The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. New York: Basic Books, 2010.
Reese, W. J. America's Public Schools: From the Common School to "No Child Left Behind." Baltimore, MD: Johns Hopkins University Press, 2005.
Urban, W. J., and J. L. Wagoner, Jr. American Education: A History. 4th ed. New York: Routledge, 2009.

Education, High School

In the United States, almost all youth enter high school, and nearly 80 percent of the freshmen graduate when they are 17 or 18 years old. Of those who drop out, about half eventually return or earn the diploma's equivalent by examination. Recruiting and retaining so many students took more than a century to accomplish. How high schools handled growth and accommodated diversity is the central theme in their history.

Throughout the 19th century in the United States, most children had only six to eight years

of education acquired in free common schools. Few jobs required advanced coursework, and no laws prohibited child labor. Many families needed another paycheck or field hand, but money was not the only reason to avoid high school. The regimentation inside most classrooms could be onerous. Teachers expected students to sit quietly in desks that were bolted to the floor as their classmates took turns reciting what they had memorized from textbooks. Group work, class discussions, hands-on experiments, and projects outside school were rare—so were the extracurricular clubs and sports that later enlivened 20th-century high schools. Although few teachers hit or humiliated students, the classroom atmosphere was not relaxed, especially when conduct counted for one-third of each student's grade. High schools therefore grew slowly until the late 19th century. They emerged in the early 19th century, when hundreds of private "academies" already offered education beyond the basics of reading, writing, and arithmetic. Although most academies occasionally received state or local aid, they also charged tuition. The free public high schools, in contrast, depended entirely on taxpayers, and the public often preferred to support the common schools because so few youth planned to attend high school. The defense of the high school as meritocratic (admission on the basis of entrance exams) and practical (many graduates went directly to work) failed to convince the skeptics in the Democratic Party, who viewed it as unjustifiable intervention on behalf of the affluent. The Whig Party leaders usually praised the high school as the capstone of an educational pyramid that strengthened the minds and the morals of all socioeconomic classes. The upshot was a slow and contested growth—by 1890, fewer than 10 percent of 14- to 17-year-olds attended high school, with approximately one-third of those students enrolled in private schools.
Expansion and Diversity
The expansion of colleges in the late 19th and early 20th centuries convinced many ambitious youth to stay in school, but the rapid growth of high school enrollments from the 1890s through the 1950s had several other causes. None was more important than the economy. In good times, more families could support teenagers who wanted to study.
Prosperity also multiplied the number of white-collar office jobs open to high school graduates. On the other hand, the hard times of the 1930s forced many students to remain in school who in better days would have quit to take blue-collar jobs. In addition, the enactment of compulsory attendance laws corralled some otherwise absent youth. Peer pressure also kept some teenagers in school simply because their friends were there. High schools proliferated from the 1890s through the 1950s. Old schools expanded, and new schools opened. Except for a brief drop during World War II, enrollment and graduation rates rose in each decade, with the greatest gains made from 1910 to 1940. By midcentury, approximately 70 percent of American youth 14 to 17 years old attended high school, and at least three-quarters of those students graduated. The gains were uneven, however, with southern and black communities lagging behind the rest of the nation. Even so, the United States led the world in its commitment to the education of adolescents. No European country, for instance, encouraged so many teenagers to stay in school. The growth would not have been so swift if professional educators had not eagerly welcomed expansion. In their opinion, a good high school offered something useful and interesting for everyone. The 19th-century choice of two academic "tracks"—one for the college-bound, the other for everyone else—seemed too narrow, and the old custom of admission by examination seemed too restrictive. By the early 20th century, larger high schools provided more options. Vocational preparation for skilled and semiskilled trades was very popular, as were commercial courses geared toward clerical and secretarial jobs. Students unsure of their plans could select the general track, a medley of introductory and survey courses.
Beyond the classrooms, counselors, nurses, and social workers addressed a wide range of needs, and dozens of clubs and athletic teams enlarged the scope of the typical “comprehensive” high school. The responsiveness of the burgeoning schools honored only some of the customs cherished by parents. For immigrants arriving in the United States in unprecedented numbers in the late 19th and early 20th century, high schools were a mixed blessing. They held out the promise of the upward mobility that prompted so many to leave their homelands. Graduation could mean the difference
between white-collar and blue-collar work. Schooling was also a fast way to learn the norms and values of a strange new world. At the same time, many newcomers feared that their children would abandon their native languages and forsake old traditions. For example, many Italian immigrants worried about coeducation, extracurricular activities, dating, and academic coursework. To many, they seemed pointless or pernicious. Why should a teenager read Shakespeare, rather than work to help his impoverished family? Italian attendance and graduation rates thus lagged behind those of immigrants with less ambivalence about education, such as Russian Jews and Germans. Architectural styles reflected the priorities of early 20th-century schools. In the mid-to-late 19th century, the typical high school was a simple structure—four to six classrooms symmetrically arrayed on each of three floors, a small assembly hall, lavatories and utilities in the basement, one office, and no lunchroom, gym, or library. By 1910, those designs were replaced by imposing fortresses. The new buildings resembled courthouses and other civic monuments. Although lacking the bell towers and medieval quadrangles that dignified many college campuses, the large structures were nevertheless designed to impress and inspire. Spacious areas unimagined in the 1880s became standard—laboratories and shops, as well as larger rooms for the library, cafeteria, auditorium, and gymnasium. Rural schools lacked the scale of the urban citadels, and hundreds of them shared the same modest building that housed the elementary grades. Reformers assailed the small districts, calling for consolidation, even though many villages cherished their local high school as a special place where everyone shared the pleasures of sports, music, drama and other wholesome entertainment. 
It was also a safe place where parents felt assured that the community’s values would be passed on, and their way of life would be respected, not questioned, by teachers. By mid-century, however, many educators thought that a decent high school needed at least 400 students in order to offer advanced courses in math, science, and foreign languages. Yet on the eve of World War II, 75 percent of the nation’s high schools had fewer than 200 students. Bigger is better: that was the conventional wisdom among educators as enrollments continued to rise after the war. Population gains from the
baby boom of the late 1940s and 1950s, steadily declining dropout rates, and the gradual consolidation of small schools meant that large high schools were no longer found only in major cities. Suburban teenagers often attended schools with 1,000 or more classmates. However, bigger did not necessarily mean more diverse until two seismic changes in the 1960s and 1970s transformed most schools. The desegregation of thousands of school districts mandated by the 1954 Brown v. Board of Education Supreme Court decision changed the racial composition of many high schools by overturning the "separate but equal" injustice sanctioned by the 1896 Plessy v. Ferguson decision. Black and Hispanic students, who were previously isolated from whites, enrolled in integrated schools with more resources and higher standards. Students with disabilities also benefited from court orders, state regulations, and federal laws that required more services. Instead of isolating students with special needs, most high schools began to mainstream them into regular classrooms without sacrificing the extra help to which they were entitled. The battle against discrimination in the 1970s also helped other groups that had been disenfranchised, especially students for whom English was a second language, and girls yearning to play competitive sports.

Winners and Losers
In large and diverse high schools of the 1960s and 1970s, average students could easily be overlooked. Many educators assumed that the middle stratum could not, would not, or need not exert itself. Those of modest intelligence seemed to lack the innate ability necessary for serious academic and vocational work. The apathetic and ornery youngsters struggled just to show up and sit still. Future housewives and unskilled laborers allegedly had no reason to toil—why take French or physics when home economics and basic math would be more relevant?
Yet average students abounded, and they usually graduated, often finishing without marketable skills or college plans. In contrast, most of the top-tier, vocational, special-needs, athletic, and even truant students fared much better. They were rarely overlooked. Although those five groups seem worlds apart, they shared four advantages. Powerful advocates inside and outside the school lobbied for their

welfare—savvy parents, community activists, dutiful bureaucrats, and vigilant educators guarded the enclaves that served those students. Another commonality was selectivity. Everyone could go to high school, but only the chosen could enter what have been aptly called the “specialty shops” in the “shopping mall high school.” Restricted choice was a third parallel. To get into Yale, master carpentry, or overcome dyslexia, certain tasks had to be done, so there was less dabbling in unrelated electives. A fourth similarity was close personal attention from the faculty, most of whom cherished their affiliation with a specialty shop. Teacher–student ratios were often low, and teachers mentored their students with the commitment of a good coach. The variety in the shopping mall high school offered something for everyone—dozens of courses (often with multiple sections of varying difficulty), clubs, athletic teams, and social services. Usually, the school staff worked hard to provide those options, but did not lean on students or parents to choose wisely. Educators hesitated to tell youth what to do. Counselors had too many students (300 or so) to carefully advise each one, and state graduation requirements stipulated credit hours, rather than enumerating particular skills and knowledge. Moreover, court decisions in the late 1960s and early 1970s on dress codes, free speech, privacy, and due process compelled the high schools to make fewer demands of their students, who did not shed their constitutional rights at the front door, according to the landmark Tinker v. Des Moines Supreme Court decision (1969) allowing students to wear black armbands to protest the Vietnam War. The young acquired procedural and substantive rights once only held by adults. In many respects, high schools in the 1970s were less directive than in the past. Dissatisfaction with extensive variety and unfettered choice in the shopping mall high schools rose as test scores began to drop. 
The mean score on the Scholastic Aptitude Test peaked at 975 (1600 was perfect) in the mid-1960s, and then fell steadily to 890 in 1980. That test, which predicted students’ performance as college freshmen, imperfectly measured what was taught in high school, but many Americans trusted the numbers as reliable evidence of a decline in academic achievement. Other evidence looked just as bleak. The federal government’s National Assessment of
Educational Progress (NAEP) scores also dropped, with the slide in social studies greater than in math and science. In 1976, for instance, 47 percent of seniors did not know the number of U.S. senators that each state elects, and 58 percent thought that it was illegal to start a new political party. Just as troubling were the NAEP data on students' writing. When asked to do more than report or summarize information, only a minority of the students could write a decent analytical or persuasive essay.

Choice and Variety
More variety among schools, less variety within schools—that was the major redirection sought by educational policymakers after the mid-1980s. The nature of choice among and within American high schools changed significantly in the last 30 years. Until the 1980s, Americans took for granted that their children would attend the high school closest to where they lived. A few teenagers with distinctive interests went elsewhere—freestanding vocational-technical schools, for example—but nearly every teenager was assigned a school on the basis of geography. Private schools were always an option for those able to pay (although homeschooling was illegal in most states before the 1980s), and approximately one in eight adolescents attended them throughout the 20th century. The pioneers of school choice were magnet schools created to transform segregated school systems. Students applied to schools that featured a particular program. Their popularity paved the way for the charter school initiative begun in the 1990s. Charters are public schools freed from many state and local regulations to encourage innovation. They vary enormously in regard to their curriculum, teaching methods, and student achievement. Some charters are run for profit, a notion unheard of before the 1990s, and others belong to national consortia of like-minded schools, another relatively new development.
Vouchers to let parents use tax dollars to pay tuition at either public or private schools did not make as much headway, but the underlying spirit matched the other choices: marketplace competition belonged in the public sector. Why parents and students found these new choices appealing revealed one of the criticisms of the comprehensive high school. By the 1970s, many schools seemed dangerous. Year after year,
the Gallup Polls reported lower approval ratings of public education, and "lack of discipline" was ranked as the worst problem. Because most schools were orderly and safe, that phrase also expressed other anxieties, especially dismay at the sharp rise in illicit drug use and premarital sexual activity in the 1970s. Many parents fretted that their children were growing up too fast and making poor choices; high schools supposedly offered too many temptations and too much peer pressure to misbehave. In their opinion, the schools of choice offered a sanctuary from the perils of a hedonistic youth culture. They sought like-minded families who cared about responsible behavior and academic exertion. Private schools were also attractive—their small size, clear rules, dedicated staff, and lean curriculum held out the promise of individual attention, ethical standards, and rigorous college preparation. Some families simply fled to avoid integration, especially when court-ordered busing took their children to dangerous neighborhoods. The range of choices within the high school narrowed by the 1990s. What drove the change were the requirements that students had to meet in order to graduate. In the mid-1980s, most states raised the bar by stipulating additional academic credits, especially in math and science, and state universities also stiffened their admission requirements. Soon after, the states developed new tests to gauge how much the students had learned. Poor performance could trigger retention or remediation for the laggards, as well as sanctions for the entire school. Alongside the tests were curricular frameworks and standards that prioritized the skills and knowledge to be assessed. Educators pointed out that many European and east Asian countries had already adopted comparable alignment policies, often with very good results. The scale of the American high school did not escape notice in the last quarter century.
No one has tried to overturn diversity or undermine the equity gains of the past generation, but many thoughtful reformers argued that high schools, which averaged 1,100 students by the end of the century, were too large. Anonymity and poor performance seemed to go hand in hand. Students who dropped out often felt that no one cared about, or even knew them. The best private schools always insisted on personal attention; why not the same
attitude for public schools? Many specialty shops featured low student/teacher ratios; why not extend that blessing to everyone? The upshot was a flurry of interest in advisory periods (in which one teacher coached 8 to 16 students), ninth-grade transitional programs (i.e., five teachers focusing on 100 freshmen), educational plans for each student (leading to college or a career), and new high schools capped at 400 or 500 students. It soon became clear that small scale by itself did not yield significant gains in academic achievement; instructional methods and student engagement were still paramount. However, the keen interest in small schools marked the latest chapter in the long American quest to find the best ways to educate as many adolescents as possible.

Robert L. Hampel
University of Delaware

See Also: Adolescence; Education, College/University; Segregation.

Further Readings
Angus, D. L., and J. E. Mirel. The Failed Promise of the American High School, 1890–1995. New York: Teachers College Press, 1999.
Gyure, D. A. The Chicago Schoolhouse: High School Architecture and Educational Reform, 1856–2006. Chicago: Center for American Places, 2011.
Lassonde, S. Learning to Forget: Schooling and Family Life in New Haven's Working Class, 1870–1940. New Haven, CT: Yale University Press, 2005.
Powell, A. G., E. Farrar, and D. K. Cohen. The Shopping Mall High School: Winners and Losers in the Educational Marketplace. Boston: Houghton Mifflin, 1985.
Reese, W. J. The Origins of the American High School. New Haven, CT: Yale University Press, 1995.

Education, Middle School

Although education has shaped the American family in many ways, changing perceptions of adolescence have perhaps affected middle school students’
experiences more than those of students at any other level. As of 1900, many children between 10 and 14 years old were employed full time, whether on family farms, in factories, or elsewhere. As the cultural understanding of childhood grew and changed, children stayed in school longer. By the 1970s, the middle school movement had begun; it sought to provide children between the ages of 10 and 14 an experience that differed from that of children in elementary school or high school. Although implementation issues remain, middle schools now embrace the growing independence of adolescents while providing them with a safe place in which to develop academically and socially.

Background
As early as colonial times, education was available to many children, but mostly through informal means. In lieu of formal schools, education was provided by parents, clergy, and others who had little or no formal training in how best to instruct children. After Horace Mann advocated for what he termed the “common school,” education became more readily available, although certain groups, such as girls and African Americans, were often excluded from this experience. Mann’s common school movement was predicated upon the desire to tame the unruly, uneducated masses and transform them into citizens with the intellectual attainments and moral fiber to engage judiciously in their roles as productive members of a democracy.

After assuming his position as secretary of the Massachusetts Board of Education in 1837, Mann visited many schools in the state and was underwhelmed by what he saw. In response, Mann began to advocate for reforms that still affect schools today. Troubled by the classroom performance of many of the teachers he observed, Mann founded the first normal school system in Massachusetts in 1839.
Normal schools were intended to prepare high school graduates to become teachers by exposing them to a variety of instructional strategies and a better understanding of the curriculum that should be in place for students. The changes effected by a trained teaching staff were significant. Although references to a deity were still widespread in the common schools, the institutions were nonsectarian, which made them available to many who had previously been excluded because of their beliefs.



Corporal punishment was also largely curtailed, especially in terms of the severe beatings that had previously been meted out. Mann also advocated for better pay and working conditions for teachers, and worked to ensure that a free public education was available to all. He also founded the Common School Journal, which widely promulgated his views and opinions, instigating educational reform across much of the United States. Mann’s efforts sought to mold the nation’s children into disciplined citizens of the republic; as such, his movement sought to minimize the influence of children’s family backgrounds and to curtail parents’ roles as the shapers of their children’s opinions and morals.

By the end of the 19th century, a variety of reformers attempted to alter the practices of the common school, which they found restrictive and too teacher centered. The philosopher John Dewey advocated for schools that permitted a more interactive and social experience for children. Believing that children performed better in an environment where they were able to take part in their learning, Dewey advocated permitting them to experience and interact with the curriculum in ways that were unique to each child. This new style of learning became known as the progressive school movement. Although progressive education proved popular, others believed that the move away from a more teacher-centered curriculum was detrimental to children’s educational attainment. To this day, conflict exists regarding the best way to proceed.

Emergence of Junior High Schools
The number of elementary school students who continued on to high school grew exponentially after 1930. Traditionally, students had received all their education in a single building, sometimes a single room, that taught students no further than the eighth grade; these years were known as grammar school.
The typical elementary school classroom was led by one teacher, who taught students a variety of subjects such as English, mathematics, history, and science. A handful of students proceeded onward to high school, where students were taught by a variety of teachers, each a specialist in his or her subject. Business and civic leaders saw the need for increased numbers of high school graduates, and the economic downturn suffered during the Great Depression precluded many other options for adolescents. As a result, children continued on to high school after the completion of the eighth grade. This arrangement continued in most parts of the United States until the conclusion of World War II.

[Photo: Pupils at the Banneker Junior High School in Washington, D.C., in 1942. The concept of a junior high school began in Columbus, Ohio, in 1909, and became popular post–World War II.]

The first junior high school was founded in Columbus, Ohio, in 1909. Although there was no limitation on the grades that could be contained in a junior high school, most served students in the seventh, eighth, and ninth grades. Children enrolled were taught by experts in particular subjects, individuals who had much more knowledge of their discipline than most parents. This change diminished the role of the family in a child’s education, as the opinions of these experts were seen as more significant for high school preparation.

After World War II, the concept of the junior high school became increasingly popular. Educational reformers such as Charles Eliot, the president of Harvard University, saw junior high school as a bridge between the elementary and high school experiences. To that end, junior high schools were organized around academic departments such as English, mathematics, science, social studies,
music, and art. These departments operated independently of each other, in many ways as little high schools; families interested in their children’s education had to interact with not one but multiple teachers, which discouraged many from participating in the school. While this model was effective for some students, some critics believed that it defeated the purpose of providing a separate experience for young adolescents. As perceptions of child development continued to evolve, calls for an alternative model grew.

The Middle School Movement
What became known as the “middle school movement” began in the mid-1960s. This movement sought to rectify some of the problems with the junior high schools. The movement was influenced by the work of developmental psychologists such as Jean Piaget and Erik Erikson, whose theories suggested differences between children at various stages of development. Although Piaget did not precisely define these stages by age, he suggested that somewhere around the age of 11, children developed the ability to engage in formal operations (i.e., to understand abstract thought), whereas previously they had been able to engage only in concrete operations (i.e., logical thought that requires practical aids). Erikson, who also did not define a precise moment when the change occurred, suggested that school-age children (6 to 11) were mainly focused on building competence at certain tasks, whereas adolescents (12 to 18) were primarily engaged in exploring and defining their identities.

Proponents of the middle school movement believed that children in grades six through eight deserved a specialized learning environment to help them adapt to their intellectual stage of development. To circumvent the isolation that many children in the middle grades often experienced, the middle school movement favored a structure that featured teams of teachers from different disciplines working with the same group of children throughout the year.
For example, one teacher might be responsible for English/language arts and social studies, while another might take the lead on mathematics and science. In this way, teachers would be able to plan units of instruction for one area that complemented what was learned in other subjects. The increased time that teachers spent with the children would also
permit them to better address the struggles facing each child, and to recognize when a child was excelling. Adjusting the configuration of classes was also seen as increasing the participation of families, because parents would deal with fewer teachers and have the opportunity to develop a stronger rapport with educators who knew their children better. These advantages led many school districts across the United States to formally adopt the middle school model for their sixth- to eighth-grade students.

Although many school districts changed their junior high schools into middle schools, not all of these newly branded institutions changed their practices in ways that supporters of the middle school movement envisioned. Many middle school students continued to see six or more teachers per day, and attended classes with an ever-changing group of peers. Because increasing divorce rates were altering traditional family structures during this time, many children increasingly relied on their schools for a sense of stability and support. While the middle school model was ideal for providing such support, schools that adopted the model in name only were unable to meet some of these children’s needs. For the middle school model to be successful, it is imperative that administrators provide an appropriate level of professional development and other support to teachers attempting to fully implement the model in a way that best meets student needs.

Organization
In the typical middle school, students are assigned to a homeroom. The homeroom is intended to give children a chance to foster a sense of belonging, which is very important for students coming from the single-class/single-teacher format of most elementary schools. Teams of teachers work with the same group of students throughout the year, with each teacher responsible for one or two subjects. This permits increased use of interdisciplinary units.
Interdisciplinary units allow students to explore a general topic from the perspective of various academic disciplines. This arrangement is intended to foster a sense of community among the students, and to support the children’s social and emotional needs. Homeroom can be scheduled at various times throughout the day. Some schools have homeroom first thing in the morning, some after lunch, and some
at the end of the day. As in traditional junior high schools, homeroom is used to take care of a variety of administrative tasks, such as taking attendance, saying the Pledge of Allegiance, collecting lunch money, and distributing correspondence from the school to parents. Homeroom is also used to assist students in the registration process, allowing them to sign up for classes of interest. In schools that fully adopt the middle school model, homeroom is also used for discussions of topical issues, group activities, preparation for service learning, and other related activities. In such an arrangement, the homeroom teacher almost serves as a counselor, providing a variety of services intended to support the children’s cognitive, social, and emotional needs.

Middle school differs from elementary school in that students take a variety of electives in addition to common core classes. The core curriculum includes reading/language arts, mathematics, science, and social studies. Electives cover a variety of topics, including areas that were part of the core curriculum during elementary school, such as art, music, and physical education. Other electives might include technology, foreign languages, and home economics. Many middle schools include students’ families in the development of special school emphases or electives, because this increases the likelihood of the initiatives’ success.

The results of transitioning junior high schools to middle schools have been mixed, because despite good intentions, some schools have done little more than change the name of the school and the ages of their students. While these schools have technically become middle schools, they have not made the changes required to embrace the concept of the middle school model. Increased family participation is often encouraged when the transition to a middle school takes place, which creates an atmosphere that is more supportive for the children.
Parents and other family members can assist in classrooms; they can also help maintain the library, chaperone at dances, and maintain order at athletic events.

Stephen T. Schroth
Jason A. Helfer
Knox College

See Also: Adolescence; Education, Elementary; Emerging Adulthood; Parenting.


Further Readings
Furth, H. G. and H. Wachs. Thinking Goes to School: Piaget’s Theory in Practice. New York: Oxford University Press, 1975.
Ravitch, D. Left Back: A Century of Battles Over School Reform. New York: Touchstone, 2000.
Ravitch, D. The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. New York: Basic Books, 2010.
Reese, W. J. America’s Public Schools: From the Common School to “No Child Left Behind.” Baltimore, MD: Johns Hopkins University Press, 2005.
Urban, W. J. and J. L. Waggoner, Jr. American Education: A History, 4th ed. New York: Routledge, 2009.

Education, Postgrad

Given the competitiveness of the business world in the 21st century, it is not surprising that an increasing number of young adults (and adults in general) are pursuing a degree beyond the bachelor’s. This is evident in the growing number of graduate school applicants. According to the National Center for Education Statistics (NCES), graduate school enrollment has risen 78 percent since 1983. With more people than ever holding bachelor’s degrees and employment opportunities scarce, employers are able to recruit the most educated and highly qualified individuals for their limited positions. A graduate degree strengthens an individual’s academic credentials and provides him or her with a competitive advantage.

The options for post-baccalaureate degrees are numerous. They include a plethora of master’s degrees, specialist degrees, professional degree programs such as law, medicine, and dentistry, and doctoral degrees. Regardless of the chosen field, pursuing such an option is associated with several meaningful benefits, such as becoming an expert in one’s area of specialization, increased career options, and greater potential for earning a high income.

History of Graduate School in the United States
Graduate school structure and attendance have experienced considerable change since the latter
part of the 19th century, when Johns Hopkins University became the first institution in the United States with a dedicated research center. The trend continued when Clark University was established as the first U.S. institution to provide only a graduate program. In response to a campaign by academics and practitioners alike for more rigorous scientific training and increased specialization, informal apprenticeships gave way to more intensive, formal, school-based training. Once-independent professional schools and training centers increasingly became part of universities, which sought to take full advantage of the benefits associated with housing research centers. The primary focus of these centers was to produce groundbreaking research. As state oversight and approval of university credentials became more stringent, professional schools required that students earn a bachelor’s degree prior to enrollment.

Whether because of a more competitive economic market, or perhaps as a result of an increase in the demand for more qualified employees, graduate school attendance has steadily increased since its inception, and in many fields has outpaced the growth of bachelor’s degrees. Once afforded mainly to the most affluent individuals, primarily upper-class white males, as a way to enhance their social capital, the opportunity to attend graduate school has changed dramatically over past decades as new waves of students began to earn bachelor’s degrees, with most of the increase in enrollment attributed to women and minorities.

Decision to Attend Graduate School
Individuals who experienced academic success during their undergraduate years are more likely than others to pursue a graduate degree. Nevertheless, depending on an individual’s stage of life, the decision to attend graduate school may be relatively easy, or it may be difficult.
Some students may perceive it as an inevitable next step because they have always wanted to become a doctor or a lawyer, for instance. For others, the decision may be rooted in family commitments and wanting to secure a better future for themselves and their loved ones. Some students may be encouraged by their undergraduate mentors to explore a particular field, whereas others may choose graduate school as a way to improve or further enhance a particular skill or talent. For a few others, graduate
school may provide an opportunity to avoid entering the real world. Approximately one-third of those with a bachelor’s degree go on to obtain an advanced degree of some sort. These individuals are likely motivated by a combination of factors, but they ultimately believe that the venture will improve their overall well-being in the long run.

Benefits Associated With Attending Graduate School
Attending graduate school has short- and long-term benefits. During their graduate studies, individuals may benefit from the enjoyment of continued learning, as well as the opportunities and prestige afforded to them as a result of working closely with advisors and experts in order to become experts themselves. Such opportunities may have been rare or nonexistent in their undergraduate years. Furthermore, when their graduate studies are completed, individuals may experience the immediate benefits of enhanced social status and further upward social mobility.

In addition to the intangible rewards associated with earning a graduate degree, these individuals are likely to earn a higher income than peers who have not received an advanced degree. In almost all instances, more education is associated with substantially more pay. According to the NCES, the median income for adults who held a bachelor’s degree in 2009 was $62,000, whereas those who held a doctorate earned a median income well over $100,000. The lifetime earnings of those with a bachelor’s degree are approximated at $2.3 million, whereas typical lifetime earnings are approximated at $2.7 million ($66,800 a year) for those with a master’s degree, and well over $3.3 million ($81,300 a year) on average for individuals with doctorate and professional degrees. However, there are significant variations in these numbers based on age, gender, race/ethnicity, and one’s chosen occupation.
Challenges Associated With Attending Graduate School
The challenges associated with graduate school can be both haphazard and demanding, leading some individuals to question whether the benefits of attending do in fact outweigh the costs. Some reports find that only about half of those who enter a graduate degree program will complete it, which suggests that for some, attending graduate
school is a less-than-gratifying experience. The number-one challenge cited by graduate students is time management, followed by curriculum issues, navigating the thesis/dissertation process, managing finances, and learning to maintain a sufficient work–life balance. Learning how to juggle school-related demands while tending to other personal matters, family obligations, and possibly a job can become quite overwhelming for many individuals. The longer that graduate students are exposed to these numerous strains, the more likely their school performance will deteriorate. Attrition rates for certain groups of people are even higher on average, suggesting that the challenges are further compounded for certain individuals. Women and minorities are less likely to receive graduate degrees than men and whites, respectively; they maintain lower rates of enrollment, and are less likely to persist toward degree completion.

Effects of Graduate School Attendance on Family Dynamics
The challenges associated with attending graduate school can be particularly difficult for families. For the individual who enrolls, adjusting to the academic environment can be strenuous in itself, but may be further compounded by family responsibilities such as those most often associated with being married or having dependent children. Students who are married and have children are far less likely than their single counterparts to enroll in a graduate program. Although married individuals are less likely to enroll than those who are single, this is most evident among married women, who are less likely than their spouses to pursue a graduate degree. Generally speaking, married individuals attending graduate school often note the sacrifices that their families make, such as having to change their roles in order to accommodate the individual’s academic venture.
Families often struggle with a reduction in finances, as some individuals leave full-time employment to pursue their graduate degrees, and sacrifice spending quality time with their families because much of their leisure time is dedicated to their schooling. Despite the challenges, some families say that the experience of a family member attending graduate school is positive. For minority families in particular, attending graduate school is a source of pride for the entire family, and often brings them closer together
as they encourage and support the student. Some families view the sacrifices associated with attending graduate school as small when compared to the possible advancement and opportunities that a graduate degree will afford the family.

Gender Differences Among Those Obtaining Graduate Degrees
The percentage of graduate degrees awarded to women increased from 37 percent in 1991 to 46 percent in 2011, according to the National Science Foundation. Though the gap is narrowing, women continue to earn fewer graduate degrees than men, and are also likely to take longer to complete their degrees. The steady growth in female graduate degree holders is attributed to an increase in the number of science- and engineering-related degrees earned by women. Although these fields were dominated by males in previous generations, the percentage of women receiving degrees in science and engineering increased from 30 percent in 1991 to 42 percent in 2011. The growth in graduate degrees for men, however, is attributed only to science and engineering fields, while rates in fields outside science and engineering have mostly fallen for both men and women since 1991. Furthermore, women are more likely to pursue master’s degrees, whereas men are more likely to enroll in doctorate programs. With the exception of the physical sciences and engineering, women earn the majority of graduate degrees in every major field of study.

Although men and women tend to experience similar challenges in graduate school, more often than not, women are confronted with the added challenge of balancing school, work, and family obligations. Paradoxically, society often encourages women to pursue their education while also encouraging them to start a family. Women in graduate school are likely to be in their optimal childbearing years, but delaying childbearing to finish their studies may put them at risk for fertility issues associated with increased age.
Racial/Ethnic Variations Among Those Attending Graduate School
Whites continue to earn considerably more graduate degrees than any other racial or ethnic group. Nevertheless, according to reports from the NCES, the percentage of minority graduate students earning degrees has grown considerably over the past
few decades, from 11 to 25 percent. Much of this growth is attributed to an increase in the number of African Americans and Hispanics who are obtaining graduate degrees. In 2009 and 2010, there was a 3 percent increase in master’s degrees conferred to African Americans, raising the total to 12 percent, while doctorate degrees among African Americans increased by 47 percent to 6.1 percent of the total in 2011. Hispanics earned 7 percent of all master’s degrees conferred in 2009 and 2010, while the number of doctorate degrees awarded to them increased by 60 percent to 6.3 percent of the total in 2011.

In graduate school, African Americans tend to be highly concentrated in education-related majors. Hispanics represented the largest minority group to earn humanities and social sciences degrees, whereas Asian Americans were the most likely of all minority groups to earn a graduate degree in the life sciences, physical sciences, and engineering fields.

Several key factors influence minority students’ choice of programs when deciding on graduate schools. Among these are the program’s proximity to their home, the potential to work with certain faculty, and financial considerations. Financial resources among minorities vary. For example, Asian Americans are the least likely of all minority groups to rely on loans to finance their graduate school education; they are also more likely to report receiving financial support from their parents. By contrast, African Americans and Hispanics rely more heavily on loans, with African Americans, on average, the heavier borrowers.

Being Successful in Graduate School
The experience of graduate school is unique for each individual and requires commitment and a great deal of flexibility. Both those who complete their programs and those who do not speak to the importance of having a strong support system in order to be successful.
Graduate students of all backgrounds, male and female, benefit from supportive relationships with their advisors, committee members, other faculty members, other students, family, and friends. These relationships are important in helping graduate students balance the demands of academia while maintaining a healthy and active social life.

Mellissa S. Gordon
University of Delaware

See Also: Education, College/University; Education/Play Balance; Student Loans/College Aid.

Further Readings
Carnevale, A. P., S. J. Rose, and B. Cheah. “The College Payoff: Education, Occupations, and Lifetime Earnings.” Georgetown University Center on Education and the Workforce. http://cew.georgetown.edu/collegepayoff (Accessed January 2014).
Gardner, S. “Fitting the Mold of Graduate School: A Qualitative Study of Socialization in Doctoral Education.” Innovative Higher Education, v.33 (2008).
Goldin, C. and L. F. Katz. “The Shaping of Higher Education: The Formative Years in the United States, 1890 to 1940.” Journal of Economic Perspectives, v.13 (1999).
Malcom, L. E. and A. C. Dowd. “The Impact of Undergraduate Debt on the Graduate School Enrollment of STEM Baccalaureates.” Review of Higher Education, v.35 (2011).
National Center for Education Statistics. “Status and Trends in the Education of Racial and Ethnic Minorities.” http://nces.ed.gov/pubs2007/minoritytrends/ind_6_23.asp (Accessed January 2014).
Nevill, S. C. and X. Chen. “The Path Through Graduate School: A Longitudinal Examination 10 Years After Bachelor’s Degree (NCES 2007-162).” U.S. Department of Education. http://nces.ed.gov/pubs2007/2007162.pdf (Accessed February 2014).
Perna, L. W. “Understanding the Decision to Enroll in Graduate School: Sex and Racial/Ethnic Group Differences.” Journal of Higher Education, v.75 (2004).
Portes, A. “Social Capital: Its Origins and Applications in Modern Sociology.” Annual Review of Sociology, v.24 (1998).

Education, Preschool

The goal of preschool education in the United States is to provide young children with access to stimulating environments and experiences in which they can thrive and learn before they begin their formal education. Throughout history, children have been cared for by parents, relatives, and neighbors. Organized community programs
are a more recent development, dating to the 19th century. As common as preschool is today, the United States does not currently have any universal preschool education program for children under the age of 5.

Family life has changed considerably over the past century. Many more families than in earlier generations have two working parents, and many children live in single-parent households. This has increased the number of children who require care from someone other than a parent. Providing young children with access to high-quality, developmentally appropriate early childhood education is an issue for many families in the 21st century.

Defining Preschool Education
The term preschool generally refers to an early childhood education for 3- to 5-year-old children that precedes kindergarten. Children usually attend a preschool program part-time, either for a few hours a day or for a few days a week, generally between September and May. Preschool programs may be part of a larger program that provides more extensive child care services for working families. Early childhood centers also differ from preschool programs in that they are typically staffed with well-trained, experienced teachers who provide developmentally appropriate activities. Preschool programs are sometimes referred to as nursery schools and may be housed in religious-based institutions or public schools. Preschool programs also exist as parent cooperatives, as well as university, private, or government-operated programs.

The distinctions between preschool education, nursery school, child care, and early childhood education are becoming blurred. Often, the terms are used interchangeably, even though each type of program provides varying experiences for children, and is staffed by adults with different levels of training. Preschool education is currently seen as a supplement to, rather than a replacement for, family child rearing.
Parents enroll children in preschool programs to help them get an early start on academic and social learning, as well as to improve readiness for formal schooling. However, as more and more children in the United States are placed in group care at younger ages and for longer periods of time, it becomes increasingly difficult for their families to choose and/or afford quality care and programs.

History of Preschool Education
In colonial America, some families were uncertain about their ability to manage the education of their young children. This uncertainty grew from the emerging philosophical understanding that young children should be treated as children, not simply as little adults. For the first time, childhood became a distinct and separate stage in human development. Preschool education was introduced in the United States at the beginning of the 19th century with the thought that education for young children should differ from what was presented in grammar school.

Traditionally, young children had been cared for at home as part of the family’s daily routine. However, philosophers and educators of the time believed that many families were unable to provide the type of socialization that young children needed and to adequately prepare them to learn to read. Reformers opened schools for children, especially aimed at those living in poverty, and families began to believe that their children would benefit from the educational experiences offered by such programs. Emphasis was placed on the young child’s growth within the family and community. Over time, changes in family life altered the roles of men, women, and children. Families became less economically self-sufficient; women became consumers, rather than producers. All these changes resulted in a loss of home-based education.

Nursery schools began to flourish in the 1920s, when many young children participated for the first time in formalized experiences outside the home and family. These early nursery schools were often established for purposes of educational experimentation and research, rather than to relieve working mothers or assist in socializing neglected children. By the middle of the 20th century, about 10 percent of 3- and 4-year-old children were enrolled in a program outside of the home. In 2014, about 50 percent of 3-year-olds and 75 percent of 4-year-olds attended a preschool program.
This increasing trend has triggered further development of private preschool programs, child care centers, preschool special education, and state-funded programs. Examples of specialized curricula, philosophies, and programs include Montessori, Waldorf, HighReach Learning, HighScope, The Creative Curriculum, Reggio Emilia, and Bank Street, as well as the federal Head Start program for at-risk children and families living in poverty.



Educational Practices of Preschool

Experts agree that the years from birth to age 5 are the most critical for a child’s brain growth and overall development. Earlier in U.S. history, philosophers, theorists, and educators began to delineate childhood as a unique stage of development, a break from previous eras in which children were treated as little adults. Ever since this shift, debates have raged over the best way to educate the very young. Most agree that the key elements of a quality preschool program include a clean, safe, and cheerful environment, with plenty of everyday objects to manipulate, books, art materials, writing materials, play equipment, indoor and outdoor play space, and child-sized furniture.

One of the earliest goals of preschool education was to transmit important cultural values, which provided for children’s social development and guided them toward becoming responsible citizens. Play has become the primary vehicle for fostering the development of young children. Through play, children learn about cooperation and social harmony (e.g., building relationships, getting along with adults and other children, and sharing); gain a sense of self (e.g., exploring, building confidence, decision making, problem solving, and controlling behavior); increase mathematical awareness (e.g., number sense, counting, and puzzles); communicate with others (e.g., talking and listening); enhance literacy skills (e.g., learning the alphabet and early reading); refine self-help skills (e.g., buttoning, zipping, tying, and pouring); and develop creatively (e.g., learning colors and shapes). The play experiences for young children in quality preschool programs are not random, but are carefully planned by teachers to provide for children’s overall development.
Quality preschool programs embrace curricula that recognize that young children are active learners, meaning that they become deeply involved in their learning when provided with a stimulating environment. Preschool pedagogy that uses a developmentally appropriate curriculum will meet this active learning need. Preschool curriculum should also balance child-initiated play and exploration with teacher-led small- and large-group activities. With child-initiated play, children have choices that usually involve themes from daily life (e.g., the post office or grocery store) and activities for the development of the whole child (e.g., physical, cognitive, social, and emotional development). Such play

usually incorporates learning centers, which might include a reading and writing area, a creative arts area, a block building area, and a dramatic play area. Children also learn through the daily routines found in the preschool program (e.g., arrival, departure, snacks and meals, hand washing, restroom procedures, clean-up, and transitions). Quality preschool programs will also have a balance of quiet and active learning time.

The National Association for the Education of Young Children (NAEYC) has established standards and accreditation for preschool programs. The standards review multiple components of the preschool program, including curriculum, teacher qualifications, class size, and health and safety. In 2014, about 8 percent of U.S. preschools were accredited. Preschool programs are also licensed by their individual states, based on standards established by each state.

Teacher Training and Professional Development

Training programs for teachers working with young children were first added to colleges of education in the 1880s. Despite this long history of teacher preparation in the higher education system, child care providers continue to be among the lowest-paid workers in the United States. Overall, U.S. society remains unwilling to take the profession of child care specialist seriously or to place high value on those who provide quality child care. Ideally, teachers working with young children will have a college degree specializing in early childhood training. Research has found that teachers with such training engage children more positively and fully and provide richer and wider experiences in all areas of development. Trained teachers are better able to recognize the particular needs of individual children and to adapt the curriculum to meet those needs.

Current Issues

Preschool education in the United States is an ever-evolving system. Current policy debates include the need for universal preschool education for all young children.
Experts believe in the ongoing need to invest in the health and cognitive, social, and emotional development of the nation’s children. Many families find it difficult to educate their young children due to poverty and social considerations, such as single-parent households. Research has




established that preschool education is beneficial to children and is cost-effective for society. By investing in young children, society may prevent later social problems such as incarceration, poverty, crime, and teenage parenting. Children who have participated in a high-quality early childhood program are more prepared for formal education, and experience more success, than children who did not participate in such a program. The long-term improvements include higher test scores, lower rates of grade repetition, and overall higher educational attainment.

Parents of all social and economic positions increasingly require extended care for their young children. A new approach to meeting this need, called “educare,” combines extended daily child care with educational programs that move beyond custodial care. This evolution will also require that the care of young children be recognized as a profession. Caring for young children is both a science and an art.

Maria K. Schmidt
Indiana University Bloomington

See Also: Child Care; Childhood in America; Day Care; Education/Play Balance; Head Start; It Takes a Village Proverb; Kindergarten; Montessori.

Further Readings

Barnett, W. S. Preschool Education and Its Lasting Effects: Research and Policy Implications. New Brunswick, NJ: National Institute for Early Education Research, 2008.

Beatty, B. Preschool Education in America: The Culture of Young Children From the Colonial Era to the Present. New Haven, CT: Yale University Press, 1995.

Pluess, Michael and Jay Belsky. “Differential Susceptibility to Rearing Experience: The Case of Childcare.” Journal of Child Psychology and Psychiatry, v.50/4 (2009).

Education/Play Balance

There is a longstanding theoretical and research tradition regarding the importance of play in early childhood education. As a primary means of learning, play fosters a child’s physical, social, emotional,


and intellectual development. From birth through age 8, millions of neural pathways are created in the brain. These pathways support the healthy development of children, and are created and strengthened by exposing a child to numerous nurturing environments, people, and stimuli. In the United States, time for play has been reduced or eliminated in preschool and kindergarten classrooms. Since the passage of the No Child Left Behind Act (NCLB) in 2001, teachers have replaced differentiated learning through play with direct instruction in literacy and math designed to prepare students for tests. Despite such trends, research demonstrates the importance of systematically integrating play into early childhood curriculum and pedagogy.

History of Play

During the 17th and 18th centuries, numerous philosophers wrote about the importance of play in early childhood education. Moravian educator John Amos Comenius held that children learned best through practical social experience. An early advocate of universal education, he pushed for school and family environments that valued child-centered interests and play. English philosopher John Locke argued that the mind was a blank slate, or tabula rasa, and that because children learned through sensory experiences, education should be designed to meet individual student needs. Swiss philosopher Jean-Jacques Rousseau also advocated for child-centered education that incorporated tactile learning experiences. English political philosopher and writer Mary Wollstonecraft extended Locke’s and Rousseau’s male-centric educational theories to girls. Arguing that girls were rational beings who should be educated, rather than domesticated, she was an early advocate of coeducation. Late-18th-century educational reformer Johann Heinrich Pestalozzi held that considering student interests and increasing student engagement through conversation supported social and emotional development.
A student of Pestalozzi, Friedrich Froebel stressed the importance of childhood play, self-expression, and dramatization. The founder of the concept of kindergarten, Froebel advocated for play as a teaching strategy that required learning tools and materials to enhance childhood development. Throughout the 20th century, educational research furthered the concept of play within public school curriculum and pedagogy. In The Child



and the Curriculum (1902), social reformer John Dewey discussed the need for hands-on learning balanced with rigorous content, and advocated for pedagogy that shifted the teacher’s role from that of lecturer to facilitator. Physician and feminist educator Maria Montessori advocated for enriched learning environments that utilized hands-on, active, and independent student learning. Having successfully founded the Casa dei Bambini, or Children’s House, in the low-income San Lorenzo district of Rome, Montessori saw her school achieve global acclaim by 1915. The psychological research of Lev Vygotsky discussed how imagination and social rules factored into children’s play. He also introduced the Zone of Proximal Development (ZPD), or the range of emergent childhood concepts and skills that develop through peer interaction and adult scaffolding. Known for his stage theory of cognitive development, psychologist Jean Piaget held that learning occurs as children construct knowledge through play, exploration, and discovery in different environments. According to Piaget, children could overcome egocentrism, learn to accommodate, and understand symbols through play. Finally, educational psychologist Susan Sutherland Isaacs furthered the idea that unstructured, child-centered play was the primary vehicle for self-development. A critic of Piaget and an advocate of the nursery school movement, Isaacs believed that through play, children not only engaged in self-expression, but also learned to develop relationships.

Benefits of Play

Research supports the value of play in furthering social, physical, emotional, and intellectual development in children. When balanced with rigorous instruction, play extends and expands language ability while providing opportunities for emergent literacy. When children engage in play, it fuels their curiosity and creativity while developing their symbolic thinking and self-regulation.
Play is beneficial for improving concentration, personal awareness, motor skills, and self-confidence. Thus, it fosters independence, allowing children to engage in social and independent problem solving. Educators, counselors, parents, and administrators can learn about children and their learning processes by observing them play. However, whether a child is engaging in play at school or home, several factors are important for fostering learning. Educators, childcare providers, and parents must strike

a balance between an environment that is safe and nurturing and one that stimulates creativity with age-appropriate resources. Through meetings, workshops, and classroom visits, educators can help parents understand the importance of play in early childhood education, and how to facilitate it at home.

Socioeconomic Gaps in Play

Research demonstrates that regardless of gender, race, or ethnicity, children from low socioeconomic backgrounds are likely to enter school with social, cognitive, and literacy delays. The United States has a long history of progressive reformers, settlement house staff, and social workers devoted to meeting the educational needs of economically disadvantaged children. In 1889, for example, social workers and feminist activists Jane Addams and Ellen Gates Starr founded Hull House in Chicago, which provided services—including kindergarten classes—to poor immigrant women and children. In 1922, Abigail Eliot established the Ruggles Street Nursery School and Training Center in Roxbury, Massachusetts, which provided a full-time day care program for working families. Since 1965, federal programs like Head Start have served low-income families and children. Built on traditional progressive aims of alleviating poverty and improving literacy, Head Start promotes cognitive, emotional, and social development through play-rich environments. However, research demonstrates that localized and federal programs are not enough to reduce the learning gaps created between play-deficit and play-rich classroom environments. Because play positively affects cognitive function, as well as children’s appreciation of peer differences in race, gender, and sex, developmental psychologists, counselors, administrators, and teachers should continue to support the presence of play in early childhood education.
Education and Play Balance Today

Current accountability and funding pressures impinge upon the ability of educators to strike a balance between following developmentally appropriate practices and meeting standards-driven assessments. Such assessments include federal NCLB requirements and state implementation of the Common Core and Prekindergarten Common Core Standards. This reality is compounded by research showing that children’s play time at home is reduced by increased




hours spent on television, video games, computers, and hand-held devices. This is problematic because brain research indicates that children need interactive, hands-on experiences that assist the development of metacognition. Prekindergarten and kindergarten are a child’s first experiences with formalized school. These experiences lay the foundation for future learning and attitudes toward education. Play is a powerful teaching and learning tool that can increase student motivation. Moreover, play allows children opportunities to interact with peers and strengthen critical-thinking skills in all academic areas. Although the current national focus on academic skills prompts an emphasis on state standards, scripted curriculum, direct instruction, and less time for play, educational theory, research, and policy provide support for continued child- and play-centered instruction.

Melinda A. Lemke
University of Texas at Austin
Jessica M. Lemke
Niagara University

See Also: Child Care; Childhood in America; Child-Rearing Experts; Child-Rearing Practices; Day Care; Education, Preschool; Games and Play; Kindergarten; Parents as Teachers.

Further Readings

Allen, J. and C. E. Catron. Early Childhood Curriculum: A Creative Play Model, 4th ed. Upper Saddle River, NJ: Pearson Education, 2008.

Diffily, D. and M. B. Puckett. Teaching Young Children: An Introduction to the Early Childhood Profession, 2nd ed. Clifton Park, NY: Delmar Learning, 2004.

Miller, E. and J. Almon. Crisis in the Kindergarten: Why Children Need to Play in School. Alliance for Childhood (2009). http://www.allianceforchildhood.org/sites/allianceforchildhood.org/files/file/kindergarten_report.pdf (Accessed January 2014).

Egalitarian Marriages

Marriage is a cornerstone of most societies. In the United States, married couples are usually expected to live together and begin a family while fulfilling


each other’s needs. Although the structure of marriage has remained relatively consistent over time, defined (with some recent exceptions) as one woman and one man in a committed relationship, the way that marriages function has varied throughout American history.

Defining Egalitarian Marriage

Egalitarianism embraces the assumption that all people are inherently equal in worth, and thus should be equal in status as well. When applied to marriage, egalitarianism invokes the expectation that both spouses share mutual importance, respect, power, and status in the relationship. However, there is no single agreed-upon definition of, or method to measure, egalitarian marriage. While some identify egalitarian marriage with a philosophy, ideology, or set of intentions, others compare the division of roles and responsibilities through measurable means, such as how many hours each spouse spends on housework, to determine whether or not a marriage is egalitarian. Terms used interchangeably with egalitarian marriage include peer marriage, postgender marriage, equitable marriage, and equally shared parenting. Variables increasing the likelihood that couples will have egalitarian marriages include having a college education, being older than average at first marriage, being remarried, being nonreligious, and living in a community in which egalitarian marriages are culturally accepted.

The identities of the wife and husband within a marriage are intimately intertwined with gender role expectations that impact the dynamics of the relationship. Expectations for girls, women, and wives are culturally constructed and taught through socialization. The traditional feminine wife’s role is to complement her husband and take care of the physical and emotional needs of the family. Conversely, the traditional masculine husband’s role is to protect and provide for his family; boys and men are likewise taught this through socialization.
Inherent in these identities are assumptions that each gender is naturally more interested in, or capable of, particular skills and tasks. This binary, gender-reliant thinking illustrates the opposite of egalitarian marriage.

History

In the United States, cultural norms and laws have favored marriages based on male superiority and dominance. Until the late 1700s, marriage decisions



were largely economically and politically based. For several hundred years, wives were legally the property of their husbands, and it was assumed that women were less intelligent, less capable, and inferior to men. Husbands assumed public paid work roles, whereas unpaid production in the home, in the form of cooking, cleaning, sewing, and child rearing, was expected of wives. The cult of true womanhood emerged as an ideal, emphasizing women’s purity and need for protection from the harsh outside world, which was provided by their husbands. The differing status and access to power for women and men in marriage persisted through the 1800s and the 1900s. A shift in marital expectations occurred for some in the late 19th century, when many families displaced from the country’s formerly agrarian rural economy found that their economic survival depended on women working outside the home, in addition to men. Women earning the right to vote in 1920 shifted public perception about equality between the sexes; however, unequal marital arrangements and distribution of power persisted. During the 1950s and 1960s “separate spheres” were idealized for women and men, with authoritative protective provider husbands and docile feminine caretaking wives. The women’s movement of the 1960s and 1970s invoked public discussions of marriages based on equality, democratic ideals, shared responsibilities, and minimized power differentials between husbands and wives. The women’s movement significantly changed barriers and expectations for women. The percentage of women in paid work significantly increased over the next several decades, as women graduated from college in greater numbers and invested in their careers. As women’s salaries increased, so did their status and decision-making power within the family—to a certain extent. No matter how much power and responsibility women had at work, they still came home and performed the lion’s share of the housework and child rearing. 
From the 1970s to the 21st century, this imbalance among couples has persisted. Women are more likely than men to adjust their paid work schedules to accommodate families’ needs. Women complete two-thirds of the housework and caretaking tasks. However, the amount of time that men spend with their children has doubled in the past decade. Men are still more likely to be the primary breadwinners

in married families, but in most married couples, both spouses work for pay. The majority of married couples say that they want equal partnerships, and the number of couples who identify as having egalitarian relationships has increased. However, few couples have achieved egalitarian marriages when measured by variables such as time spent on paid work, caretaking, and housework.

Identifying and Measuring Egalitarian Couples

One of the primary reasons that couples identify as having egalitarian relationships when investments are not truly equitably shared is that wives and husbands often measure their performance against others of the same sex, rather than against one another. Couples who work toward a shared division of labor are likely to be more equal relative to other couples. Couples’ ideology and intentions are important factors in identifying egalitarian couples. People who feel valued and respected by their spouses, and those who perceive equal power and status in their marriages, may identify as having egalitarian marriages, even if their behaviors do not translate into equitable investments.

Variables typically included in measuring equity among couples are the amount of time spent on paid work, the importance placed on each spouse’s paid job, amount of income, amount of time spent on child care or other care work, amount of time spent on delineated household tasks, access to financial control, amount of sexual initiation, amount of leisure time, and participation in decision making. Most people who aspire to have egalitarian marriages perceive themselves as committed to both their careers and their families. Couples may adjust expectations or schedules, compromise on wants and needs, or rely upon assistance from family or paid resources in an effort to achieve equality. Egalitarian couples must regularly communicate and compromise. Work and personal lives change over time; thus, egalitarian couples must consistently adjust.
Couples with equal relationships consciously resist gender schema that dictate marital roles based on sex. Responsibilities for these couples are allocated based on interests, abilities, and equity. The husband may prefer to cook, whereas the wife may enjoy mowing the lawn.





Future Considerations

As generations become more accepting of flexible gender roles, as men and women move into more diverse career fields, as more women move into leadership roles, and as more men expand their caretaking and household responsibilities, egalitarian marriages will likely become more prominent. As the United States embraces the constitutional right of same-sex couples to marry, these relationships will influence the prominence of egalitarian marriages. Same-sex couples are more likely to embrace equality in their relationships than are heterosexual couples; thus, the percentage of marriages identified as egalitarian is likely to increase.

Marta S. McClintock-Comeaux
Amber Preston
California University of Pennsylvania

See Also: Breadwinner-Homemaker Families; Companionate Marriages; Feminist Theory; Gender Roles; Marital Division of Labor.

Further Readings

Bernard, J. The Future of Marriage. New York: World Publishing Company, 1972.

Coontz, S. Marriage, A History: From Obedience to Intimacy or How Love Conquered Marriage. New York: Penguin, 2005.

Risman, B. J. and D. Johnson-Sumerford. “Doing It Fairly: A Study of Postgender Marriages.” Journal of Marriage and Family, v.60 (1998).

Elder Abuse

People in the United States are living longer, on average, than at any time in the country’s history. Medical technology, greater knowledge of nutrition, and healthier lifestyles contribute to a longer life span. The population of people 65 years and older is expected to increase 21 percent by 2040. However, resources such as formal services geared specifically toward the elderly have not increased, or are increasing at a slower pace than needed. Increased demands in the informal sector

Research shows that as many as two million elders are abused in the United States. While abuse statistics are not as high as other problems facing the elderly, they represent millions of elders who require safer environments.

may mean more responsibilities for families who are already overburdened. Adding the care of an elderly person to an already stressed environment may lead to an increase in elder abuse.

Definitions

Researchers, policymakers, and practitioners have been unable to agree on a universal definition of elder abuse. While there is no consensus on a legal definition because of differences in state laws, there is agreement that abuse falls into four major categories: physical, psychological, financial, and neglect. These categories are not uniformly defined, and research studies describe them with varying inclusion and exclusion criteria. Furthermore, researchers and policymakers may define subcategories differently. For example, withholding care can be classified as physical abuse, active neglect, or psychological neglect. Another viewpoint emphasizes that elder abuse needs to be explored in terms of other factors,



including intentionality, necessity, and intensity. Elder abuse can be difficult to identify for various reasons, depending on the context in which it occurred; for example, because of the high propensity for accidental falls in older adults, some cases of abuse can be misdiagnosed as accidental. Likewise, the need to restrain Alzheimer’s patients to prevent self-harm or to prevent them from wandering away from home can look like physical abuse. Furthermore, definitions have been established by researchers, social service professionals, and the criminal justice system. Conversely, the elderly and their families may not necessarily agree with professionals as to what constitutes elder abuse. For instance, contrary to the abuse literature, which suggests that physical abuse is the most harmful, elders report that psychological abuse (verbal reprimands and the use of profanity) and psychological neglect (ignoring the elder) have more lasting negative effects than physical abuse.

Prevalence Rates

Researchers estimate that there are close to 6 million reported cases of elder abuse yearly, which represents about 9.5 percent of the elder population. More than half of the victims are abused in domestic settings by adult children or a spouse. While abuse statistics are not as high as those for other problems facing the elderly, such as poverty and illness, they represent millions of elders who need to live in safer environments. Most reported rates are believed to be a gross underestimate of the real number of abused elderly. Obtaining precise prevalence rates is difficult because there are problems with accurate reporting; one is consistently defining elder abuse. Many reports are drawn from agencies in mostly suburban areas, which omit families who are not receiving services, the poor elderly, and elders of color.
Additionally, incidents of abuse may go underreported when elders are reluctant to report the abuse out of fear of retaliation or a desire to protect the abusers (who in many cases are family members). Older adults with disabilities may also lack the cognitive or physical ability to self-report incidents of abuse.

Profiles of Victims and Abusers

Researchers report that 67.3 percent of victims of elder abuse are female, but there are no significant differences in the percentage of male and female

abusers. Frail status of the elder is significantly associated with abuse, and older victims are more vulnerable to abuse. Perpetrators are most often adult children, other family members, and spouses. Adult children and formal caregivers are associated with financial abuse, whereas spouses (particularly husbands) are more likely to be physically abusive. Elder spouses who abuse their mates are not necessarily caregivers. For example, abused wives may only be willing to report more serious maltreatment, particularly if they are used to abuse from their husbands. Spousal abuse among the elderly is influenced by the same risk factors as abuse among younger couples. These factors include substance abuse, emotional problems, and relational conflict. The lack of research focused on older married couples makes it difficult to determine whether spousal abuse in older couples is a long-standing problem or a recent occurrence arising from the difficulties associated with the aging process. Researchers agree that elders who live alone are less likely to be abused than those who live with others.

Caregiving and marriage are not the only types of relationships that elder victims may have with their abusers. Although frail and disabled elders may be dependent on other family or friends for support and companionship, perpetrators of elder abuse are often dependent on their victims for economic and housing assistance. These abusers often have a history of unemployment, substance abuse, or mental illness, and are likely to be living in the victim’s home. Having a clear sense of victim and abuser characteristics would be useful for making decisions regarding appropriate intervention techniques. If victims are frail and dependent, services could be offered to the caregiver to help reduce the burden of caring for the elder.
However, if the abuser is dependent on the victim for basic needs such as shelter, then methods for ending the abuse may involve different kinds of support, such as employment or housing for the abuser.

Intervention Issues

Intervening in elder abuse is a difficult task. One of the most widely offered interventions for elder abuse is institutionalized care for the elder. This solution is not always helpful to the relationship between the victim and abuser (if the abuser is related to the victim), and is generally rejected by the elderly. Health care providers and professionals are mandated to report




suspected incidents of elder abuse or neglect in the majority of U.S. states. However, some practitioners may lack the training to assess and address problems associated with elder abuse. Signs can be misinterpreted as dementia or frailty, an impression that may be reinforced by caregiver explanations. Abused victims sometimes receive the same services available to frail elders, regardless of whether the victim is frail or not. Another barrier to intervening is that elders may feel that abuse by their children reflects their own poor parenting skills or failure at child rearing, or they may feel ashamed of what abuse by a family member may imply. This may cause elders to feel a need to maintain family privacy, particularly from their broader support network. It is important that practitioners recognize the warning signs and intervene in ways that are not intrusive and that allow elders to be involved in decisions about their care. Additionally, community education about elder abuse should be provided to informal and formal networks such as neighborhood groups, religious institutions, senior centers, mental health centers, and health care centers. These networks, if properly informed, can be critical in preventing and intervening in elder abuse.

Edna Brown
Helena Danielle Green
University of Connecticut

See Also: Caregiver Burden; Caring for the Elderly; Domestic Violence; National Center on Elder Abuse; Nursing Homes.

Further Readings

Anetzberger, Georgia J. “An Update on the Nature and Scope of Elder Abuse.” Generations, v.36/3 (2012).

Cooper, Claudia, Amber Selwood, and Gill Livingston. “The Prevalence of Elder Abuse and Neglect: A Systematic Review.” Age and Ageing, v.37 (2008).

Edwards, Douglas. “Caring for Today’s Elderly—And Preparing for Tomorrow’s.” Behavioral Healthcare, v.26/2 (2006).

Singleton, Judy. “Women Caring for Elderly Family Members: Shaping Non-Traditional Work and Family Initiatives.” Journal of Comparative Family Studies, v.31/3 (2000).

Wolf, Rosalie S.
“The Nature and Scope of Elder Abuse: Changes in Perspective and Response Over the Past 25 Years.” Generations, v.24/2 (2000).


E-Mail

The history of e-mail is difficult to trace. In its current incarnation, it dates to about 1993 and the rise of the Internet. Prior to that, however, many intranet and local computer networks used a type of messaging that was a precursor to e-mail. This early form required both sender and receiver to be online at the same time, similar to a chat feature today. In this incarnation, most e-mail was work-related and consisted of only short messages to coworkers using the same computer or network. Nowadays, e-mail works asynchronously, with servers storing messages until the recipient retrieves them. It is just as likely to be used for work or recreation, between people who live in the same house or halfway around the world. In 2014, experts estimated that 2.5 billion people worldwide had access to and used e-mail.

The term electronic mail has been used for decades for anything from fax transmissions to instant messaging. Early e-mail was text-based and was an important contributor to the creation of the Internet. Text-based communications gave rise to the development of standard protocols, including File Transfer Protocol (FTP), developed in 1971, and Simple Mail Transfer Protocol (SMTP), published in 1982. The first e-mail message was sent between two computers sitting side by side, and is generally attributed to Ray Tomlinson, a computer engineer working for a company hired by the Department of Defense to build the ARPANET, the precursor to the Internet.

An e-mail message consists of three parts: the envelope, the message header, and the message body. The message header carries the to/from information for the receiver and sender. SMTP helps servers read the information in the envelope to ensure proper delivery of the message. The message body is the content added by the sender.
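The three-part structure described above can be sketched with Python's standard e-mail library; the addresses and server name below are placeholders, not part of any real system.

```python
# Sketch of the header/body distinction using Python's standard
# library. All addresses and the server name are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"    # header: what the recipient's client displays
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello"
msg.set_content("This text is the message body.")  # body: content added by the sender

# The envelope addresses are handed to the SMTP server separately from
# the headers; SMTP routes on the envelope, not on the "To" header.
# import smtplib
# with smtplib.SMTP("mail.example.com") as server:   # placeholder host
#     server.send_message(msg, from_addr="alice@example.com",
#                         to_addrs=["bob@example.com"])

print(msg["Subject"])                 # Hello
print(msg.get_content().strip())      # This text is the message body.
```

Because the envelope is passed separately from the headers, the address a server actually delivers to can differ from the address a recipient sees, which is one reason forged "From" lines are possible.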
History of E-Mail

Electronic communication dates back to the refinement of the electric telegraph in the 1840s, which allowed people to communicate over vast distances virtually instantaneously through Morse code. Like the electric telegraph, and subsequent developments such as the telephone and radio transmissions, e-mail cannot be attributed to one inventor. Many consider it a natural outgrowth of networked


technology dating back to the early days of time-shared computers in the 1960s. Time-sharing saved space and money by allowing multiple users to access a computer without each having to purchase one. They could run multiple programs at one time, and text messages were used to communicate between computer users. In the early 1970s, the Tenex operating system was developed, which allowed for local e-mail messaging. During this time, the "@" symbol began to be used to address e-mail to specific recipients. Other electronic messaging protocols developed at that time included MAIL and MLFL, which provided standard network capabilities to FTP by creating a separate message for each recipient. Later, the SMTP protocol replaced FTP with greater functionality, allowing a message to be sent to a domain first, and then onward to a specific recipient. Other protocols followed, including MSG, known as the first modern e-mail program, which included the capabilities to forward and reply to messages, essentially allowing for conversations rather than individual messages.

The first electronic mail message was sent in 1971, and in 1976, the first electronic message sent by a head of state was sent by Queen Elizabeth II of the United Kingdom. The first e-mail from space was sent from the Space Shuttle Atlantis in 1991. Two e-mail milestones took place in 1982: the term e-mail was first used, and the smiley "emoticon" was invented. In the 1980s and 1990s, other standards were developed, and changes took place so rapidly that it is difficult to trace their development and the people credited with them. In the late 1980s, commercial e-mail services such as Eudora, CompuServe, MCI Mail, and Pegasus catered to early adopters of home computers. As personal computers became mainstream in the early 1990s, America Online (AOL) and Microsoft Outlook became popular.
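SMTP's domain-first delivery can be illustrated with a short sketch; the function name and addresses below are illustrative, and the commented dialogue paraphrases the standard SMTP command sequence rather than any specific server's output.

```python
# Illustrative sketch of SMTP's domain-first routing; the address and
# host names here are historical/illustrative, not live systems.
def split_address(addr: str):
    """Return (local_part, domain); SMTP delivers to the domain first,
    which then routes the message to the specific recipient."""
    local, _, domain = addr.rpartition("@")
    return local, domain

print(split_address("ray@bbn-tenexa"))  # ('ray', 'bbn-tenexa')

# A sending server then speaks roughly this dialogue (per the SMTP
# standard) to the mail server responsible for that domain:
#   MAIL FROM:<sender@example.org>    -- envelope sender
#   RCPT TO:<ray@bbn-tenexa>          -- envelope recipient
#   DATA                              -- headers and body follow
#   .                                 -- lone dot ends the message
```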
The term spam was added to the dictionary in 1998 to describe unsolicited bulk e-mails that clogged users' inboxes with questionable advertisements or business opportunities. Other milestones include the first national law setting standards for commercial e-mail, signed by President George W. Bush in 2003, followed by the Federal Trade Commission's anti-spam rules in 2004. In 2007, anti-phishing security standards were established by the Internet Engineering Task Force

(IETF). The main e-mail providers as of 2014 were AOL, Windows Live Hotmail, Yahoo! Mail, and Gmail. Hotmail, the first Web-based e-mail service, was founded in 1996, purchased by Microsoft in 1997, and replaced in 2013 by Microsoft Outlook. AOL, formerly America Online, launched its pay-based service in 1992 and began offering free e-mail accounts in 2006. Yahoo! Mail was launched in 1997. Gmail was announced in 2004 and was available by invitation only until 2007. In 1989, The World, the first Internet service provider (ISP), was launched, followed by many more ISPs providing access to the Internet.

With the greater accessibility of the Internet beginning in 1991, e-mail became more than a messaging tool; it became an effortless way to communicate with coworkers, stay connected with family and friends, and conduct business. E-mail was an asynchronous tool that moved communication to a new level. However, it also brought with it concerns, such as safety. Businesses began to look at security for their systems, and families saw the need to pay attention to children's activities in a new way.

Obstacles to E-Mail Use

The digital divide is an obstacle that prevents some people from using e-mail and the Internet. It refers to issues of equity that may limit a person's ability to access and effectively use computers and Internet technology for everyday purposes, including e-mail. Access is a person's ability to procure the tools needed to go online. Those in remote parts of the country, or who are poor, may not have Internet access in their homes. Many organizations are working to help these people attain access by providing community centers, training, and free Internet service. This has helped many families develop an online presence; the rise of smartphones for Internet and e-mail use is also helping to close the digital divide.
However, even when access is not an issue, many people, especially older individuals, lack the training to effectively use online tools that are common to both children in schools and adults in business settings. Consequently, the digital divide continues to widen for some people, even as more tools and training become available to them.

Protecting e-mail and Internet users from spammers, spyware, malware, and hackers is a huge undertaking. Firewalls that control incoming and



outgoing Internet traffic have been used to protect government agencies and businesses for many years. At the turn of the century, it became apparent that they were needed to protect personal computers as well. Internet security software is available and commonly used to protect the data on network servers and the transfer of information via e-mail.

The Internet holds a wealth of information and learning opportunities, as well as many dangers from which children need protection. In the mid-1990s, the practice known as phishing emerged, in which criminals attempted to gather personal information via scam e-mail messages. The term is related to "fishing" in that it entails using bait to try to gather information for personal gain. Other safety issues involve cyberstalkers, bullies, predators, inappropriate content or messages, and con artists. With the popularity of chat rooms, cyber gaming, and social networking, Internet and e-mail users became even more at risk. In 2000, Congress enacted the Children's Internet Protection Act (CIPA) in response to some of these dangers. CIPA targets libraries and schools, requiring them to adopt standards for safe use of the Internet.

The open Internet, however, is not the only source of danger. E-mail should also be monitored by parents and protected in schools. Programs such as AOL's Kids Online (KOL) and ZooBuh allow parents to monitor what comes into and goes out of children's e-mail accounts. By selecting filters, parents can check e-mails before the children see them or before they are sent. This ensures that children can stay in touch with family and friends, while providing an added layer of protection and parental oversight. Parental controls are not always enough, however. Children need to learn at a young age about the dangers of the Internet and e-mail, and how to protect themselves from predators. Numerous organizations help families to protect themselves from some of these dangers.
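As a toy illustration only (not the actual mechanism of KOL, ZooBuh, or any real product), a keyword filter of the kind such services apply might look like this:

```python
# Toy illustration only -- not the actual mechanism of KOL, ZooBuh,
# or any real product. The blocked-word list is invented.
BLOCKED_WORDS = {"lottery", "prize", "stranger"}

def needs_review(subject: str, body: str) -> bool:
    """Flag a message for parental review if it contains a filter term."""
    text = f"{subject} {body}".lower()
    return any(word in text for word in BLOCKED_WORDS)

print(needs_review("You won a prize!", "Claim your lottery winnings"))  # True
print(needs_review("Homework", "See you at school tomorrow"))           # False
```

Real services layer such filtering with parental approval queues; the point here is only that the filter inspects a message before it reaches the child's inbox.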
The Internet Education Foundation is a nonprofit organization committed to protecting the public by providing educational tools and information for online safety. Similar organizations include Webwise Kids, i-Safe, and Edline.

Mobile Global Society

E-mail use today is closely followed in popularity by social networking. Sixty-one percent of Internet users access e-mail on a daily basis. However, use of e-mail among young people ages 12 to 17 has


dropped. They tend to use it for formal interactions in school, but for interacting with friends, instant or text messaging, Twitter, and social media sites such as Facebook are more popular. In fact, 41 percent of teens say that they never use e-mail to communicate with their friends. The biggest increase in Internet use in recent years has been among older adults, who are using e-mail and social media sites to stay in touch with family.

Another recent phenomenon is the use of mobile technologies. Up to 65 percent of smartphone users access e-mail via their mobile devices. This serves the age groups that continue to use e-mail, but younger people mainly use their smartphones for texting and entertainment. Contributing to the global use of technology is the increased integration of communication technologies in the classroom. The push for K–12 education to prepare students for future success includes teaching them about Internet research, Web conferencing, and global collaboration. However, many schools have experienced drastic reductions in funding, which has made their e-mail and Web access sporadic, slow, or otherwise insufficient. Innovative companies have stepped in to provide free, or nearly free, resources to educational groups, offering suites of tools that provide both the means to learn these skills and a safe environment in which to practice them. This relates back to the issue of the digital divide.

E-mail has served many functions in families throughout the past two decades. Although e-mail was used from the beginning as a casual means of communication, it has also become a cost-effective means of conducting business. Many businesses with a Web presence use e-mail as a primary contact mode; payment reminders and receipts are e-mailed by providers to save printing and mailing costs; and teachers and schools send information to parents and students via e-mail.
Even with the diminished use of e-mail among some age groups, it remains a viable and effective means of communication for families in the early 21st century.

Suzanne K. Becking
Fort Hays State University

See Also: Digital Divide; Internet; Personal Computers; Personal Computers in the Home; Primary Documents 2009; Technology.


Further Readings
Aamoth, Doug. "The Man Who Invented Email." Time (November 15, 2011). http://techland.time.com/2011/11/15/the-man-who-invented-email (Accessed January 2014).
Partridge, Craig. "The Technical Development of Internet Email." IEEE Annals of the History of Computing (April–June 2008).
Zickuhr, Kathryn and Lee Rainie. "7 Things to Know About Offline Americans." Pew Research Center. http://www.pewresearch.org/fact-tank/2013/11/29/7-things-to-know-about-offline-americans (Accessed January 2014).

Emerging Adulthood

The years between adolescence and mature adulthood have come to be labeled "emerging adulthood," a term coined by psychologist Jeffrey Arnett. Emerging adulthood is a stage in the life course that has resulted in recent decades from a variety of sociohistorical changes. It is characterized by a high degree of instability and self-exploration within individuals, as well as heterogeneity of experiences across a group of individuals.

The Theory of Emerging Adulthood

Social scientists have long studied the transition to adulthood, typically defined as the period in which one finishes one's education, begins full-time employment, achieves financial stability, marries, and has children. Beginning in the 1970s, however, it became apparent that the amount of time young people were taking to complete this transition was lengthening. In response to this observation, Arnett proposed the term emerging adulthood to recognize the years between ages 18 and 29—the upper boundary varies in both the research and the lives of individuals—as a discrete life stage with distinct developmental features.

Arnett proposed five features of emerging adulthood, which have shaped the study of this life stage. First, emerging adulthood is an age of identity exploration, when young people are actively working to develop their unique sense of self. Second, it is an age of instability, when friendships, romantic and sexual relationships, relationships with parents

and other family members, residences, and work are all in a state of flux. Third, it is a self-focused age, when young people have relatively few obligations and commitments to others. Fourth, it is an age of feeling in-between, no longer a child or an adolescent, but also not yet an adult. Fifth, it is an age of possibilities, when young adults tend to be optimistic about what the future holds for them. Together, these features lead to significant variation in the paths that young people take through their 20s.

Sociohistorical Causes of Emerging Adulthood

While delayed adulthood is nothing new, the concept of emerging adulthood represents a break from the transition to adulthood as it occurred in the decades following World War II, when young adults typically began stable careers, married, and had children quite young by today's standards, and there was one normative order to these life events. This early transition to adulthood was supported by a strong economy and government programs, as well as a cultural emphasis on home and hearth; even young adults from the working class, if they were white, were often able to afford to marry, have children, and move to the suburbs. Under these circumstances, most young adults attained the typical markers of adulthood by their mid-20s.

However, beginning in the latter third of the 20th century, a variety of interrelated sociohistorical factors came together in ways that lengthened the path to full adulthood. One primary change involved transformations in the U.S. and global economies that made extended education increasingly necessary and delayed career entry. A second change was the delay in marriage, which resulted in large part from these educational and economic shifts, so that by 2010, the median age of marriage was approximately 28 for men and 27 for women.
A third change had to do with attitudes about premarital sexuality and the increasing availability of reliable birth control, both of which allowed emerging adults to have active sexual lives without marriage or risking pregnancy. Finally, a postmodern worldview emphasized both expressive and utilitarian individualism, which encouraged young people to focus on their identity and development of their life path. Together, these structural, cultural, and demographic changes created the conditions for what we now call emerging



adulthood, positioned between adolescence and adulthood.

Positive and Negative Aspects of Emerging Adulthood

The features of emerging adulthood—an extended period of identity formation, instability, self-focus, an in-between feeling, and hopefulness about future possibilities—have many positive attributes. In general, emerging adults appear to be optimistic about their futures. Because of increases in life expectancy, a life stage during which young people can focus on themselves while exploring their educational, vocational, and relationship possibilities before settling down into a career and family life may be a good thing. While most emerging adults ultimately plan to attain the markers of full adulthood, they are not in a hurry to do so. Both young men and women describe their 20s as a time of independence and spontaneity to be enjoyed prior to making the commitments associated with adulthood.

There are concerns, however, about the degree to which emerging adults are immersed in networks of peers, many with little day-to-day adult guidance. Emerging adults may feel adrift and uncertain, even while maintaining optimism about the future. Exploration of friendships and sexual relationships can be an important part of self-development, but negative experiences can result in mental health and interpersonal problems. Career exploration may be driven by a desire for a satisfying vocation, but it may also be driven by a lack of good job opportunities, a problem that is sometimes cushioned by parental resources that help pay for educational and living expenses. Becoming an adult in a very individualistic culture may allow the personal freedom to develop a unique identity, belief system, and life path, but may also provide little moral guidance or sense of connectedness to broader society. There is also concern as to whether the period of emerging adulthood truly prepares young people for full adulthood, with the obligations and commitments that it entails.
Finally, there are young people in this stage of life with little education, a poor employment outlook, and uncertain marital prospects, who may be unlikely to experience the positive aspects of emerging adulthood or a smooth transition to a full adulthood. Emerging adulthood affects not only young adults, but also their families of origin. Emerging


adulthood can also be a time of parent–child tension. As young adults drift through their 20s, their parents may retain a sense of responsibility for them, about which they may have mixed feelings. Parents and their emerging adult children may not agree on the speed at which the transition to adulthood is occurring, and they may disagree on the extent to which the parents should remain involved in the young adult's decisions. However, there is also the potential for an extended period of time during which parents and children can develop fuller, more mature relationships prior to the young adult reaching full adulthood.

There is little question that the typical transition to full adulthood has lengthened since the mid-20th century. The concept of emerging adulthood can help one better understand the dynamics of this life course stage, and the sociohistorical changes of the past few generations.

Brenda Wilhelm
Colorado Mesa University

See Also: Adolescence; Demographic Changes: Age at First Marriage; Education, College/University; Hooking Up; Life Course Perspective; Social History of American Families: 1981 to 2000; Social History of American Families: 2001 to the Present.

Further Readings
Arnett, Jeffrey Jensen. Emerging Adulthood: The Winding Road From the Late Teens Through the Twenties. New York: Oxford University Press, 2004.
Arnett, Jeffrey Jensen and Jennifer Lynn Tanner, eds. Emerging Adults in America: Coming of Age in the 21st Century. Washington, DC: American Psychological Association, 2006.
Smith, Christian, Kari Christoffersen, Hilary Davidson, and Patricia Snell Herzog. Lost in Transition: The Dark Side of Emerging Adulthood. New York: Oxford University Press, 2011.

Empty Nest Syndrome

Empty nest syndrome is the life stage experienced by parents when their children leave home to live on their own.


The departure of children from the family home is compared to baby birds leaving their parents' nest. While familial bonds often remain strong throughout this transition, a sense of parental loss and grief is common. Many parents experience loneliness, but because the circumstances are commonplace, anticipated, and celebrated, the parents' emotional difficulties often go unrecognized. While empty nest syndrome is most commonly ascribed to parents whose children are leaving home, it can also affect guardians, grandparents, and other family members living in the home.

Children move away from the family home for a variety of reasons, including going away to school, starting a job, getting married, buying a house, renting an apartment, or joining the military. When the children have all left, parents often feel a profound change in the home and in relationship dynamics, which may not be entirely welcome, even if they have looked forward to this stage for many years. In many cases, the parents engage in self-reflection and may face challenges in their marriage. Some parents have remained together for the sake of the children and no longer wish to be married. Conversely, parents may find relief in relinquishing their active parenting roles and reviving their marriages.

History

Throughout much of history, especially in agrarian societies such as the United States prior to the late 19th century, extended families typically lived together or close by. This helped families remain self-sufficient and provided a support network for raising children and caring for elders and others who needed assistance. Children often did not receive much formal schooling because they were needed to help with farming and other household chores. Generally, children moved out only if they married and were ready to establish a household.
The status quo began to change when compulsory and free education was established in all states between 1852 and 1913, and the United States moved from an agrarian nation to an industrial one. Many people found that a high school education prepared a young adult for professional and financial success. Over time, however, the need arose for young adults to attend technical school or college to remain competitive in the marketplace or to

train for new types of careers fostered by growing technology. These varied elements, in combination with family living situations ranging from nuclear to extended, shaped the impact of a child's departure from the family home. In the early 21st century, however, difficult economic circumstances have resulted in many highly educated children returning to the family home after a brief foray into independence. This "boomerang generation" has turned many parents into intermittent empty nesters.

Impact on Parents

Whether the departure of children is welcomed or dreaded, it is bound to change the social dynamics within the home. On the positive side, parents will have more time to themselves, either alone or with each other. They may decide to work on their marriage or establish new life goals. On the negative side, some couples may find that their relationship is not as strong or as gratifying as it once was and decide to separate or divorce. Likewise, some parents may have been so enmeshed in their children's lives that they do not enjoy having so much time alone and have no idea what to do with themselves. This is a particular danger for those who were stay-at-home parents. Changed dynamics and lost rituals, such as eating meals together and enjoying family celebrations and traditions, may be deeply missed.

It is not uncommon for women to be affected by these midlife changes to a greater degree than men. Some women may experience menopause at the same time, which brings a host of hormonal changes that can affect mood and emotions. The natural feelings of loss or mourning brought about by empty nest syndrome can be amplified by the symptoms of menopause. Mothers, once among the most important people in their children's lives, may feel left behind as their children experience new adventures, relationships, and milestones that do not involve them. Additionally, women are more often than men the primary caregivers of ailing parents.
As parents age and face a range of physical, cognitive, financial, and other challenges, caregiving most often befalls the daughter or daughter-in-law, who may herself be experiencing empty nest syndrome. These women, as members of the "sandwich generation," may find themselves torn between meeting the needs of their




children, and their parents. A woman may be looking forward to independence from active child rearing, only to find those duties replaced by a myriad of caregiving tasks related to her parents' aging and health. The void left by children leaving the nest is filled by a reverse dynamic: dependent parents possibly moving in. If their aging parents are healthy and their grown children are independent, empty nesters may enjoy reinventing their lives.

Some parents cherish their roles as caregivers and encourage their children to return home even as their children's independence takes hold. Such caregivers feel fulfilled in this role and do not seek other opportunities for self-expression or social interaction; they want a return to what used to be. In some cases, the adult children resent their parents' intrusion into their lives, which may cause conflict or lead them to avoid their parents. Most parents, however, welcome long-awaited privacy and feel great relief at having time together and separately. Children's bedrooms may be turned into home offices, gyms, sewing rooms, or guest bedrooms. On the other hand, many parents find themselves still tied to parental obligations, helping their offspring pay off student loans, finance homes, or pay for weddings. This may come on top of the couple's struggles to meet their own needs and plan for retirement.

In addition to redefining their relationship, parents may face other life changes. One parent may decide to return to school, change careers, or retire early. The other may wish to travel extensively or spend time with grandchildren. Couples who have not maintained good communication throughout their relationship may discover that their priorities diverge.

Kim Lorber
Ramapo College of New Jersey
Adele Weiner
Metropolitan College of New York

See Also: Boomerang Generation; Caregiver Burden; Sandwich Generation.

Further Readings
Clay, R. A.
“An Empty Nest Can Promote Freedom, Improved Relationships.” Monitor on Psychology,


v.34/4 (2003). http://www.apa.org/monitor/apr03/pluses.aspx (Accessed January 2014).
Chen, Dianbing, Xinxiao Yang, and Steve Dale Agard. "The Empty Nest Syndrome: Ways to Enhance Quality of Life." Educational Gerontology, v.38/8 (2012).
Parker-Pope, T. "Your Nest Is Empty? Enjoy Each Other." New York Times (January 19, 2009). http://www.nytimes.com/2009/01/20/health/20well.html?_r=1& (Accessed January 2014).

Engagement Parties

After a couple is betrothed, a number of celebratory parties may be thrown in honor of the bride and groom. The engagement party is the first of these celebrations. Generally hosted by the mother and father of the bride, the engagement party is a way for the couple to announce their betrothal and future wedding plans. It also serves as a means for the couple's friends and families to meet and get to know one another.

History of Engagement Parties

The original engagement parties looked nothing like the parties held in contemporary society. In ancient Greece, the families of the bride and groom would gather, without the bride, to arrange the marriage and discuss the legal and commercial aspects of the union. The dowry would be discussed, and the length of the betrothal decided upon.

Engagement parties more closely resembling contemporary celebrations started out as something called a "flouncing." This was a formal betrothal announcement that was legally binding on both the bride and the groom. Should either party break off the engagement, that individual was obligated to forfeit half of his or her property to the other. Later, the engagement party evolved into something that was not legally binding, but that served as a means for a couple to formally announce their engagement to friends and family. Prior to the early 1900s, the custom was for the couple to gather friends and family together and announce the betrothal. By 1920, however, the etiquette followed by many couples was to place an engagement


announcement in a local newspaper, and the party would simply be a celebration where everyone already knew the big news. While the bride's family still often hosts contemporary engagement parties, many couples choose to throw the parties themselves, opting for venues such as restaurants or hotel ballrooms. Other couples celebrate with intimate friends and close family at home.

Engagement Party Rituals

Engagement parties typically follow on the heels of the proposal. A couple planning an engagement party should not delay in hosting it, because a delay may interfere with wedding planning and cause guests to confuse the gathering with the gift-giving bridal shower. Engagement parties should be held within a few months of the engagement and, in the case of longer engagements, no later than nine months before the wedding. Most engagement parties are hosted in the evening, either as a formal dinner or as a cocktail party. It is considered poor etiquette for a couple to invite someone to the engagement party whom they do not intend to invite to the wedding. The one exception to this rule is if the couple will be marrying in a foreign destination where many guests would not be expected to attend the nuptials.

Gifts are not expected at engagement parties. In fact, many brides and grooms instruct friends and family not to give gifts on this occasion. Decorations at the engagement party are generally kept informal, with the idea that the couple does not want to outshine the wedding. Toasts are a common feature of the engagement party. The custom is that the bride's father first toasts his future son-in-law and the bride-to-be. Then, the groom toasts the bride-to-be and her parents. Following this, other guests are welcome to stand up and toast the couple and the couple's parents in turn.
Engagement Celebrations Across Cultures

In many Middle Eastern cultures, the engagement party is the first of five celebrations for the wedding couple. In this tradition, the party lasts late into the night, with special foods and dancing. The bride often changes her dress up to five times during the celebration.

Modern Greek Orthodox culture also celebrates the engagement for an extended time. This is

because it is important in the culture for the families to have time to get acquainted with one another. Very traditional families will arrange a dowry for the bride of linens and household goods. Some families will present the couple with a furnished home. At the engagement party, a priest blesses the engagement rings.

In Chinese culture, rather than hosting a large party, the family of the groom presents the bride's family with cakes or other gifts. Once the bride's family accepts such gifts, the engagement may not be broken.

Indian culture features an engagement ceremony prior to the wedding. During this ceremony, the families of the bride and groom present the couple with gifts, clothes, and jewelry. The couple is also expected to exchange rings, which are then blessed by the elders of the bride's and groom's families. Next, the families sit down to a dinner party with friends to celebrate the engagement.

In Judaism, an engagement period before marriage is mandated. The couple writes a contract, the tna'im, consisting of promises that they make to one another. This includes the date, the agreement on who will pay the wedding expenses, and the agreement that the bride and groom will set up a household together. To set this contract into action, a kinyan sudar takes place. This involves the exchange of a piece of fabric in return for the abstract notion of commitment. Once the contract is completed, it is signed at the engagement party. A rabbi must be included in the party, and the groom is expected to deliver some thoughts from the Torah to the guests. Following the kinyan, the couple breaks a wrapped plate.

In the United States, people from these and other cultures often adhere to such traditions, especially if they are immigrants or the children of immigrants.

Ronda L. Bowen
Independent Scholar

See Also: Baby Showers; Courtship; Dating; Engagement Rings; Primary Documents 1916; Wedding Showers; Weddings.
Further Readings
Heaton, Vernon. Wedding Etiquette Properly Explained. Rev. ed. Kingswood, UK: Elliot Right Way, 1986.




Martin, Jacobina and Judith Martin. Miss Manners' Guide to a Surprisingly Dignified Wedding. New York: Norton, 2010.
Martha Stewart Weddings. "Engagement Parties." 2002. http://www.marthastewartweddings.com/226700/engagement-parties (Accessed January 2014).

Engagement Rings
In the contemporary United States, an engagement ring is traditionally worn on the fourth finger of a woman's left hand and indicates her betrothal. A woman wears the ring alone until the day of her wedding, when she typically receives an additional wedding ring. The two rings are then customarily worn as a set.

An engagement ring usually contains one or more gemstones, the most popular being a diamond, set on a metal band. Yellow or white gold is often preferred, but platinum, silver, stainless steel, and titanium are also frequently selected. Engagement rings can be crafted in one of many styles and settings. The Tiffany setting became widely fashionable after its creation by Tiffany & Company in 1886. This is a six-prong setting with a single diamond, known as a solitaire, on a plain band, which exposes a large portion of the gemstone to view.

Another popular style is the trinity ring. The idea of the trinity ring has been around for a long time, and many variations on this style exist, including the trinity knot, which typically involves some form of plait or love knot. Cartier introduced its Trinity Ring (Trinity de Cartier) in 1924. This ring consists of three interlocking bands; a rose (or pink) gold band symbolizes love, a white gold band represents friendship, and a yellow gold band stands for fidelity.

Included among the many engagement rings of the rich and famous are "celebrity" jewels, jewels with history and notoriety. For example, Diana, Princess of Wales, owned a distinctive oval sapphire surrounded by diamonds. Her son, Prince William, presented his late mother's engagement ring to his bride-to-be, Kate Middleton (now Catherine, Duchess of Cambridge), in 2010. The jewel reinforced a trend in colored gemstone engagement rings. Other popular choices include amethysts, aquamarines, emeralds, garnets, rubies, and topaz.

The tradition of the engagement ring can be traced back to ancient Rome. Wearing the ring on the fourth finger of the left hand stemmed from a belief that a vein within this finger (the vena amoris) leads to the heart.

A couple may select the engagement ring together, or a groom may select it himself and present it to his girlfriend during a marriage proposal, sometimes as a surprise. Marriage proposals may happen in any number of creative ways, but traditionally the man bends down on one knee, presents the ring to his intended bride, and asks her to marry him. If she accepts the proposal, the couple is then engaged, and the ring is worn as a symbol of that fact. Less commonly, a woman proposes to a man, but seldom in the same manner. Engagement rings for men appeared on the market in the 1920s, but the tradition did not take hold. In recent times, the jewelry industry has sought to revive this market.

Rather than purchase a new ring, a prospective groom may have the option of proposing with a family heirloom ring. Jewelry that is passed down can tie families and generations together and help to preserve memories. Rings from older generations can be very distinct and possess unique beauty. Such treasured pieces represent family history and can evoke sentiment and emotion among family members. Families may possess a ring long enough for it to be considered antique, which can increase its monetary value.

If a couple breaks off their engagement, the question of ring ownership may arise. Etiquette in the United States does not provide a universal remedy in this predicament. While many believe that a ring should always be returned when a pending marriage does not occur, others believe that once given and accepted, the jewelry remains a gift. Still others believe that the return of the ring depends on which partner broke the engagement. Even further complications and disagreements can develop when the ring is a family heirloom. If the matter ends up in the legal system, the outcome depends upon the laws of the state in which the couple resides, and such laws vary.

The giving of a ring to seal an engagement became commonplace by the late 1800s, when new diamond mines were discovered in South Africa. This made the diamond supply much more plentiful, and prices dropped. But it was not until the 1930s, after a major marketing effort by the diamond industry, that this gemstone became common for bridal jewelry in the United States. Popular designs apart from the Tiffany setting included the three-stone setting, a half-loop of diamonds, and a diamond set directly into the band (sometimes called the "gypsy" setting). Today, any number of designs and settings are available, and the giving of an engagement ring remains one of the most endearing acts of love and commitment. Eternal love is indicated by the symbolism of the ring, a never-ending circle.

Glenda Jones
Sam Houston State University

See Also: Civil Unions; Courtship; Covenant Marriage; Engagement Parties; Promise Rings; Weddings.

Further Readings
Bare, Kelly. "The History of Engagement Rings." Reader's Digest (2013). http://www.rd.com/advice/relationships/the-history-of-engagement-rings (Accessed April 2013).
Fales, Martha Gandy. Jewelry in America, 1600–1900. Woodbridge, UK: Antique Collectors' Club, 1995.
FindLaw. "What Happens to the Engagement Ring in a Broken Engagement?" http://family.findlaw.com/marriage/what-happens-to-the-engagement-ring-in-a-broken-engagement.html (Accessed May 2013).
Lee, Jane. "Deconstructing the Tiffany Setting, The World's Most Popular Engagement Ring." Forbes (October 2, 2012). http://www.forbes.com/sites/janelee/2012/10/02/deconstructing-the-tiffany-setting-the-worlds-most-popular-engagement-ring-style (Accessed April 2013).

Equal Rights Amendment
The Equal Rights Amendment (ERA) aims to affirm that the Constitution of the United States applies equally to all citizens, regardless of their sex. It was written by early feminist and suffragist Alice Paul and introduced in 1923, on the 75th anniversary of the 1848 Seneca Falls Convention, and it remains one of the most controversial pieces of proposed American legislation. A full 90 years after it was first introduced, and despite being reintroduced in each subsequent Congress, it has failed to garner the votes necessary for ratification. Paul and her associates asserted that women needed the principle of equal rights written into the framework of government. Little did she know the resistance that she would face.

The language of the Equal Rights Amendment constitutes three sentences that are still controversial today:

• Section 1: Equality of rights under the law shall not be denied or abridged by the United States or by any state on account of sex.
• Section 2: The Congress shall have the power to enforce, by appropriate legislation, the provisions of this article.
• Section 3: This amendment shall take effect two years after the date of ratification.

Background
The first wave of feminism in the United States included those who were concerned about women's equality, specifically the right to vote. Their allies were the abolitionists, and many early feminists met while petitioning for an end to slavery. The feminist cause has always been focused on how power is constructed, and one of its main objectives has been to prevent the misuse of power. That is the meaning behind the term equality.

Activist, intellectual, and homebound mother of five young children, Elizabeth Cady Stanton organized the first women's rights convention in 1848 in the small town of Seneca Falls, New York, with Lucretia Mott, a renowned abolitionist whom she had met at an antislavery convention. As the well-educated daughter of a wealthy judge, Cady Stanton discovered the hard way that women had no legal rights in 19th-century America when her brother died and she tried to carry on as the head of the family. At the Seneca Falls Convention, the first such meeting devoted to women's rights in the United States, Cady Stanton unveiled her Declaration of Sentiments and Grievances, which called for action in light of injustices against women and began with the affirmation that "all men and women are created equal."

Shortly thereafter, the women's rights movement focused on the issue of suffrage, but women did not gain the right to vote nationally until 1920. This was 55 years after the Thirteenth Amendment abolished slavery, and 72 years after Cady Stanton's Seneca Falls speech. By 1923, Alice Paul wanted to extend women's rights beyond the right to vote through the Equal Rights Amendment. For her outspoken leadership on women's suffrage, Paul had been beaten, dragged through the streets, and jailed. She drafted the amendment in 1921, and it was first introduced to Congress in 1923. She rewrote it slightly in 1943 to reflect legislative changes, and it resumed a long, slow journey through the system. It was introduced for passage every year, but radicals, rival factions of the women's movement, and male politicians kept it from passing.

Resistance
The ERA has encountered resistance since it was first introduced.
The United States has a long and complicated history of women's rights issues, which continues into the 21st century with ongoing legislative battles regarding reproductive rights, fair pay, and paid maternity leave. Less than 100 years ago, women were denied the right to vote and were considered the property of their husbands. They could not own homes or property, had few rights regarding their children, and faced many restrictions regarding inheritance. Women were considered the "weaker" sex, less intelligent, and generally inferior. In fact, scientists of the mid-19th century likened women's brains to those of animals, and some scientists built their careers on proving women's inferiority.

During the 1970s, second-wave feminism gained strength and tackled the ERA with newfound enthusiasm, and the promise of a ratified ERA was tantalizingly close. However, the movement attracted vociferous adversaries. While momentum in Washington was building, Phyllis Schlafly, a conservative midwestern lawyer and activist, worried that the ERA would mean increased federal control over the American family. She mounted a formidable STOP ERA opposition movement to fend off what she believed the ERA would lead to: homosexual marriages, women in combat, taxpayer-funded abortions, unisex bathrooms, and elimination of benefits for divorcees and widows. Ironically, by 2013, same-sex marriages, women in combat, and unisex bathrooms had come to pass, even without a ratified ERA. Traditional households with a stay-at-home mother and working father are no longer the norm. More than ever, couples live together without marrying, and fertility advances have made family options dynamic and fluid.

At various points along the way, objections to the ERA were vague or made without knowledge of what the amendment actually entailed. By the late 20th century, most people agreed that women should have all of the same legal rights as men. For example, President Barack Obama moved the women's agenda forward by signing the Lilly Ledbetter Fair Pay Act into law in 2009. But 50 years after the Equal Pay Act was signed into law, women are still paid, on average, only 77 cents for every dollar that a man earns. While this is technically not legal, it does demonstrate how institutionalized bias against women has persisted in America.
Tradition is still at play, from the words used to the policies promoted, which often do not favor women, mothers, or families. The United States is one of only three developed countries in the world with no nationally mandated paid maternity leave. In 2013, when women earned 40 percent of all household earnings, some conservative media personalities called the rise of female breadwinners a sign of society's downfall. The resistance against ratifying the ERA is another example of the deep cultural ambivalence that the United States feels toward women. It could at least be considered reflective of certain conservative factions maintaining sentiments about "keeping women in their place."

Recent History
In her 2009 book When Everything Changed, Gail Collins described the efforts of a stalwart group lobbying on behalf of the women's agenda. The year was 1972, and Alice Paul, then 85 years old, had been petitioning for almost 49 years for the passage of the ERA. Title IX prohibited sex discrimination in federally funded education programs, and the Equal Credit Opportunity Act and a bill to equalize benefits for married employees were successfully enacted. The ERA had been introduced in every congressional session since 1923, but contentiousness ruled: either the leadership of the women's movement felt that the language had been weakened, or the states failed to adopt the amendment.

In 1970, the House of Representatives passed the ERA. Two years later, the Senate approved it, and it then moved to the states for ratification. Many states immediately ratified the amendment; others eventually ratified it or came close. In 1982, the ERA ultimately failed when it fell three states short of the 38 required for ratification by the congressionally mandated deadline. The ERA continues to be submitted for ratification, and authors and activists continue to debate its viability and legal necessity.

Current Status
According to Tina Tchen, executive director of the White House Council on Women and Girls, President Obama is supportive of the ERA, and has stated that "history shows that countries are more prosperous and more peaceful when women are empowered." In August 2013, Carolyn Maloney reintroduced the ERA in the 113th Congress. She argued that the ERA is necessary to keep states from enacting laws that discriminate against women on the basis of sex or deny them rights.
She outlined three ways that the ERA would guarantee the equal rights of men and women:

• By clarifying the legal status of sex discrimination for the courts, making sex a suspect category subject to strict judicial scrutiny, as race, religion, and national origin currently are.
• By guaranteeing equal footing for women in the legal systems of all 50 states.
• By ensuring that government programs and federal resources equally benefit men and women.

A 2013 National Public Radio story by Yuki Noguchi cited Joan Williams, a professor at the University of California Hastings College of Law, who popularized the term maternal wall. The term refers to discrimination against mothers based on the assumption that they will be less committed to their jobs than their male or nonmaternal counterparts. This theory is also presented in contemporary maternal theory, drawn from feminist perspectives. Sharon Hays explores the subject in her essay "Why Can't a Mother Be More Like a Businessman."

In May 2013, Thomas H. Neale prepared a Congressional Research Service report for Congress in which he proposed a Fresh Start Equal Rights Amendment, suggesting that this might avoid future controversy and create an amendment that would be eligible for ratification for an indefinite time period.

Those who feel that the ERA is no longer relevant may be representative of the slow wave of social change taking place in the United States. Women have made advances in the home and the work world. However, the words used in everyday language and the laws created are important: they define who one is and how one thinks about oneself. A society's language predicated on a male premise does not necessarily apply to women. In that spirit, the ERA is still a subject of debate and petitions, and an inspiration for newly forming legislation.

New York governor Andrew Cuomo introduced the Women's Equality Act in January 2013. This law would combat pregnancy discrimination and create legal protection in the Human Rights Law by requiring employers to provide reasonable accommodations for pregnancy-related conditions.
However, while states continue to debate their individual legislation, no national agreement on women's status is forthcoming. The equalrightsamendment.org initiative is a project promoted by the Alice Paul Institute and the National Council of Women's Organizations. The initiative believes that the status quo will change much more slowly if lawmakers and judges are not mandated to include equitable consideration of female experiences in U.S. law. These issues are tied to Social Security, taxes, wages, pensions, domestic relations, insurance, and domestic violence.

Joy Rose
Museum of Motherhood
Amber Blair
Georgia Southern University

See Also: Egalitarian Marriages; Feminism; Feminist Theory; Separate Sphere Ideology.

Further Readings
Collins, Gail. When Everything Changed. New York: Little, Brown, 2009.
Crittenden, Ann. The Price of Motherhood. New York: Picador, 2010.
Mansbridge, Jane K. Why We Lost the ERA. Chicago: University of Chicago Press, 1986.
Walton, Mary. A Woman's Crusade: Alice Paul and the Battle for the Ballot. New York: Palgrave Macmillan, 2010.

Erectile Dysfunction Pills
The origin, commercialization, and proliferation of vasodilator medications like Viagra for the symptoms of erectile dysfunction (ED) have had a profound impact on many relationships since their introduction into the consumer marketplace in the late 1990s. In many Western countries, and in contemporary U.S. society in particular, the popular consensus has long held that "real" men can quickly and effortlessly engage in sex acts at will. Popular culture is replete with examples that equate premature ejaculation with adolescence or an abject lack of physiological maturity. Thus, phallic erections are equated with virility because they symbolize the ever-ready sexual potential of the male body.

At the other end of the spectrum of sexual functioning is impotency, rebranded as erectile dysfunction. ED's historical origins as impotency are important. Men branded with the label of impotency were regularly shunned by their partners, and society abhorred the inability to produce an erection so much that many states used it as a justifiable rationale for divorce. Approximately 25 percent of 65-year-old men experience ED on a long-term basis, though the frequency is much lower among men around the age of 40 or younger, ranging between 5 and 15 percent. Failure to achieve an erection less than 20 percent of the time is not unusual and rarely requires medical treatment, yet failure rates above 50 percent indicate a potential problem.

The prescription of ED medications has significantly increased since Viagra's initial development and marketing. Indeed, its popularity has resulted in what some researchers argue is the "passive medicalization" of such drugs, whereby the availability of medications like Viagra passively medicates a diagnosis of erectile dysfunction without adequately attending to the associated psychosocial etiologies of the patient's condition. Conservative estimates suggest that 10 to 20 million men suffer from ED sufficient to require medication. The rising number of prescriptions for Viagra and its analogues has been directly tied to the drugs' popularity.

Pharmaceutical History and Commercial Origins
Abbreviations like ED attempt to minimize the stigma of the term impotent, the word used to describe this problem in the recent past. It is well recognized that the medical profession does not possess a standard definition of what constitutes a "normal" erection, leaving this assessment to men on an individual basis. Although ED is a physiological disorder marked by an inability to produce or sustain an erection for sexual acts, it is also a psychosocial disorder influenced by socially constructed notions of hegemonic masculinity. The disorder is often attributed to a variety of issues, such as alcoholic inebriation, illness, or exhaustion. The medications that have evolved to treat ED generally improve blood circulation in the penis, thereby lowering the threshold for erection.
Considering the significant social stigma and insecurity associated with ED, the attraction of taking a pill becomes evident. Moreover, health insurers have become heavily involved in decisions about the drugs: some companies have elected to cover them in limited quantities (e.g., six pills per month), whereas others have refused coverage altogether. Since 2000, the 8.3 million enrollees of the Kaiser Foundation Health Plan have been denied coverage for ED drugs.

The diagnosis of ED requires a careful urological examination and assessment of other potential contributing factors. Tests that might be included are urinalysis, blood hormone studies (which measure testosterone and/or prolactin levels), and thyroid function. However, the primary treatment is the prescription of one of five FDA-approved drugs to treat ED: Cialis, Levitra, Staxyn, Stendra, and Viagra. All of these drugs work by the same mechanism: they increase the flow of blood to the penis, so that when a man is sexually stimulated, an erection is possible. Generally, the effects of a single dose can last anywhere from a few hours (Viagra) to 36 hours (Cialis); the onset of their pharmacological effects occurs around 15 minutes after oral ingestion. Although these medications come in a wide variety of doses and formulas, pills tend to be the most popular and thus the most prescribed delivery systems.

From 1998 to 2006, Pfizer Pharmaceuticals earned billions of dollars from the sale of Viagra, and Bayer and GlaxoSmithKline (Levitra) and Lilly ICOS (Cialis) have had similar record-breaking profits; Pfizer grossed over $1 billion in Viagra's first year, and in 2002, it became the fifth-most profitable corporation in the United States. Because 70 percent of the money for clinical drug trials in the United States comes from industry rather than the federal government, and because the average cost of developing a new drug is estimated to be in the hundreds of millions of dollars, companies are under tremendous pressure to see a drug's popularity translate into profitability.

Despite their tremendous popularity, ED drugs as a class remain particularly dangerous for men who are currently taking medications that lower blood pressure, or those who are recovering from heart failure or stroke.
Somewhat contradictorily, the pharmaceutical companies that developed these drugs market them toward the same consumers who are most likely to have conditions that contraindicate them: men over the age of 50. Furthermore, these medications may have side effects like headache, heartburn, flushing, back pain, or changes in vision. Another important and dangerous side effect is priapism, an erection lasting more than four hours; such a condition requires immediate attention by a physician. Some research has shown that a patient's decision to risk heart attack and stroke is more than offset in their minds by the possibility of a regular, reliable erection.

Americans spent $100 billion on prescription drugs in 1998, which constituted an 845-percent increase over five years. This increase is largely attributable to the removal of restrictions on direct-to-consumer (DTC) advertising by the FDA. In 2000, 34 million people were over the age of 65, and by 2030, it is estimated that 80 million people will be over 65 years old. With the percentage of people 85 and older doubling (paired with an average life expectancy of 75 years), the population most likely to be receptive to DTC advertising will continue to increase. Why men are willing to undertake this risk remains to be fully investigated, but some plausible reasons can be inferred from data about how central the notion of masculinity is to most men's identity in American culture.

Sexual Intimacy, ED, and Emotional Health
ED is commonly interpreted as a barrier to sexual intimacy, particularly in a Western society that has come to equate masculinity with sexual potency. Men often experience feelings of inadequacy because of their inability to reliably produce an erection at each and every sexual encounter. This often results in feelings of shame, anger, frustration, and occasionally depression. These medications are thus not only simple solutions to the physiological problems posed by ED; they also serve as preventive measures against the onset of the emotional and psychological trauma commonly associated with the condition.

The notion of "normal" functioning is central to the marketing literature and the discourses surrounding ED and its treatment. Urologists have been extensively trained in how to bring their patients "back to normal" by prescribing ED medications, yet with little to no discussion of precisely what "normal" means.
The evolving sociocultural standards of hegemonic masculinity, and the complex ways in which it is structured to valorize phallic prowess against the "normal" aging male body and its natural decrease in sexual activity, have rendered medical intervention the preferred contemporary remedy. Although ED medications have proven effective in treating erectile dysfunction, some important questions remain about the implications of defining a society as sexually dysfunctional, as well as the role that pharmaceutical companies play in promoting their solutions to such "problems" while earning substantial financial profits.

Michael Johnson, Jr.
Washington State University

See Also: Artificial Insemination; Birth Control Pills; Divorce and Separation; Domestic Masculinity; Fertility; Gender Roles; Gender Roles in Mass Media; Hooking Up; Masters and Johnson; Surrogacy; Swinging.

Further Readings
Friedman, David M. A Mind of Its Own: A Cultural History of the Penis. New York: Penguin, 2003.
Johnson, Michael, Jr. "'Just Getting Off': The Inseparability of Ejaculation and Hegemonic Masculinity." Journal of Men's Studies, v.18/3 (2010).
Loe, Meika. The Rise of Viagra: How the Little Blue Pill Changed Sex in America. New York: New York University Press, 2004.
Van Driel, Mels. Manhood: The Rise and Fall of the Penis. Edinburgh, UK: Reaktion Books, 2011.

Estate Planning
Estate planning represents one way that American families can prepare for the economic changes brought about by a loved one's death. While often focused upon the disposal and distribution of financial assets, estate planning also allows individuals to designate guardians for minor children, provide for beneficiaries who are incapacitated, and reduce or eliminate taxes owed to the government. Often perceived by the public as dealing with wills and trusts, estate planning also involves a number of contractual arrangements, such as life insurance policies and retirement accounts. As medical professionals become better able to extend the lives of the chronically ill, estate planning has also begun to consider a variety of other issues, including long-term care, powers of attorney, and do-not-resuscitate requests. Those who die without a will or other instrument have the assets remaining in their estates distributed to a series of relatives as set forth in their state's intestate succession statute.

Wealthy individuals and families have long used estate planning as a means of determining which heir or heirs inherit what money, land, or other property from the deceased. Historically, wills were drafted by lawyers for clients who wished to distribute their estates in certain ways, or to establish trusts to care for dependents. For an individual's will to be valid, he or she needed to be over the age of majority and in possession of the mental capacity to make such decisions. Wills needed to be "published," insofar as it was necessary for a document to be identified on its face as a will, and signed and dated at the end. Most jurisdictions require that wills be executed in the presence of at least two disinterested witnesses, who in turn also sign the document. Although historically there was no requirement that a will be drafted by a lawyer, most were, so that errors could be avoided.

After the person who has made a will dies, a court action is initiated so that a probate proceeding can be held. Probate involves determining the validity of a will or wills that are presented to the court, as well as appointing an executor to administer the provisions of the will and pay any taxes, debts, and administrative expenses incurred by the estate. The executor is also responsible for distributing bequests and carrying out other provisions of the will. If a will cannot be proved valid, or if its provisions are ruled invalid, inheritance of the estate takes place according to the laws of intestacy, which are state statutes that cover the descent and distribution of the assets remaining once debts and taxes have been satisfied.

Trusts are another device used for estate planning. When an individual establishes a trust, property is transferred from his or her control to that of a trustee. The trustee in turn manages the property for a third party, called a beneficiary, who has been designated by the original grantor. While the trustee has control of the legal title of the property, he or she must act in the best interests of the beneficiary.
All profits resulting from the property belong to the trust and must be used for the benefit of the beneficiary, although the trustee may be paid by the trust and have reasonable business expenses reimbursed. Trusts are created for a variety of purposes; they may be established to prevent a spendthrift heir from squandering inherited property, to benefit a charity, or to provide for former employees. Commonly used as a means of tax avoidance and asset protection, trusts permit the avoidance of certain laws. Trusts may also be used to assist a person or family in maintaining their privacy, or as a means of facilitating the ownership of property by more than one person.

Life insurance policies represent another way families may plan for the transmission of assets from one person to another. Life insurance is a contract between one person (the insured) and a company or association (the insurer). By this contract, the insurer promises to pay a designated amount of money to a third party (the beneficiary), designated by the insured, upon the death of the insured. To obtain the contract, the insured pays a premium, either regularly or as a lump sum, for a designated period. The proceeds of an insurance policy paid to a beneficiary are not taxed, because the government encourages families to provide for beneficiaries in the event of the death of one or both parents.

Life insurance is generally procured for two reasons: protection and investment. As a means of protection, life insurance is attractive to the insured because the insurer will pay beneficiaries the amount designated in the policy upon the death of the insured, even if the premiums collected have been insignificant. Life insurance procured as protection is frequently in force for only a certain period of time, and as a result is sometimes referred to as term insurance. As an investment, life insurance policies are used as a way to facilitate the growth of capital, through either a single payment or regular payments. Although part of each premium paid is used to provide term insurance, the excess is invested on behalf of the insured. As a result, the policy accumulates cash value, which grows until the policy matures. The most common form of investment insurance is called "whole life" because it is in force for the entire lifespan of the insured, so long as the premiums are paid.

Certain forms of retirement accounts can also be used for estate planning.
An Individual Retirement Account (IRA) or a defined contribution pension—a 401(k) or a 403(b)—may be used by an individual to accumulate savings for retirement. In the event of that individual's death, however, funds may remain in the account. When this happens, the remaining amounts are turned over to the beneficiary designated by the individual who opened and funded the account. Because some IRAs and all 401(k) and 403(b) accounts are funded with pretax income, beneficiaries may have to pay taxes on the amount turned over to them. Careful planning with investment advisors can minimize this tax burden.

As medical technology has advanced, so has interest in being able to specify the extent of actions to be taken in the event that one becomes incapacitated. A "living will," also sometimes referred to as an advance health care directive, is a document that permits an individual to provide written instructions regarding the steps that should be taken for his or her health in the event of incapacity. Another way to obtain similar results is for an individual to grant a power of attorney to another so that that person can make decisions on his or her behalf in the event of incapacity. More individuals are using such instruments as a result of longer life expectancies and growing media coverage of family disputes regarding the steps that should be taken to prolong an incapacitated individual's life. While court decisions regarding the "right to die" have diverged, those who have documented their wishes have been able to avoid such controversy.

Stephen T. Schroth
Knox College

See Also: Adoption Laws; Almshouses; Child Custody; Community Property; Estate Taxes; Inheritance; Inheritance Tax/Death Tax; Power of Attorney; Wealthy Families.

Further Readings
Beyer, G. W. Wills, Trusts, and Estates: Examples & Explanations, 5th ed. New York: Wolters Kluwer Law & Business, 2012.
Dukeminier, J., R. H. Sitkoff, and J. Lindgren. Wills, Trusts, and Estates, 8th ed. New York: Aspen Publishers, 2009.
Gates, William H., Sr., and Chuck Collins. Wealth and Our Commonwealth: Why America Should Tax Our Accumulated Fortunes. Boston: Beacon Press, 2003.
Shapiro, Ian and Michael J. Graetz. Death by a Thousand Cuts: The Fight Over Taxing Inherited Wealth. Princeton, NJ: Princeton University Press, 2005.

Estate Taxes

An estate tax is a tax levied on the assets of a person after his or her death, and is assessed before those
assets are transferred, sold, or divided in accordance with the decedent's will (or, in the absence of a will, the state's intestacy laws). These assets may include some life insurance benefits paid to beneficiaries. Estate tax is legally distinct from an inheritance tax, which is assessed on the assets received by any given heir. A related tax, included in the same part of the tax code (the unified gift and estate tax system), is the gift tax, which is imposed on asset transfers during the transferor's lifetime. Gift taxes exist in order to prevent deathbed gifts enacted to avoid estate taxes. In the United States, there is a federal estate tax and a federal gift tax, and there may also be estate, inheritance, and gift taxes at the state level.

Generally, assets left to a spouse are exempt from taxation under the principle of the "unlimited marital deduction." (The spouse must be a U.S. citizen, however. Citizens who reside in the United States and are married to noncitizens may establish a special trust, called the qualified domestic trust, in order to enjoy the same benefit.) Similarly, tax-free money may be left to charity. Furthermore, the federal estate and gift taxes are levied only on large amounts: anything over $5.25 million.

There are extensive guidelines for determining the value of an estate, which can be a lengthy process. The gross estate includes the assets and property interests owned by the decedent at the time of death, plus a number of additions in specific categories, most of which do not apply to most decedents. These include the proceeds of certain kinds of life insurance policies, the values of specific kinds of annuities and jointly owned properties, the value of properties in which the decedent retained a life estate or reversionary interest, and the value of certain types of property other than gifts that the decedent transferred in the three years before the decedent's date of death, if the property was not sold for full value.
Many of these additions exist because of amendments made to the law in order to close loopholes that had been exploited by wealthy individuals seeking to avoid paying the estate tax.

From the value of the gross estate, the Internal Revenue Code calls for a number of deductions. The unlimited marital deduction, for instance, means not only that assets left to a spouse are exempt from taxation, but also that they are deducted from the value of the gross estate, as are qualifying donations to charity. Funeral expenses and expenses incurred because of the administration of the estate

or because of claims made against the estate are also deducted, as are inheritance or estate taxes paid at the state level (or to the District of Columbia).

The estate tax was lower in 2014 than it has been at many times in the past, and the exclusion amount—the maximum value of an estate exempt from the tax—was much higher than it has been in the past. As recently as 2001, the exclusion amount was only $675,000, still high enough to exclude the middle class and many wealthy decedents with a surviving spouse, and the top tax rate was 55 percent. The exclusion amount was gradually raised over the 2000s—and the estate tax was completely repealed in 2010, until the following year—while the top tax rate has gradually fallen, except in 2013, when it was raised to 40 percent from the previous two years' 35 percent. The estate tax is a recurring bone of contention between the Republican and Democratic parties when working out the budget for a new fiscal year, though in its current form it affects only the wealthiest 1 percent of Americans.

Because those who are subject to the estate tax can easily afford such services, a cottage industry has developed among financial planners and tax lawyers who help individuals and families plan an estate so that as little as possible is lost to taxes. The same phenomenon in the income tax and capital gains sphere has shielded the wealthiest Americans from the effects of taxes, but because the exclusion amount is so high, such planning simply limits the revenue that the government collects through the estate tax, which then must somehow be collected from other sources.
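As a simplified illustration of the arithmetic described above—ignoring the graduated bracket structure, state-level taxes, and the many valuation rules, and using hypothetical figures—deductions first reduce the gross estate, and only the amount above the exclusion is taxed:

```python
# Simplified sketch of federal estate tax arithmetic (hypothetical figures).
# Real computations use graduated brackets and many additional rules.

EXCLUSION = 5_250_000   # federal exclusion amount discussed above
TOP_RATE = 0.40         # top rate after 2013

def estimated_estate_tax(gross_estate, marital_bequest=0.0,
                         charitable_gifts=0.0, expenses=0.0):
    """Deductions (marital, charitable, expenses) reduce the gross estate;
    only the remainder above the exclusion is taxed, here at a flat rate."""
    taxable = gross_estate - marital_bequest - charitable_gifts - expenses
    return max(0.0, taxable - EXCLUSION) * TOP_RATE

# A hypothetical $7 million estate leaving $1 million to charity:
# taxable = 6,000,000; amount over the exclusion = 750,000; tax = 300,000.
print(estimated_estate_tax(7_000_000, charitable_gifts=1_000_000))
```

Note how the unlimited marital deduction works in this sketch: an estate left entirely to a surviving citizen spouse produces no tax at all, regardless of size.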
There are a number of reasons to support the estate tax, the simplest being that if the government needs to collect a certain amount of revenue in order to remain healthy and perform its duties, and the amount it needs to collect is high enough for the tax burden on American families to be significant, then it is rational to shift some of that burden to those Americans best able to endure it. No one is better able to endure a burden than the deceased. That reasoning is not merely pithy: it is the core of the rationale for the estate tax, because without such a tax, other taxes paid by the living would need to be raised in order to generate the same level of revenue. Furthermore, there is a particularly American way of thinking about wealth and the estate tax, which sees as beneficial the tax's long-term effect: it slows down the accumulation of wealth by families and reduces

the number of Americans who can survive purely on their inheritances. This outcome is appealing for several reasons. The American self-image was initially constructed in contrast to Europe, which for so long was home to an extensive aristocracy. The American work ethic and sense of pride in work result in unease over the idea of large numbers of wealthy Americans who need not work for a living.

Bill Kte'pi
Independent Scholar

See Also: Estate Planning; Inheritance; Inheritance Tax/Death Tax; Power of Attorney; Wealthy Families.

Further Readings
Beyer, G. W. Wills, Trusts, and Estates: Examples & Explanations, 5th ed. New York: Wolters Kluwer Law & Business, 2012.
Dukeminier, J., R. H. Sitkoff, and J. Lindgren. Wills, Trusts, and Estates, 8th ed. New York: Aspen Publishers, 2009.
Gates, William H., Sr., and Chuck Collins. Wealth and Our Commonwealth: Why America Should Tax Our Accumulated Fortunes. Boston: Beacon Press, 2003.
Shapiro, Ian and Michael J. Graetz. Death by a Thousand Cuts: The Fight Over Taxing Inherited Wealth. Princeton, NJ: Princeton University Press, 2005.

Ethnic Enclaves

Known as a nation of immigrants, the United States has served as host to a variety of ethnic groups from its origins. Moving to a new country can bring about feelings of confusion, leaving the immigrant yearning for social support, as well as for more instrumental economic and systemic support. With the passing of time in the new country, immigrants often question their ethnic identities, having to determine to what extent they want to maintain their home culture while incorporating the new culture. Because the United States in the 21st century is experiencing an even larger surge of immigration than that seen in the 19th and early 20th centuries, understanding the effects of acculturation on the lives of immigrants is imperative. One central feature of the immigrant experience to consider is how ethnic enclaves have played a role in the lives of families throughout U.S. history.

Encompassing the various definitions of ethnic enclaves in the literature, the ethnic enclave is described here as a geographic area marked by residential proximity, economic overlap, and shared cultural values for a group of people who share the same ethnicity. When studying enclaves, it is important to consider the ways that they relate to a person's job opportunities, closeness to people with the same ethnic background, and access to valued cultural elements.

Enclave Formation
Since the beginning of U.S. colonization, ethnic groupings have formed out of a desire to congregate with others who share a similar attribute. During the earliest years of U.S. history, enclaves were often created in areas that reminded the settlers of their homeland, or in locations that could provide economic opportunities, such as waterways. For example, many Norwegian immigrants chose the East River in New York for their settlement because of both the similarity to their homeland and the proximity to shipyards. Enclaves continue to form around areas where new immigrants can find a gathering of people with shared backgrounds and culture, and immigrants often seek out these communities, where they can expect help in establishing a life in the United States.

Especially for Hispanic immigrants, states along the southern border have been the primary destination, most likely because of geographic proximity. However, one interesting recent development has been the influx of Hispanics into the more northern parts of the south. Nashville, Tennessee, has become an unexpected gathering spot for Hispanic immigrants, and the city now boasts an enclave of Hispanic businesses, service providers, and churches.
Another reason for this emergence was the economic upturn during the 1990s that many southern cities enjoyed, leading to the creation of many employment opportunities, such as construction and service jobs, which encouraged more Hispanics to move to the south. Yet another reason that ethnic enclaves formed as they did reflects certain laws and political climates in U.S. history. For example, Chinatowns



were established as early as the 1850s in reaction to racial violence that the Chinese were experiencing. Prior to World War II, Chinese, Japanese, and Filipino immigrants formed traditional ethnic enclaves in urban areas as a result of laws concerning housing segregation. After the 1965 Immigration Act, many Asian Americans moved to cities, and this influx led to a need for satellite enclaves that could provide additional space while still allowing proximity to the traditional enclave for resources. Across time periods and ethnic groups, ethnic enclaves have formed for a variety of reasons, spanning both economic and cultural considerations.

Major Urban Enclaves
Throughout U.S. history, major cities such as Houston, Los Angeles, Miami, New York, and San Francisco have served as some of the prime havens for immigrant groups. Houston is home to a large Mexican enclave, whereas Los Angeles hosts multiple ethnic groups: Korean, Chinese, Japanese, Mexican, and Cuban. Likely because of its close geographic proximity to other countries, Miami is home to a large Cuban enclave in Little Havana, and San Francisco possesses Chinese and Japanese enclaves. New York parallels Los Angeles in that it maintains a variety of enclaves, such as Chinese, Korean, Dominican, and Colombian. Major cities such as these have often hosted ethnic enclaves, likely because of their large concentrations of people, yet enclaves are also experiencing suburbanization like the rest of the country.

The Acculturation Process
When moving to a new country, there are opportunities for both conflict and negotiation as people determine how to navigate the new cultural environment. If the potential conflicts are successfully resolved, both the incoming and receiving cultures can benefit from the positive transition.
Four types of acculturation strategies—integration, assimilation, separation, and marginalization—are distinguished by the psychological and sociocultural adaptations they involve, and they differentially predict the negative outcome of acculturative stress. Research suggests that an integration strategy, in which individuals welcome aspects of both their home culture and the new culture, is the most successful for positive outcomes, whereas separation and marginalization are the most stressful.


Benefits of the Enclave for Immigrant Families
When families and individuals move to the United States, ethnic enclaves can serve as an important resource during the acculturation process. Enclaves can provide an outlet for emotional support from others of the same ethnicity who have experienced the same transition, a channel for gathering information about how to navigate the new environment, and even a place to share comforting foods of the home culture. However, it is important to note that not all effects of the ethnic enclave benefit everyone in the same way. For instance, the quality of the family's kinship network and the gender of family members are recognized factors in the benefits of participation in ethnic enclaves. Thus, ethnic enclaves call for something of a cost-benefit analysis of how they serve immigrant families, but the factors that can positively contribute to the acculturation experience are notable and warrant further exploration.

Kinship networks are the ways that people are connected, such as through family or friends, and these can play a role in the transition for the incoming family if they already have relatives or friends living in the United States. Sociologists have suggested that the importance of kinship networks comes from the social capital that they can provide. Social capital can be defined as people's ability to obtain goods, information, or services via their social connections. This social capital is valuable in that it provides the family with an opportunity to learn about and access several types of resources, such as cultural, economic, and institutional resources. By living within a closely bounded community of people with shared values, culture, and identity, families find many opportunities to learn from each other.
For example, if a family relocates from Mexico to the Mexican enclave in Houston, and has relatives with whom they share a positive relationship already living there, this kinship network will provide social capital. This social capital will in turn help the incoming family learn about necessary institutions and opportunities, such as where to locate jobs, how to enroll children in schools, and where to obtain medical care. In addition to these institutional concerns, kinship networks can aid in providing cultural opportunities, both to preserve and recognize the home culture and to learn about the new culture.


Ethnic enclaves can also provide an outlet for immigrants to maintain and express their prior ethnic identity, as well as to negotiate or modify that ethnic identity as they incorporate elements of the new host culture. Because ethnicity is not a stagnant construct but an alterable one, a person's ethnic identity is the result of this continuously changing process. Further, the enclave provides a place for immigrants to consider and explore the ethnic boundaries between the enclave and the host country's majority. Ethnic identity has been demonstrated to be a valuable protective factor in a number of studies. For example, ethnic identity pride has been found to moderate the relation between discrimination and depressive symptoms for Korean American college students, to predict psychological well-being for ethnically Mexican and Chinese American adolescents, and to moderate the potential negative effects of low socioeconomic status on academic achievement for Latino college students. Thus, enclaves can provide an important base for immigrant families to establish their new lives, seek opportunities, and explore their ethnic identities, yet these immediate benefits are not universally seen and may be counteracted by some negative long-term effects.

Potential Negative Effects for Families

While there are a number of benefits to participation in an ethnic enclave, it is important to note the possible negative effects as well. For instance, ethnic enclaves can lead to segregation from the majority culture. In some cases, the ethnic group may choose to self-segregate out of fear of discrimination or negative interactions with people in power, such as police. For instance, in a qualitative study of Hispanic immigrants in Nashville, some participants indicated a preference for self-segregation because of their perceptions of certain laws and a desire to avoid legal trouble. Further, the ethnic community model suggests that immigrants may prefer to self-segregate even when there are no strict boundaries, perhaps out of a desire to preserve shared interests. While immigrants may enjoy access to the familiar languages, products, and services that ethnic enclaves can provide, the absence of a relationship with the host society can be seen as a negative outcome. In John Berry's model of acculturation, separation is defined as when an immigrant chooses

to maintain his or her original culture while rejecting the new culture. In contrast, integration, also sometimes called biculturalism, refers to when the immigrant incorporates aspects of both cultures. Biculturalism has been shown to predict optimal adjustment and positive outcomes, such as prosocial behavior for Hispanic immigrants who maintain some Hispanic culture while adopting some U.S. culture. In contrast, a separation strategy is linked to negative outcomes, such as substance use and deviant behavior for Mexican American adolescents.

The economic effects of ethnic enclaves are complex because of the conflict between immediate benefits and long-term outcomes. According to the ideas of kinship networks and social capital, new immigrant families can benefit from the access to jobs, understanding of the labor system, and other economic resources that established members of the kinship network can provide. However, the jobs made available through these networks are often unskilled or semiskilled positions, meaning that immigrants may find it difficult to rise to higher-paying jobs in the long term. Thus, separation from the host culture and limited job opportunities are negative outcomes that should be considered in evaluations of ethnic enclave participation.

Future Considerations for Studying Ethnic Enclaves
Some recent phenomena, including suburbanization, commercialization, and activism, will be important factors for studying ethnic enclaves in the coming years. In recent decades, suburbanization has been increasing, likely because there are fewer hindrances to moving into the suburbs. As of 2002, 54 percent of Hispanics were living in U.S. suburbs. Some ethnic enclaves have also become tourist destinations, especially known for the availability of authentic restaurants, shops, and festivals. For example, The Hill, an Italian neighborhood in St. Louis, Missouri, is one such enclave.
In a case study of The Hill, researchers found that most members of the enclave who were surveyed accepted the idea of tourism in their community if it is created in a genuine manner, rather than by commercial development. Residents in other enclaves, such as Little Italy in Schenectady, New York, are opposed to this commercialization, especially when it comes from

outside influences; instead, they prefer to maintain an authentic identity. In addition, ethnic activism could be an increasingly important factor as the United States becomes more diverse. For example, Asian American communities have often been active in advocating for important concerns, such as access to housing, jobs, and community resources.

There are debates between groups of scholars who argue for either the costs or the benefits of being part of ethnic enclaves in the United States. Many immediate benefits may arise from participation in an ethnic enclave, such as social connections and exchange of information, yet these may be negated by potential long-term costs, such as economic limitations and segregation. How the ethnic enclave will change over time as the United States becomes increasingly diverse is a question that remains unanswered. The ethnic enclave has endured and grown for centuries, but only time will reveal whether enclaves will become even stronger havens for families to preserve their original ethnic identity, or places where different cultures begin to blend as families become more diverse. With increasing immigration, enclaves in the United States, as well as the ways that their structure affects the families that live within them, will continue to evolve.

Sarah L. Pierotti
University of Missouri

See Also: Acculturation; Assimilation; Immigrant Families; Latino Families; Tossed Salad Metaphor.

Further Readings
Berry, John. "Immigration, Acculturation, and Adaptation." Applied Psychology, v.46/1 (1997).
Hagan, Jacqueline. "Social Networks, Gender, and Immigrant Incorporation: Resources and Constraints." American Sociological Review, v.63/1 (1998).
Logan, John, et al. "Immigrant Enclaves and Ethnic Communities in New York and Los Angeles." American Sociological Review, v.67/2 (2002).
Portes, Alejandro. "Social Capital: Its Origins and Applications in Modern Sociology." Annual Review of Sociology, v.24 (1998).
Schwartz, Seth, et al. "Rethinking the Concept of Acculturation: Implications for Theory and Research." American Psychologist, v.65/4 (2010).


Ethnic Food

The study of food history borrows not only from the social sciences but also from folklore. William Graham Sumner identified cultural behaviors as "folkways" in 1906; "foodways" now refers to the cultural attitudes and behaviors regarding food, and to how food is used to reflect family roles and relationships. Foodways can also be symbols of ethnic identity. Ethnic foodways can be a family's way of expressing nostalgia for "the old country," or for a time before assimilation, or simply a declaration of one's ethnicity. First-generation immigrants tend to hold onto previous foodways the longest. Adjusting or abandoning ethnic foodways often reflects assimilation among second- or later-generation immigrants. Recapturing ethnic foodways is one common way that third or still-later generations discover and appreciate their roots.

Living in ethnic enclaves, immigrant families often started their businesses around food—running markets that imported goods from their home country, founding manufacturing enterprises that made a specialty ethnic food item (e.g., tamales, pasta, or pierogi), or opening a restaurant that served familiar food. Family members have been, and still are, conspicuously active in these businesses. Initially, the ethnic restaurant was the one business that drew other American families into the enclave, for example, in a city's Chinatown or Little Italy. Later, ethnic family restaurants moved into other neighborhoods and the suburbs, reaching a wider audience who came to appreciate these foodways. Now ethnic restaurants proliferate everywhere. Ethnic foods have become such a fundamental part of popular culture that many families' weekly diets include a diverse spectrum of ethnic food, although it may be in an Americanized, frozen, or fast-food form. Food has been a means by which earlier Americans learned about later arrivals.
The food recreates an ethnic family experience, however altered, and engenders a point-of-pride comparison for family members who know how it should really be made. Recipes have genealogies, and the ethnic family serves a special function in preserving and sharing them. Ethnic foodways reveal family social history and family roles and relationships. Sharing ethnic foodways at home-cooked meals provides family members with a sense of inclusion and belonging.


Preparing and ritualizing ethnic foods connected urban people with their rural, ancestral ways of life. Some cultures used foodways to connect with ancestors, not only through memory but also through festivals such as All Souls' Day, or Day of the Dead, when Mexican Americans make panes de muertos (bread of the dead) in the shapes of skeletons, and candy in the shape of skulls.

Foodways often defined family roles. Tejanas, or Mexican American women, for example, have for many years made a ritual of cooking tamales, which were typically made and cooked hundreds at a time. These are cylinders of corn husk filled with pork (originally pig's head) mash. Traditionally, only women could make them, and they spent several days doing so, to the exclusion of all other housework. From shopping for the right ingredients to serving, giving, or selling, women conveyed love and honor for their husbands and families by making tamales. Many families, Mexican or not, still enjoy them at home, and many people receive or purchase homemade tamales from coworkers or peddlers' vans.

For many immigrant families, their foods were the primary way they kept ethnic traditions "pure." They might not exhibit their ethnicity publicly for fear of rejection. In their homes, however, they and their children relished the comfort foods of their heritage. Because of this, some Americans became reluctant to eat in immigrant homes, fearing their hosts might serve "exotic" foods.

Most "American" food, historically and to this day, has had English roots, or represented America's early dominant cultures. Few ethnically different customs were integrated early enough into American foodways to be granted status as "American." Most historians identify 1600s Dutch (i.e., cookies, coleslaw, and waffles) and German foodways as succeeding in this manner, partly because of ethnic similarity.
The upper Midwest and Great Plains became an expanded German enclave in the 1800s, with so many successive waves of immigration that their foodways became commonplace. German culture brought frankfurters, hamburgers, sauerbraten, knackwurst, sausages, potato pancakes, red cabbage, sauerkraut, and German chocolate cakes, all accepted without much controversy for many years. Yet, during World War I, when German Americans suffered guilt by association with a vilified European enemy, some of these foods

Traditional panes de muertos (bread of the dead) molded into shapes for the Day of the Dead festival in Mexico. The holiday focuses on gatherings of family and friends to pray for and remember friends and family members who have died.

became an unfortunate symbol of ethnic identity. Americans renamed frankfurters "liberty pups," hamburgers "liberty steaks," and sauerkraut "liberty cabbage," rather than eat food with German names.

In the Progressive (but largely anti-immigrant) Era of 1880 to 1920, educators and social workers tried, with benevolent prejudice, to help the new southern and eastern European immigrants. It was thought that their diets needed to reflect American ideals of health and nutrition. Spicy food, reformers believed, as well as the mixtures of food varieties that occurred in soups and stews, was indigestible, and the spice might even lead to alcoholism. Progressive schools offered American lunches, but many immigrant children preferred to go home for lunch.

After World War II, however, continuing studies of ethnic foodways changed the reformers' approach. Anthropologist Margaret Mead, for example, represented a new era of understanding among scientists and professionals, who believed that other cultures deserved more respect. Yet there remained a bland sameness in school systems, in standardized food processing, fast foods, and chain restaurants, and in many small towns and rural areas. By the late 1960s and 1970s—an era of civil rights campaigns and college ethnic studies programs—reclaiming one's ethnic heritage became popular, and along with it came a revived interest in ethnic foodways. African American food, for example, had deeply influenced southern cooking, and it now gained recognition as a new positive symbol of ethnicity, rebranded as "soul food."

How immigrant families maintained, altered, or abandoned ethnic foodways follows patterns that had major impacts on American social history. The first generation—those who immigrated—held onto ethnic foodways longer than any other. It is more remarkable that immigrant families retained any foodways than that they lost any; their diets changed from the start of their journeys, as they suffered on ships and then during insensitive processing. After arrival, struggles continued—familiar food ingredients were unavailable or expensive, while the pressure to assimilate was relentless. Other factors discouraged many from perpetuating their foodways: indentured servants and African slaves were oppressed, while the Scotch-Irish isolated themselves on rugged frontiers. Native Americans, while not "immigrants" in the strict sense, became forced migrants, subjected to chronic war and depredations, and were ultimately forced onto reservations.

Irish immigrants arrived without a complex set of ethnic foodways. Most had been relegated to such an intense, exclusive dependence on potatoes that they suffered for it on an unprecedented national scale. The potato famine left them alienated from food and foodways. They had to reach back several generations into family history to regain food traditions.
This background added to American prejudice and anti-Catholicism, causing thousands of Irish servant girls to be stereotyped as hopeless cooks. For many families, then, the survival of any ethnic foodways at all testifies to their determination.

Immigrant families diversified the American diet by operating outward from their ethnic enclaves. Yet not all enclaves were located in urban neighborhoods. On the country's western coast, Chinese immigrants formed California's seafood industry, harvesting shrimp, abalone, crab, and seaweed. In Wisconsin, Swiss immigrant families developed


dairying and cheese making. In the upper Mississippi Valley, Scandinavian farm families shared their foods (lefse, pickled herring, meatballs, beet salad, and crisp rye breads), including one that caused many to joke that they would never want to share it (lutefisk: lye-soaked fish), and introduced the smorgasbord.

Italians in northeast and west coast cities clung to their ethnic foods most vigorously. To do so, Italian and other immigrant families started small restaurants and food factories (often from their kitchens) to make their food more available to themselves, opened stores to import ingredients, and planted gardens for fresh ingredients. Some of the larger Italian gardens became substantial enough to deliver fresh produce to whole cities. As entrepreneurs, they imported olive oil, cheese, sausages, wine, pasta, and herbs from Italy. Later, Greek families did the same. Italian ice cream parlors, Greek candy kitchens and bakeries, and Italian or Greek lunch restaurants and fruit and vegetable markets developed in ethnic neighborhoods. New York City's delicatessens provided for Jewish families: gefilte fish, lox, bagels, pastrami, chopped chicken livers, cheesecake, and pumpernickel. Every link in this ethnic food chain tended to be family operated.

Ethnic family restaurants drew other people into the neighborhoods. In the late 1800s, Chinese family restaurants introduced egg foo young, spareribs, fried shrimp, and chop suey to Americans. By the 1890s, Italian restaurants appeared in New York City and Philadelphia. New York City's first pizzeria opened in 1905. Irish Americans designed a holiday, St. Patrick's Day, that was not even celebrated in their homeland; the idea behind it was that everyone in the United States was Irish for a day. Food traditions that came to be associated with the holiday were soda bread and corned beef and cabbage.
By the 1920s, most American cities had enough Italian restaurants that urban dwellers were familiar with spaghetti, macaroni, cannelloni, spumoni, lasagna, antipasto, minestrone, and chicken cacciatore. By the 1930s, pizzerias proliferated in Italian neighborhoods. Dining out provided immigrant families with the means to survive and perpetuate their ethnic foodways, while it diversified other Americans’ diets and cultural literacy. Gourmet dining arrived with a French chef: in 1939, Henri Soulé demonstrated French cuisine at
the New York World’s Fair, and before long, French restaurants spread in the big cities. Ethnic food and neighborhoods appealed to artists, intellectuals, musicians, poets, and 1950s Beatniks. Thus, San Francisco’s Italian North Beach district and New York’s Greenwich Village attracted diversity to ethnic neighborhoods again. By the 1950s, pizza was common outside of Italian areas, especially near colleges and other welcoming spots.

Before World War II and well into the 1960s and 1970s, most of rural small-town America remained closed to “exotic” immigrants and their ethnic foodways. Even recipes for spaghetti did not appear in standard American cookbooks until after the war. Supermarkets did not yet carry ethnic food varieties, and a Midwestern town might only recently have met its first Italian or even African American family. Yet war veterans had tasted and still wanted foreign ethnic foods. The Immigration Act of 1965 opened the door for more immigrant families from China, Taiwan, Hong Kong, Japan, Korea, Thailand, India, Pakistan, the Middle East, east Africa, Mexico, the Caribbean, and Central and South America. Thus, within two to three decades, scattered ethnic foodways and restaurants became still more diverse. The already-established ethnic foods lent themselves to national chain restaurants and manufacturing: Taco Bell became popular in the 1960s, multiple pizza corporations spread throughout the country, and Kraft Foods bought Lender’s Bagels to complement its Philadelphia cream cheese. Ethnic foods had become so generic and popular that these companies were not operated by the matching ethnic families. Pizza was available in most towns and city neighborhoods. Supermarkets added ethnic food aisles to compete with the small ethnic markets. Today, chain supermarkets, fast food and other chain restaurants, and ethnic families who open businesses (even if they are the lone families of that ethnicity in that town) all contribute to the dissemination of ethnic foods.
There are almost too many varieties of yogurt to fit in that burgeoning supermarket section. Almost any eatery offers chili. The proliferation of cookbooks, television cooking shows, and Web sites that perpetuate ethnic foodways is astounding. Genealogists of the third-plus immigrant generations are recording the recipes that mothers passed to daughters through oral tradition. Nevertheless, after the terrorist attacks of September 11, 2001, some Middle Eastern restaurateurs changed their
business names to “Persian” or even “Greek” to avoid prejudice and retain clientele. The patterns of immigrant families and ethnic foodways continue.

Katherine Scott Sturdevant
Pikes Peak Community College

See Also: African American Families; Asian American Families; Central and South American Immigrant Families; Chinese Immigrant Families; Ethnic Enclaves; Family Businesses; Genealogy and Family Trees; German Immigrant Families; Immigrant Families; Indian (Asian) Families; Irish Immigrant Families; Italian Immigrant Families; Japanese Immigrant Families; Korean Immigrant Families; Latino Families; Melting Pot Metaphor; Mexican Immigrant Families; Native American Families; Passover; Polish Immigrant Families; Slave Families; Southwestern Families; Supermarkets; Tossed Salad Metaphor; Vietnamese Immigrant Families.

Further Readings
Brown, Linda Keller and Kay Mussell, eds. Ethnic and Regional Foodways in the United States: The Performance of Group Identity. Knoxville: University of Tennessee Press, 1984.
Camp, Charles. American Foodways: What, Why, and How We Eat in America. Little Rock, AR: August House, 1989.
Diner, Hasia. Hungering for America: Italian, Irish, and Jewish Foodways in the Age of Migration. Cambridge, MA: Harvard University Press, 2001.
Gabaccia, Donna R. We Are What We Eat: Ethnic Food and the Making of Americans. Cambridge, MA: Harvard University Press, 1998.
Thursby, Jacqueline. Foodways and Folklore: A Handbook. Westport, CT: Greenwood Press, 2008.
Ziegelman, Jane. 97 Orchard: An Edible History of Five Immigrant Families in One New York Tenement. New York: HarperCollins, 2010.

Evangelicals
From their historical origins in the back-and-forth transatlantic Protestant movements between the American colonies and the British Isles, to the recognition of their impact on American society today, evangelicals have been defined in various



ways by themselves, the media, political pundits, cultural critics, and other commentators. Various approaches, sometimes overlapping, have been used to examine, define, and measure their influence, including historical, sociological, and theological analysis. However defined, current estimates of the proportion of Americans who identify themselves as evangelicals range from approximately 25 to 30 percent. Evangelicals have exerted, and continue to exert, a significant influence in American society, though of varying intensity over the course of their history.

Trans-Atlantic Historical Roots
“Evangelical” derives from the Greek word euangelion, used in the New Testament and meaning “good news.” Evangelicals are a subset of Protestantism that grew out of the Reformation in a northern European and British context and traveled to and from the emerging American colonies. Major 18th-century evangelical figures, during what came to be known as the “Great Awakening,” included Jonathan Edwards, George Whitefield, and the Wesley brothers, John and Charles. The clergy of that era were highly educated and elite, but the seeds that they planted in that spiritual awakening sprouted and helped invert the religious and social structures they inhabited from top-down to bottom-up popular movements. This democratization of religion spawned several religious movements that grew rapidly during the Second Great Awakening in the early 19th century. Many historians have commended David Bebbington, author of Evangelicalism in Modern Britain, for noting four beliefs and practices that characterize most evangelicals: (1) the necessity of conversion, (2) active evangelistic activity, (3) devotion to the Bible, coupled with belief that it is true and inspired by God, and (4) the centrality of the cross (the crucifixion and death of Jesus Christ and all that it theologically entails). Such a definition, however, fails to account for many nonevangelicals who hold the same beliefs.
Further sociological and historical refinement is needed, especially to account for the phenomenon as it is manifested in the United States.

American Scene
Beliefs that had been a broad Protestant consensus in the late 1800s became contested at the outset of the 20th century. Fearful of new scientific, social, and religious developments that many in the
mainline denominations embraced, many others responded defensively to them with militant opposition. In the modernist-fundamentalist debate that followed, beginning in the 1920s, these fundamentalists attempted to take control of major denominations, failed to do so, and then left en masse, creating new institutions in their wake. This is the context out of which a new movement (they were first called “neo-evangelicals”) arose in the 1940s. These evangelicals desired to intellectually, socially, politically, and religiously engage culture while maintaining their orthodox Christian beliefs. With the fundamentalists, they shared disdain over what they considered growing heretical beliefs. However, against the fundamentalists they argued that Christian unity was taught in the Bible, and should be taken seriously. Strict separatism was counter to the scriptures in this respect, and engagement with the world was necessary to carry out the biblical mandate as they understood it. Leaders from the interdenominational New England Fellowship worked tirelessly to attract like-minded Christians, inviting Anabaptist, Holiness, Pentecostal, and other nonfundamentalist leaders to attend a 1942 meeting in St. Louis, which culminated in the founding of the National Association of Evangelicals for United Action, shortened to the National Association of Evangelicals (NAE) in the following year. This tension between separatism and engagement is nowhere better seen than in the early evangelistic ministry of Billy Graham in the late 1940s and early 1950s. Graham’s evangelistic organization sought the help of all churches, including theologically liberal ones, to carry out its logistically complex urban evangelistic meetings. Counselor training for the evangelistic crusades was open to members of all Christian churches.
Fundamentalists, some of whom articulated a doctrine of double separation, condemned not only those who did not share their beliefs, but also those who associated with them. They accused Graham of unscriptural compromise. Graham, the New England Fellowship evangelical leaders, Christianity Today editor-in-chief Carl F. H. Henry, and a host of other emerging evangelical leaders started new publications, institutions, and parachurch organizations (or restructured old ones), as they threaded the needle between maintaining the faith and engaging the culture. The resulting transdenominational structure and movement
did not replace traditional denominations, but became a rallying point around which many people, organizations, and denominations could relate. These organizations maintained their distinctive identities, but identified with the larger NAE mission, as did many who chose not to formally join. As with the evangelistic campaigns, social ethics were also addressed by these early evangelicals. Fundamentalists had largely conceded social action to the liberal churches, and considered it tainted by the “social gospel.” Henry, in his influential The Uneasy Conscience of Modern Fundamentalism, lamented fundamentalism’s disinterest in biblical social justice issues, and called for cultural engagement.

Fundamentalism and Fundamentalisms
Recent social scientific study, most notably the University of Chicago’s Fundamentalism Project, which ran from 1987 to 1995, examined a wide variety of fundamentalisms across nations, religions, and cultures. The project’s name was not derived from The Fundamentals, a 12-volume work written between 1910 and 1915 that was designed to counter modernist theological trends and factored into the fundamentalist label assigned to American fundamentalists in the 1920s. Rather, the project produced the multivolume work Fundamentalisms, derived from the study of a wide range of diverse religious movements by religious scholars, who found commonalities among them. The editors of this important multivolume work on world fundamentalisms point out, in The Fundamentalism Project: A User’s Guide, that researchers were uneasy with the term fundamentalism: [M]ost of the essayists take some pains to say why they are uneasy with the term fundamentalism, and they say so often, with evident awareness that some of their colleagues who specialize in the same topics will criticize their assent to use the term.
If contributors to this project were uneasy with the term, this wariness was lost on many media, cultural, and religious commentators, who viewed “religious fundamentalism” as a pejorative label and as an ancient plague newly revived in modern society. Historical distinctions between evangelicals and fundamentalists in the United States were underplayed or lost altogether, to such an extent
that even Jerry Falwell (who formerly proudly wore that label and founded The Fundamentalist Journal) and Bob Jones University (a fiercely separatist fundamentalist school that well symbolized the movement) eschewed the term because of its negative associations in the mind of the public. Here is a case where academic research not only explains the phenomenon, but (along with its interpreters) changes its object of study.

Evangelicals Today
Failing to recognize the diversity of the evangelical movement, commentators often identify it with a single outspoken leader whom they take as representative of the movement as a whole, or oversimplify it for lack of solid data. Some evangelical leaders today eschew the “evangelical” label, believing that it obscures the deeper distinctions of the movements of which they are a part (e.g., Pietist, Mennonite, Holiness, or Southern Baptist). The most recent comprehensive study and analysis of evangelicals in the United States, the Evangelical Identity and Influence Project, with principal investigator sociologist Christian Smith, is described in American Evangelicalism: Embattled and Thriving. The study focused on the beliefs, attitudes, opinions, commitments, and behaviors of ordinary evangelicals. Among the findings: evangelicals place a higher value on the importance of their faith (compared to fundamentalist, mainline, liberal, and Catholic Christians), they doubt the least, they attend and participate in church activities the most, they listen to or watch Christian media the most, and they believe (at a higher rate than other traditions) that Christians should go beyond their family and try to change society for the better. They tend to be active in their desire to convert people to Jesus Christ, to live differently from mainstream America, to work for political reform, and to defend a Christian worldview.
This activism and defense of their worldview in the midst of a pluralistic environment with plausible alternative belief systems appears to be a key to their vitality. More than any other faith tradition, including self-identified fundamentalists, they believe that there are moral absolutes and that these should be the basis for the rule of law in the United States, even for non-Christians. They are divided, however, on which moral values should be taught in public schools. Against the judgment of
most historians of the American Revolution, over 90 percent of evangelicals believe that the United States was founded as a Christian nation. This, combined with the belief that their nation has now turned its back on God, creates a desire for restoration of the country’s former glory.

Evolving Families
The individualistic concern for making a personal faith commitment, combined with a stance of engaging with society, places each new generation of evangelicals at risk of losing their faith or accommodating it to other beliefs. The family has often been understood by evangelicals as the place where saving faith is cultivated and then nurtured. Perceived threats from without and troubles from within have given rise to ministries, literature, and other media designed to protect and enhance healthy family life. Some recent surveys indicate that the divorce rate among evangelicals is at best no better, and sometimes worse, than that of the general population. Evangelical children often choose to engage a different set of issues, and to see them differently, than their parents. Their ability to carry on the movement will depend on how they choose to engage with an increasingly pluralistic country and world.

Douglas Milford
University of Illinois at Chicago

See Also: Christianity; Family Values; Great Awakening.

Further Readings
Bebbington, David. “Evangelicalism in Its Settings: The British and American Movements Since 1940.” In Evangelicalism: Comparative Studies of Popular Protestantism in North America, the British Isles, and Beyond, 1700–1990, Mark A. Noll, David W. Bebbington, and George Rawlyk, eds. New York: Oxford University Press, 1994.
Bebbington, David. Evangelicalism in Modern Britain: A History From the 1730s to the 1980s. Grand Rapids, MI: Baker Book House, 1992.
Carpenter, Joel A. Revive Us Again: The Reawakening of American Fundamentalism. New York: Oxford University Press, 1997.
Dayton, Donald W. and Robert K. Johnston, eds. The Variety of American Evangelicalism. Downers Grove, IL: InterVarsity Press, 1991.
Marsden, George M. Fundamentalism and American Culture: The Shaping of Twentieth Century Evangelicalism 1870–1925. New York: Oxford University Press, 1980.
Noll, Mark A. “Evangelicals Past and Present.” In Religion, Politics, and the American Experience: Reflections on Religion and American Public Life, Edith L. Blumhofer, ed. Tuscaloosa: University of Alabama Press, 2002.
Noll, Mark A. The Rise of Evangelicalism: The Age of Edwards, Whitefield and the Wesleys. Downers Grove, IL: InterVarsity Press, 2003.
Smith, Christian. American Evangelicalism: Embattled and Thriving. Chicago: University of Chicago Press, 1998.
Smith, Christian. Christian America? What Evangelicals Really Want. Berkeley: University of California Press, 2000.

Every Child Matters
Established in 2002, the Every Child Matters Education Fund (ECM) is a 501(c)(3) nonprofit, nonpartisan organization working to make public investments in children, youth, and families a national political priority. ECM provides a vehicle for children and families to promote their fuller representation in the democratic process. Michael Petit, ECM’s president, served as commissioner of the Maine Department of Human Services, and then as the deputy director at the Child Welfare League of America. In 2001, he received a small anonymous grant for a feasibility study on how to better advocate for children in the political process, which ultimately led to the establishment of ECM. Petit has a master’s degree in social work from Boston College, and served as a delegate to the United Nations Convention on the Rights of the Child in Helsinki, Finland. ECM is located in Washington, D.C., and has two satellite offices, in New Hampshire/Maine and in Long Island, New York. It also opens temporary ad hoc offices during election cycles, generally contracting with area child advocacy organizations, and has been active in more than 50 elections. What makes ECM different is its focus on raising the visibility of children’s issues during elections. It urges candidates to support, and the public to demand, greater
investments in programs that address the needs of America’s families. While many national and state organizations play critical roles in helping formulate good public policies for children, ECM believes that experience shows that working on policy, while important, is not enough when children’s needs collide with special interest politics. Children’s issues are more likely to gain political attention when office seekers believe that they can gain public approval by supporting pro-children policies. ECM promotes the adoption of smart policies, such as access to affordable comprehensive health care services, expanding early-care and learning opportunities and after-school programs, preventing violence against children in their homes and communities, alleviating child poverty, and addressing the special needs of children with parents in prison. It does this by raising the visibility of these issues during the election cycle, and by urging candidates to support child- and family-friendly policies. ECM campaigns for children, not for candidates, using all the tools and tactics that are available to 501(c)(3) organizations, including polling, voter registration, get-out-the-vote efforts, candidate surveys, and candidate forums. Strategies to reach these goals include educating candidates and policymakers; participating in a variety of media activities; direct public education and outreach campaigns; the distribution of resources, information, and state-specific data on the status of children and youth; communication with the ECM network of tens of thousands of child advocates in local communities; and building strategic partnerships with state and local child-advocacy and child-serving organizations. Current ECM focus issues include child care, early learning and pre-kindergarten, child abuse and neglect, after-school programs, health care, and poverty.
These issues were primarily selected by examining federal data, and by evaluating which pose the greatest challenges for children and families. In this regard, ECM operates as a convener and organizer, bringing together groups that have an interest in furthering child well-being in the United States. It is not involved in direct service, nor does it compete for federal, state, or local tax dollars. ECM works with organizations to expand their outreach, and to show the strength in numbers necessary to get the job done. It relies on statistics from the U.S. Census Bureau and the U.S. Department of Health and Human Services.

Other frequently used sources include the Congressional Budget Office, the Annie E. Casey Foundation (Kids Count Data Book), the Center on Budget and Policy Priorities, the Child Welfare League of America, the Children’s Defense Fund, the National Center for Children in Poverty, the National Women’s Law Center, Prevent Child Abuse America, and UNICEF. ECM’s primary campaign strategy is to mobilize child-serving sites—child care, early learning, after school, and health care—and the people involved with them: staff, families, and other providers. More than 5 million people who work in the helping professions are not registered to vote, and are potential child-friendly voters. Compared to the electorate as a whole, these individuals are more than 80 percent female, disproportionately minority, lower income, and more likely to have children under age 18. Millions of parents are also unregistered. ECM works directly with the sites to educate them on the issues, and to encourage them to register to vote and then vote for the candidate who has the best pro-children, pro-family platform. Because ECM cannot have a one-on-one relationship with all those sites, it works with local child-advocacy organizations and child-serving associations to get out its message and materials, and to conduct voter registration. The ultimate goal of ECM’s get-out-the-vote effort is to conduct voter education (and, through other organizations, voter registration), and to see candidates elected who support greater investments in children and families. ECM measures outcomes by conducting before and after polls on public awareness of children’s issues. It researches candidates’ stands on issues before and after ECM’s children’s campaign. It looks at the stands of newly elected officials in the states in which it has run campaigns. ECM is primarily funded by foundation grants, and by some corporate and individual contributions.
Michael Kalinowski
University of New Hampshire

See Also: Child Abuse; Child Advocate; Child Care; Health of American Families; Poverty and Poor Families.

Further Readings
Every Child Matters. http://www.everychildmatters.org (Accessed June 2013).
Every Child Matters. How to Make Your Vote Count for America’s Children. Washington, DC: 2012.
Knowles, Gina. Ensuring Every Child Matters. Thousand Oaks, CA: Sage, 2009.

Evolutionary Theories
Evolutionary theories aim to uncover how humans have changed over time. As applied to the family, evolutionary theory can offer explanations pertaining to how current familial practices are linked to how humans lived in prior eras. Adaptation is one theme in evolutionary theories; humans adapt to their environment in order to survive. Therefore, evolutionary theory can give further understanding of why particular family characteristics have changed over time. Another central tenet of evolutionary theory is reproductive fitness, meaning that individuals strive to spread their genes through successful reproduction. In order to achieve optimal reproductive fitness, males and females often rely upon different mechanisms relevant to their biology, with males employing a reproductive strategy that emphasizes a higher rate of reproduction (i.e., higher investment in mating), and females instituting a reproductive strategy that stresses childrearing, due in part to long gestational and postnatal care periods. Considering that a noted function of families is to produce offspring, one can further apply evolutionary theory to the family institution. While some may argue that reproduction is no longer a central feature of the family, evolutionary theory gives the field of family science some insight into how families have evolved over time, and why particular familial tendencies persist today. Evolutionary theory posits that the family unit arose from humans’ adaptation to various environmental, historical, and social factors. The family unit has always been necessary for the survival of children because it allows individuals to maximize resources and allows children, who require extensive resources to survive early in life, to thrive during their development. Evolutionary theory also explains the roles deemed natural in the family: females as caretakers, and males as providers.

Roles
The gendered division of labor within a household has roots in humans’ past. From an evolutionary perspective, the gendered division of labor suits humans’ biological dispositions. Furthermore, anthropologists note that a gendered division of labor is present in all known societies. Individuals had to rely on their biological tendencies in order to survive. Some argue that gendered stereotypes are the result of this initial household division of labor, with males’ more aggressive and domineering tendencies the result of hunting large game, and females’ nurturing and caring tendencies the result of raising children near the homestead. Males and females needed to possess these specific traits in order to successfully reproduce and survive. As a result of women’s biology, they took care of small children because they were uniquely suited to nourish them through breastfeeding. This task required females to remain close to the child; to leave an infant for an extended period of time would result in his or her malnourishment, or even death. The male role in the family, however, was much different. Because men’s biology did not require their presence around young children, they were able to leave the home for longer periods. Males also possess a larger stature, enabling them to hunt large animals and perform tasks that require more upper-body strength. Therefore, males’ roles in the household largely became associated with providing. A gendered division of household labor still persists in modern times, with females performing more unpaid housework than males. Even when both partners work outside the home, the woman typically completes a majority of day-to-day household tasks. This trend occurs even when a female outearns her partner; such women actually experience an increase in household duties.
The function of the family is less clear-cut than in previous eras because many traditional male and female duties have blended together despite such statistics; for instance, many men cook, and many women fix things. However, the family unit still serves as a foundation for raising children.

Children’s Outcomes
Evolutionary theory offers an explanation that links child outcomes in adulthood to the environment in which an individual develops. Children living
in environments characterized by high stress and a high risk of mortality are more likely to utilize a reproductive strategy that employs early engagement in sexual activity. Living in a high-risk environment may attune an individual’s reproductive strategy to maximize opportunities available to reproduce, potentially resulting in sexual activity with a higher number of sexual partners over the lifespan, and a younger age at first intercourse. This is in contrast to a trend exhibited by children growing up in an environment considered resource rich and stable. Children growing up in the latter environment are more likely to engage in sexual intercourse for the first time at a later age, and have fewer sexual partners over their lifespan. Individuals growing up in more stable households have more opportunity to hone skills that will benefit future offspring, whereas growing up in a riskier environment shifts thinking toward reproducing early because fewer opportunities may present themselves if mate selection decreases sharply over time, or if mortality risk increases over time. One must note that this holds on average, and individual differences persist. In sum, the family serves as a mechanism to prepare children for the future, depending upon the context in which the child develops. Even today, children exhibit better outcomes when reared in a family environment with their biological father and mother. Evolutionary theory offers the explanation that males are more likely to invest in their children if they have paternal certainty, and humans tend to make greater investments toward genetically related individuals. However, one should exert caution when interpreting evolutionary theory in this manner; it does not generalize to all single-parent or blended households, nor does it take into account other factors (e.g., the level of conflict in the household prior to divorce).
For example, the addition of a stepparent into a family system after a divorce may create benefits for the child, including greater access to financial resources and more supervision.

Gender Differences in Reproduction
At first glance, one may assume that evolutionary theory proposes that males reproduce as fast as possible, whereas females carefully choose a mate due to a long gestational period. Translated to current times, this idea would imply that males are more likely to engage in extramarital affairs
and have more sexual partners over their lifespan, whereas females remain in monogamous relationships and prefer fewer sexual partners. However, the situation is much more complicated. Males are physically able to reproduce faster than females, creating a male reproductive strategy that tends to emphasize mating, rather than childrearing. However, males may invest more in offspring when contextual factors necessitate such investment. In high-resource households characterized by stability, paternal investment (i.e., through childrearing) enhances reproductive fitness by improving children’s survival in this type of environment. Therefore, fathers in stable high-resource contexts may find themselves focusing more on parenting and childcare, instead of on high reproduction, because the socioecological context of childrearing dictates that in order for children to thrive, resources must be present. An example of this is a child reared in an environment where a college education is necessary in order to “thrive” in a particular context. A college education is costly, and it requires an investment in a child’s abilities. Considering the resources and time that it takes to rear a child in this manner, evolutionary theory would argue that males in this type of socioecological context would shift their reproductive strategy to focus on childrearing, instead of high reproduction. Paternal investment also manifests as a function of paternity certainty. Males who are more certain of paternity are more likely to invest in that child. Evolutionary theory functions in a similar manner for females; females’ seemingly natural investment in children can be viewed as a result of their ability to nurse their children. However, females are not limited to childrearing and passive participation in mating; they can utilize various reproductive strategies in order to increase their reproductive fitness.
Interpreting evolutionary theory in this manner accounts for potential individual differences in females' mating behavior. Females may employ mating strategies that emphasize access to resources and that seek out the best possible mate, even when a woman is in a stable partnership. From this perspective, a female may seek a stable marital partner who will invest in her children, but go outside of this partnership (e.g., through an affair) to secure better genes for her potential children. Research suggests that females who engage in extramarital affairs are likely to engage in such




affairs close to ovulation, heightening the possibility of a child sired by a man other than the female's partner. While evolutionary theory is often taken to portray males and females as rigidly adhering to gender roles, a more encompassing interpretation of the theory allows for further understanding of familial processes.

What Else Can Evolutionary Theory Explain?
Evolutionary theory can give insight into patterns after a divorce occurs. Noncustodial fathers are less active in their children's lives in terms of daily activities and visits, whereas noncustodial mothers pursue a more active role in their children's lives. This pattern is in accord with evolutionary theory, as females tend to invest more in their offspring, whereas males often invest more of their resources in mating. Regarding remarriage patterns, males are more likely than females to remarry after divorce, and males tend to marry younger spouses. The rise of family planning methods can also be explained using evolutionary theory. Access to birth control methods does not decrease reproductive fitness; rather, it can enhance it by allowing individuals to postpone childrearing until an individual or family unit is ready to contribute more fully to a child's well-being. This example highlights how families have adapted to the constraints placed on them by social context; a large number of children is no longer an economic necessity. Today, evolutionary theory can explain many aspects of family life. For example, it offers an understanding of why the household division of labor remains rigid at a time when both males and females provide for their families outside the household. Evolutionary theory cannot explain all facets of current family life, but it enables an understanding of nuances in today's world that require an understanding of humans' past.
Ashley Ermer
University of Missouri–Columbia

See Also: Breadwinner-Homemaker Families; Courtship; Domestic Ideology; Gender Roles; Parenting.

Further Readings
Belsky, Jay, Laurence Steinberg, and Patricia Draper. "Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization." Child Development, v.62 (1991).
Eagly, Alice, Wendy Wood, and Amanda Diekman. "Social Role Theory of Sex Differences and Similarities: A Current Appraisal." In The Developmental Social Psychology of Gender, T. Eckes and H. M. Trautner, eds. Mahwah, NJ: Lawrence Erlbaum Associates Publishers, 2000.
Geary, David. "Evolution and Proximate Expression of Human Paternal Investment." Psychological Bulletin, v.126 (2000).
Geary, David and Mark Flinn. "Evolution of Human Parental Behavior and the Human Family." Parenting: Science and Practice, v.1 (2001).

Extended Families

Contemporary American extended families, formed through blood, law, and language, reflect the country's history and the uniqueness of its populations. Extended families fall on a continuum, ranging from (1) families formed extensively through blood and legal ties, to (2) families formed through a blend of blood, legal, and discursive ties, to (3) families formed extensively through discursive ties. This rather unique configuration of family formation types reflects the comparatively short history of the United States and its unique and diverse immigration patterns. Less than five centuries ago, European immigrants arrived in a land sparsely populated by Native Americans living within tribal cultures. Contemporary extended families reflect centuries of immigration and mobility patterns, as well as diverse cultural norms. The concept of extended family has expanded significantly in recent decades, a pattern that is unlikely to change.

Background
Historically, multiple variations of extended kinship in the United States developed according to diverse groups' needs. Three major migration patterns significantly contributed to the nature of contemporary immediate and extended families. First, when European explorers arrived, Native Americans existed within tribal structures that were conducive to their survival. Most lived in extended family groups that might include parents, siblings,


children, grandparents, aunts and uncles, cousins, and others who needed shelter and support. Many tribes moved from location to location, searching for good hunting grounds, although some tribes settled in one place to raise crops. As a result of Western expansion, Native Americans were eventually forced onto reservations, where extended family ties remained a way of life for the small number that chose to live there permanently. Second, the arrival of European explorers was quickly followed by immigrants, primarily men, seeking adventure or political or religious refuge. Many of these pioneers created small communities to meet their needs for protection, companionship, and mutual assistance. Over time, more immigrants arrived, both men and women fleeing religious persecution or seeking adventure or fortune. Many immigrated without biological relatives, some of whom arrived later; others arrived with multiple blood relatives and spouses. Large numbers of these early immigrants died of illness or in childbirth, leaving other family members to depend on the community to fulfill some of their social and functional needs. Much of the nation’s westward expansion involved groups of biological and/or informal extended families that formed caravans in order to survive the trip and thrive in new, often harsh landscapes. As their wagon trains crossed the plains, mountains, and deserts, families banded together for safety and support. Those who survived the passage created communities of interdependent families, who provided mutual support and companionship. Many families became more closely interlinked through marriage and childbirth or through informal adoption of children when parents died. In times of pestilence, famine, or attack, settlers relied on these informal extended families for protection and support. Third, slavery resulted in the creation of painful and complicated extended families formed through personal commitment, and eventually, through blood ties. 
The slaves' African lineage, including kinship groups or clans, led to replications of African family life in the form of extended families among the enslaved black population. Most slaves arrived without blood relatives, but relied on traditional societal codes of family life to establish communities. Certain slave households consisted of a conjugal pair, their children, and eventually their grandchildren; sometimes, they included

other slaves or children who were not kin. Because many slave owners sold their male slaves away from their families, women became the central adult figures in many plantation households. In some cases, biracial extended families formed as a result of the rape of female slaves by plantation owners.

Contemporary Family Diversity
American families in the 21st century reflect this tripartite history. Centuries of ongoing European immigration contributed to a highly diverse population, representing multiple races, ethnicities, languages, and religions, eventually creating a reality often referred to as a melting pot or patchwork quilt. Persecution of immigrant newcomers reinforced the importance of biological ties, as well as the need for an extended family community in order to survive and thrive. Recent immigration patterns reflect increased arrivals from eastern Asia, southeast Asia, the Indian subcontinent, the Middle East, and Mexico and Central America. Although these groups hold specific beliefs about the nature of family, intermarriage, remarriage, and adoption across racial, ethnic, and religious lines, ongoing immigration patterns have generally led to an increasing tolerance for multiple ways to create familial ties. In the contemporary United States, individuals live within diverse extended family forms that fulfill various functions for their members. These variations may be viewed on a continuum ranging from family ties formed fully through biological and/or legal ties, to family ties formed fully outside of biological and legal ties. Some families are formed through discourse, rendering their ties discourse dependent. Today, most American extended families' identities reflect multiple types of relational ties.

Biological and Legal Ties
In these cases, all the close relatives have been born into the family, married into the family, or been adopted into the family.
Many children grow up interacting with their cousins, aunts, uncles, and grandparents on a regular basis. Some such families have witnessed few divorces and remarriages. Those without biological or legal ties are seldom viewed as members of the family. Immigrant families tend to retain this structure across a number of generations. In families formed in large part through biological and legal ties, as well as select discursive ties, relatives have been born into, married into, or



adopted into the family. In addition, some relatives represent fictive or voluntary kin, who have participated in the family's life for many years and are viewed as family members. This may be a mother's best childhood friend who is called "Aunt Emily," or a father's college roommate who serves as the oldest child's godfather. It might include a child who was informally adopted into the family because of biological parents' neglect or physical absence. Such fictive kin are expected to remain highly involved with the family.

Discursive Ties
In these cases, most significant familial ties form outside biological or legal connections. Same-sex partners who cannot jointly adopt a child or marry within their state must depend on language or discourse to establish their familial ties. Verbal statements such as "this is my Daddy Joe and my Poppa Daniel" serve to establish the familial connection. In addition, many couples who choose not to marry find themselves explaining their exclusive romantic tie through terms such as "my partner" or "my life partner." A single woman may account for her strong ties to her close friend's daughter by referring to her as "my niece." The higher the number of relationships formed discursively, the greater the family members' reliance on communication strategies to establish and manage family identity. When talking with outsiders, family members rely on boundary management strategies such as naming, explaining, justifying, or defending the family. When interacting with each other around family identity issues, members rely on strategies such as naming, discussing, narrating, and ritualizing.

Complicating Issues
Current factors such as immigration, intercultural and interracial partnerships, delayed first marriage, and economic concerns contribute to the rise in extended families. Newly arrived immigrants may expect extended family members to provide economic and social support.
In cases of low economic status and limited English, extended family may provide financial and acculturation support for newcomers. Frequently, starting between the ages of 8 and 12, children serve as language brokers for extended family members, acting as translators in medical, school, and immigration contexts for


relatives who struggle with English. These responsibilities may carry well into adulthood. Marriages or partnerships described as international or interethnic may encounter complications as extended family members attempt to interact. In cases of international marriage, each partner must undergo an extensive socialization process to adapt to life in a large, extended Vietnamese American family, for example, or a small Anglo-Saxon one. Language barriers may result in limited contact between legally formed extended family relatives. Some extended families form because of members' stresses. Many middle-aged parents watch their recent college graduates and unemployed young adult children return to the "empty nest" to save money or to mitigate serious economic difficulties following job loss. Less often, middle-aged or older parents find themselves moving in with adult children and grandchildren after experiencing unforeseen economic reversals. In addition, many grandparents spend each day taking care of their grandchildren because both parents need to work. Finally, because of a growing number of second, third, and even fourth marriages or committed partnerships, many children and young adults live within a sequence of additive extended families, resulting in challenging holiday celebrations and ongoing attempts to manage the multiple ties. Imagine eight grandparents seeking tickets for a grandchild's graduation ceremony. As lifespans lengthen and older people increasingly form new partnerships after being widowed, extended family ties will continue to multiply, although the intensity of such ties will shift.

Adoption and Reproductive Technologies
In the 21st century, many families formed through adoption and new reproductive technologies encounter complicated extended family issues. Most domestic adoptions now involve open adoption, a practice falling on a continuum of varying degrees of interpersonal contact.
Thus, the adoptive family creates a connection to a birth family, forming an "adoption triangle." The adoptee, the birthparent(s), and the adoptive parent(s) form an extended family characterized by some level of openness between the family systems. Points of the triangle may involve multiple individuals because the birth mother or birth father may have


parents, partners, or other children, and the adoptive parent(s) may have other children. Open adoption provides the child with information about and connections to adoptive and biological family members, who form an extended family network. Even some adults internationally adopted as children report meeting their birth family members through international reunions. Adoptive parents tend to report that their children have benefitted from meeting birth family members, and birth parents report that ongoing contact increases their satisfaction with their adoption decision. Families formed through assisted reproductive technologies represent highly complicated extended families. Many heterosexual and homosexual individuals and couples seek donated sperm, eggs, or embryos from known or unknown donors. Some prospective parents work through agencies that agree to provide donor identification information at a later date; others rely on family or friends as donors. Many individuals who achieved parenthood in this way report no plans to tell their children of their biological connections to another family, whereas others plan to reveal the information when the child is old enough to understand the circumstances. In either case, the truth may emerge, resulting in extended family connections. Finally, as embryo adoption becomes more common, families will be linked through full biological siblings, a connection only recently foreseen. The manner in

which family members manage such information will impact the extent to which children learn their histories and manage those complexities.

Kathleen M. Galvin
Northwestern University

See Also: Assisted Reproduction Technology; Foster Families; Immigrant Families; Multigenerational Households; Remarriage; Social History of American Families 2001–Present; Stepfamilies.

Further Readings
Galvin, K. M. "Diversity's Impact on Defining the Family." In The Family Communication Sourcebook, L. H. Turner and R. West, eds. Thousand Oaks, CA: Sage, 2006.
Hertz, R. "Turning Strangers Into Kin." In Who's Watching? Daily Practices of Surveillance Among Contemporary Families, M. K. Nelson and A. I. Garey, eds. Nashville, TN: Vanderbilt University Press, 2009.
Schmeeckle, M. and S. Sprecher. "Widening Circles: Interactive Connections Between Immediate Family and Larger Social Networks." In Handbook of Family Communication, 2nd ed., A. L. Vangelisti, ed. New York: Routledge, 2013.
Sudarkasa, N. "Interpreting the African Heritage in Afro-American Family Organization." In American Families: A Multicultural Reader, S. Coontz, M. Parson, and G. Raley, eds. New York: Routledge, 1999.


F

Facebook

Facebook is an online social networking platform in which individuals create unique profiles; upload personal photographs, videos, and status updates; and interact with people throughout the online world. Facebook accounts are sometimes used for work-related business; developing romantic relationships; sharing interests such as politics, music, or hobbies; and writing on other friends' "walls" (i.e., their profile pages). Profiles can be grouped by shared interests, creating limitless networks. Facebook was officially created to help friends and family stay connected with each other, as well as to enable new relationships among people who have met only online. Facebook founder Mark Zuckerberg once told a reporter that the mission of Facebook is not just to help maintain relationships, but also to give power to those who wish to share ideas and opinions with the world. Since its founding, Facebook has become the most influential social networking site in the world. In 2013, the site had over 1 billion active users worldwide and nearly 5,000 employees. Even those who do not have a personal account on the site are likely aware of what Facebook is and why so many use it.

History
Zuckerberg, the disputed creator of Facebook, was inspired by his previous creation, a site called

Facemash that went live on October 28, 2003. For Facemash, Zuckerberg collected individual photographs from several of his fellow Harvard University students and placed them side by side in pairs so that users could vote on who was more attractive. The controversial site was removed from the Internet within four hours, but not before 20,000 votes were recorded. Zuckerberg faced possible disciplinary action for privacy and copyright violations; undeterred, however, he created another site, specifically for an art history course, that allowed students to collaborate on notes for a series of paintings. The site's success prompted Zuckerberg to continue designing similar sites built on social interaction and exclusivity. Within the following year, Facebook was officially created. Controversy quickly arose when other Harvard students accused Zuckerberg of stealing the original idea for Facebook from them. Their idea was to create an exclusive social Web site only for Harvard students and faculty members. Because Zuckerberg had promised to contribute to another social media site while simultaneously creating Facebook, great controversy stemmed from the question of who came up with the idea. This controversy was the basis for the award-winning 2010 film The Social Network, written by Aaron Sorkin, directed by David Fincher, and starring Jesse Eisenberg as Zuckerberg.

Exclusivity
Originally called "The Facebook," the site was launched on February 4, 2004. Initially, only Harvard students were permitted access. Shortly thereafter, students at other Ivy League schools were given access, and Facebook's membership quickly grew. Years later, Zuckerberg reported that membership doubled every six months after Facebook was made available to all colleges, high schools, and private companies. By opening Facebook to everyone above age 13 (the Children's Online Privacy Protection Act prohibits children under 13 from registering) in September 2006, Facebook was able to grow exponentially and spread throughout the world.

Because he was just starting out, Zuckerberg often received advice from Sean Parker, an entrepreneur best known for creating the music-sharing Web site Napster; eventually, Parker became the first president of the company. Upon moving Facebook's headquarters from Zuckerberg's dorm room in Cambridge, Massachusetts, to Palo Alto, California, and receiving the company's first venture capital investment from PayPal cofounder Peter Thiel, Parker not only secured control of the company for Zuckerberg but also made Facebook a marketable entity in the competitive world of Internet businesses. Companies that joined Facebook gained a platform from which to entice potential customers and to advertise in new ways. Microsoft purchased a small percentage share of Facebook in exchange for an exclusive advertising deal for $240 million. Today, Facebook is the largest social media company in the world, worth billions of dollars, and competitive with Internet giants Google and Amazon.

[Photo: Mark Zuckerberg, computer programmer, entrepreneur, and chief executive officer of Facebook, speaks about the Facebook messaging system at a press conference in 2010.]

Friend-Networking Users
People of all ages are found on Facebook. While a large percentage of the network's users are young adults and adolescents, senior citizens also make up a considerable percentage of users. It is difficult to determine how many children under the age of 13 are users because laws prohibit them from using social networking sites; many, however, falsify their personal information to gain access. A recent Pew Internet report found that a majority of young adults in the United States are registered on Facebook, and a majority are active users. The same is true for adults between the ages of 30 and 64. Well over a quarter of senior citizens reported using Facebook in 2012. More women than men are registered users of Facebook. Generally, across multiple social networks, gender differences are small or nonexistent; of Facebook, Instagram, and Pinterest, Facebook is the site most used by men. Networking on Facebook takes place at all levels of education and household income. There is not a significant difference between those who live in urban versus rural settings, although more people from urban neighborhoods report being registered on Facebook. Because of the ability to effectively communicate, share and receive information, and maintain or create relationships via Facebook, this reach across all demographics is not surprising.



A recent study reviewed what users share on two popular social media sites (including Facebook) to determine why networking is so popular. The majority of those who have accounts report checking them more than four times per day and spending over an hour on them. The majority also share personal information, such as self-descriptions and personal interests. Just over half (51.5 percent) of those who have accounts reported setting them to private, requiring individuals to seek the account owner's permission to view them. Ninety-six percent of users report using their accounts to maintain relationships with old and current friends.

Nonusers
It is important to note differences between Facebook users and nonusers. The Pew study reported various reasons why some individuals choose not to engage in social networking. The two most frequently reported reasons are lack of desire (73.3 percent) and being too busy to maintain an account (46.7 percent). Others shared concerns regarding personal business, safety, or lack of Internet access. Key differences between users and nonusers were magnified when age differences were compared. Users of Facebook are typically younger than nonusers. Ethnic differences were also examined, and it was found that Native Americans are less likely to be users than other ethnic groups, including Hispanics, Caucasians, African Americans, and those who are multiracial.

Impact and Influence
In spring 2010, the "like" button was released as a new way to share articles, videos, news, and games from outside the site. The feature provides a plug-in for companies and various markets, but it also allows users to "like" a comment, photo, or other content; friends see this activity on their walls, which maintain up-to-date news feeds of a user's status, more commonly known as "notifications." Recent negative events on Facebook (as well as on other social networking sites) have garnered media attention.
Some users fail to conduct themselves responsibly, resulting in cases of privacy abuse and cyberbullying. However, the benefits of Facebook and other social networking sites relate to the users’ abilities to control their social capital. College students from Michigan State University


were assessed to determine whether positive resources are accumulated through the online relationships that they maintained on Facebook. While Internet behaviors can build or diminish an individual's social capital, Facebook was found to increase it. Nicole Ellison and colleagues tested the value of being connected to others through Facebook. Users were reported to have, on average, between 100 and 200 friends. Most friends were said to have originated from offline connections. In other words, Facebook excels at maintaining relationships that originate in person, rather than those that are established online. They found that incoming students are likelier to seek out and meet new friends online than older students, but across all years, more users reported using Facebook to stay connected with offline acquaintances. Perhaps as individuals (not just college students) find themselves in unfamiliar contexts, they will spend more time on Facebook or networking online until they meet new acquaintances in real life. Through Facebook, relationships may be established and maintained, thus creating a support network system, also called "social capital." Over time, acquaintances are made that provide opportunities for future relationships to develop. Those who do not use Facebook report having less social capital and lower self-esteem than those with considerable social capital. Facebook is useful for people of all ages to remain connected to family and friends, become aware of community activities, meet new people, and thus have more opportunity to maintain and increase social capital.

Criticism
Facebook has been blocked in many countries throughout the world for a number of reasons. Within the United States and other parts of the world, many companies block access to prevent employees from spending company time on the site. Furthermore, because of its wide popularity, bullying and constant teasing happen on the site, occasionally leading to very serious outcomes.
There have also been incidents in which private information has been leaked, such as passwords, emails, and photos that are potentially harmful to an individual’s reputation. Also, because of the ease of creating an account, fake accounts such as parody accounts or underage accounts are frequently created. This potentially compromises the safety and


well-being of others, especially if too much personal information is shared, such as an address or sexual orientation. Other criticisms of Facebook have arisen because of the perceived increase in Internet addiction. Cecilie Andreassen and colleagues tested a new scale and found that heavy Facebook use interferes with sleeping habits, causing some individuals to stay up late and sleep in longer. Women scored higher on the Facebook addiction scale, a finding that contradicts other addiction research.

Timothy Phoenix Oblad
Elizabeth Trejos-Castillo
Texas Tech University

See Also: Adolescence; Children's Online Privacy Protection Act; Emerging Adulthood; Internet; Myspace; Personal Computers in the Home; Twitter; YouTube.

Further Readings
Andreassen, C. S., T. Torsheim, G. S. Brunborg, and S. Pallesen. "Development of the Facebook Addiction Scale." Psychological Reports, v.110 (2012).
Carlson, N. "At Last: The Full Story of How Facebook Was Founded." Business Insider (2010). http://www.businessinsider.com/how-facebook-was-founded-2010-3?op=1 (Accessed January 2014).
Duggan, M. and J. Brenner. "The Demographics of Social Media Users—2012." Pew Internet & American Life Project (February 14, 2013). http://pewinternet.org/~/media/Files/Reports/2013/PIP_SocialMediaUsers.pdf (Accessed June 2013).
Ellison, N. B., C. Steinfield, and C. Lampe. "The Benefits of Facebook 'Friends': Social Capital and College Students' Use of Online Social Network Sites." Journal of Computer-Mediated Communication, v.12 (2007).
Friesen, N. and S. Lowe. "The Questionable Promise of Social Media for Education: Connective Learning and the Commercial Imperative." Journal of Computer Assisted Learning, v.28 (2012).
Locke, L. "The Future of Facebook." Time (July 17, 2007). http://www.time.com/time/business/article/0,8599,1644040,00.html (Accessed January 2014).
Raacke, J. and J. Bonds-Raacke. "MySpace and Facebook: Applying the Uses and Gratifications Theory to Exploring Friend-Networking Sites." CyberPsychology & Behavior, v.11 (2008).

Fair Labor Standards Act

The Fair Labor Standards Act (FLSA) is a U.S. federal law first enacted in 1938 to protect American workers from being forced to work excessive hours for meager wages. This piece of Depression-era legislation was lauded by President Franklin D. Roosevelt as the most important law passed since the Social Security Act of 1935. The FLSA set a federal minimum wage, established maximum workweek hours, guaranteed time-and-a-half for overtime in certain jobs, banned oppressive child labor, and instituted a system of recordkeeping. What is more, the FLSA was part of a larger legislative strategy to provide relief to the American worker while helping American businesses recover from the stock market crash of 1929. Considered a landmark piece of legislation responsible for changing the course of the nation's social and economic development, the FLSA was established only after contentious negotiations between liberal Democrats on the one hand and Republicans and conservative Democrats on the other, as well as judicial setbacks. Since 1938, the FLSA has been revised and amended over 40 times, and it remains a cornerstone of U.S. labor policy. These amendments were intended to clarify various aspects of the law's benefits and to expand the law to include previously exempted work sectors and groups. The impact of the FLSA on American families has differed greatly through the years and has largely been shaped by the social milieu of the day, with race, gender, age, and class the predominant predictors of the policy's effects over time. In general, support for higher labor standards was extremely popular among the American public at the time that the law was enacted. Nonetheless, some groups were apprehensive about the more nuanced aspects of the legislation.
The debate over setting fair labor standards in the United States stemmed from differences of opinion concerning the need for and constitutionality of government regulation of workers’ wages and hours. Liberal Democrats, who were proponents of higher labor standards, reasoned that shortened work hours would relieve some workers from working unnecessarily long hours while creating new jobs for others. A minimum wage, they argued, would anchor the whole wage



structure at a point from which collective bargaining could take place. Proponents characterized the existing labor environment as "sweated labor." They supported the president's view that no self-respecting democracy could justify the existence of child labor, the chiseling of workers' wages, or the lengthening of work hours. Republicans and conservative Democrats, opponents of higher labor standards statutes, characterized the whole process as tyrannical industrial dictatorship, arguing that such laws served only as thinly veiled attempts at socialist planning. They cast American businesses as victims of a multiplying and hampering federal bureaucracy, and maintained that government interference would stifle the "genius" of American business. The Supreme Court was another major obstacle to wage-hour and child-labor laws. In several major decisions, the Court challenged the constitutionality of government regulation of business trade and industry codification. In Hammer v. Dagenhart, the Court struck down a federal child-labor law. In Schechter Corp. v. United States, the Court decided that the newly established industry codes restricted trade practices, and it unanimously agreed that the industry code system was an unconstitutional delegation of government power to private interests. In Adkins v. Children's Hospital, the Court narrowly struck down a District of Columbia minimum wage law for women. In particular, African Americans, women, and those in exempted occupational groups have historically been the driving force behind the expansion of fair labor standards to include all American workers.

African Americans
African Americans were conflicted regarding their support for the FLSA.
Because the FLSA emerged from the National Industrial Recovery Act (NIRA), which established the National Recovery Administration (NRA, variously dubbed the “Negro Removal Act,” “Negroes Ruined Again,” and “Negroes Robbed Again” by the African American leadership and press), there was immense suspicion regarding how “fair” labor standards would be, considering historical evidence to the contrary. While ostensibly race neutral, the FLSA was seen
by African Americans as effectively antiblack. Specifically, section 7(a) of the NIRA expanded the collective bargaining rights of trade unions, providing them with increasing power over specific industries. As labor laws strengthened the power of trade unions, African Americans experienced massive job loss and were increasingly excluded from union activities as many businesses became "closed shops." Under closed-shop agreements, employers agreed to hire only union members, and workers had to remain union members to remain employed. As a result, African Americans appealed to legislators to add an antidiscrimination clause to fair labor standards in order to curtail the discriminatory practices upheld by trade unions and potential employers. It was not until the passage of the Civil Rights Act of 1964 that antidiscrimination legislation became codified in federal law.

Women
White males were the primary beneficiaries of the FLSA, particularly those who worked in skilled and semiskilled labor. Women were more likely to be domestic workers or to engage in unskilled labor, which was not covered by the FLSA, and so they remained unprotected by fair labor standards. The bifurcation of labor in the United States into "women's" and "men's" work effectively locked women out of many industries. Other discriminatory laws, such as those barring women from legally heading a household, fed into justifications for paying them lower wages. The passage of the Equal Pay Act of 1963 legally prohibited employment discrimination on the basis of gender. Additionally, later legislation considered the role of motherhood in the lives of working women. For instance, the Family and Medical Leave Act of 1993 ensured that eligible employees received 12 weeks of unpaid leave to tend to family and medical issues such as childbirth, adoption, ill parents, or other family and medical responsibilities.
Age
Age was one of the central factors surrounding the push for fair labor standards. Initially, the emphasis was on protecting the young; later, the focus shifted to security for aging workers. Before the FLSA, children
worked under the same conditions as adults: long hours for minimal pay. Initially, the FLSA set a minimum working age of 16 for boys and 18 for girls. Over time, child labor laws have become more detailed as the government has attempted both to protect children from labor abuses and to allow them to make a financial contribution to their families. As a result, the FLSA and child labor regulations have established minimum age standards for youth employment. Under these standards, employable youth are divided into three categories: 16 and 17, 14 and 15, and under 14 years of age. The FLSA, through its child labor regulations, stipulates standards specific to this population, governing maximum work days per week, hours per day, and the types of duties performed. Similarly, the Age Discrimination in Employment Act of 1967 was established to prohibit discrimination against persons 40 years of age or older. Before this, aging workers faced denial of health benefits and training opportunities. Although the FLSA's initial focus was on abolishing child-labor abuses, through many amendments it has codified protections for the safe and productive employment of youth and much-needed protections for aging workers.

Occupational Status
Occupational status shaped the effects of the FLSA on American families. When the FLSA was established, only 700,000 employees were covered by the law, based on industry of occupation. Occupations such as domestic, agricultural, and unskilled labor were not protected by the FLSA. Since its enactment in 1938, the FLSA has been expanded, and today more than 130 million workers are protected by the law. A 1974 amendment expanded coverage to domestic workers, and the Migrant and Seasonal Agricultural Worker Protection Act of 1983 expanded coverage to agricultural workers. Overall, the FLSA applies to all U.S. workers employed in interstate commerce or in the production of goods for commerce, or those employed by an enterprise engaged in commerce or the production of goods for commerce.

Once thought of as a stopgap to relieve American society of the effects of the stock market crash of 1929, the FLSA lives on as a major component of U.S. labor policy. The FLSA established the 40-hour maximum work week and the federal minimum wage standard, abolished oppressive child labor, guaranteed overtime pay, and established an administrative recordkeeping structure to allow oversight of the labor conditions of the American worker. Just as American society has changed, so has the FLSA, and its impact has been as diverse as American families themselves. The struggle for antidiscrimination, led by African Americans, has resulted in the protection of all racial and ethnic minorities from discrimination in the labor force. Similarly, the challenge to gender inequality has led to the narrowing of the gender wage gap, as well as expanded benefits that consider the condition of motherhood. Likewise, the trade union movement is responsible for the wide array of industries that are unionized, whereby workers are empowered through collective bargaining.

Further Readings
Bernstein, David. Only One Place of Redress: African Americans, Labor Regulations, and the Courts From Reconstruction to the New Deal. Durham, NC: Duke University Press, 2001.
Grossman, Jonathan. "Fair Labor Standards Act of 1938: Maximum Struggle for a Minimum Wage." http://www.dol.gov/oasam/programs/history/flsa1938.htm (Accessed July 2013).
Johnson, James P. "Drafting the NRA Code of Fair Competition for the Bituminous Coal Industry." Journal of American History, v.53/3 (1966).
U.S. Department of Labor, Wage and Hour Division. "Fact Sheet #2a: Child Labor Rules for Employing Youth in Restaurants and Quick-Service Establishments Under the Fair Labor Standards Act (FLSA)." http://www.dol.gov/whd/compliance/childlabor101_text.htm (Accessed August 2013).
U.S. Department of Labor. "Fair Labor Standards Act (FLSA), 29 U.S. Code Chapter 8." http://finduslaw.com/fair-labor-standards-act-flsa-29-us-code-chapter-8 (Accessed July 2013).

Alice K. Thomas
Howard University

See Also: Child Labor; Civil Rights Act of 1964; Family and Medical Leave Act; Living Wage; Minimum Wage.



Families and Health
Recent scientific findings have demonstrated how close, loving relationships have a powerfully beneficial effect on the health of individuals. Such findings would have come as no surprise to the inhabitants of precolonial America, or to the first European settlers of the land that would become the United States. While cultural conceptions of kinship, family, and physical well-being are in a state of constant flux, less subject to change are the appetites of the body, the dependency of children and the elderly, and the inevitability of conflict, disease, and death. The destructive effects of disease and war might be buffered by the resources of healthy families, whereas excessive interpersonal conflict and other disturbances of family functioning can create new problems or worsen existing problems. Increasing recognition of the reciprocal interaction between family functioning and physical health has led to new biopsychosocial innovations in health care. This entry traces the development of health-related issues as they pertain to families who have inhabited North America for more than 13,000 years.

Indigenous Peoples
The nomadic peoples of Asia who first populated the Americas were hunter-gatherers who had to contend with a changing glacial climate and uncertain availability of vital resources. Most healthy members of these groups would disperse each day to forage for food and other necessities and return to camp at the end of the day, while the young, sick, and infirm would remain at home. Technologies such as fire and weapons helped manage environmental threats and contributed to the overall health of the group. The development of agriculture led to less need for travel, more sedentary time, and more complex forms of social life, including the large and sophisticated cultures of Mesoamerica and North America. European colonists would forever change the lives of indigenous peoples of North America, who came to be known as American Indians.
The arrival of European colonists had severe consequences for the health of American Indian families. These colonists displaced the existing population and brought contagious diseases, to which American Indians were especially vulnerable. For much of the 17th and 18th centuries,
more American Indians died of smallpox than by any other means. The devastating impact of disease on the elderly caused power to shift from older to younger generations, altering family structure, weakening the bonds of tradition, and increasing domestic violence. Further colonial advancement led to the decimation of the American Indian population, which has only recently begun to recover. Today, the health problems of Native American families continue as a reflection of the genocide, displacement, and disease that resulted from European colonialism. These problems, which include diabetes, alcoholism, heart disease, and psychosocial problems such as depression and anxiety, afflict Native American families at higher rates than any other ethnic group.

African Americans
Enslaved Africans in America had scarce opportunities to find partners or start families, and they were especially vulnerable to illness. The poor-quality housing provided to the families of slaves was unsanitary, uncomfortable, and exposed to animal waste, leading to infectious diseases such as dysentery and parasitic infections. As increasing numbers of slaves were born in America, their immunity to local diseases increased, along with the birth rate. Blacks in New England had greater success in establishing and maintaining families than those in the South, where tropical diseases claimed countless lives. Throughout American history, families of people of African descent have borne the effects of racism inflicted by citizens and public policies, leading to widespread discrepancies in the quality of health care and disproportionate levels of disease. For much of the 19th and early 20th centuries, the ethos of African American families had generally been to tend to sick and elderly family members without outside interference. Black orphans of the Civil War, for example, were frequently cared for by both biological and nonbiological kin.
These extended families reaped the health benefits of strong kinship ties and bore the burden of illnesses caused or worsened by stress. Large and complex kinship systems still thrive today. However, in the late 20th and early 21st centuries, publicly funded social services have to a great extent supplemented or replaced these families for large numbers of orphaned and neglected children, and disproportionate numbers of African American men are
incarcerated, leading to the weakening of family structures and a concomitant worsening of stress-related health problems such as obesity, diabetes, malnutrition, depression, and anxiety. Historically founded mistrust of public institutions and white-dominated health care professions has led to widespread underutilization of health services throughout the black community.

Women
Historically, females were treated as inferior to men both socially and physically. Beliefs about female anatomy put women at risk of being ascribed mental health disorders. These concerns did not begin in the Americas: hysteria, or "wandering womb," is traced back to ancient Greece. The diagnosis of hysteria evolved from a label for emotional excess into one encompassing functional neurological symptoms. The popularity of the diagnosis contributed to a quarter of women being labeled hysterical by the 1850s. The diagnosis fit most complaints presented by females, and the reproductive organs were viewed as the cause of the nervous disorder. Hysteria was also associated with sexual dysfunction, and the health professions put great effort into distinguishing normal sexual behaviors of women from excessive or inadequate ("frigid") behaviors. Pregnancy and childbirth put females at risk of death. Carrying a baby through the final stages of pregnancy is hard on the body; for example, women may develop eclampsia from high blood pressure, which leads to organ damage. Complications during delivery also occur, and even after giving birth, mothers remain at risk of death: the vacant womb is highly susceptible to infections (e.g., puerperal fever). While rates of death during pregnancy, childbirth, and the postpartum period have decreased, women are still at risk. Indeed, the United States has one of the highest death rates among developed nations for pregnancy and pregnancy-related conditions. The medical field of gynecology was pioneered by the controversial physician J.
Marion Sims, a South Carolinian of the mid-19th century who developed an early surgical treatment for vesicovaginal fistulas, which cause pain, fever, and emotional distress from urine leaking from the bladder into the vagina. Sims conducted his research by performing dozens of surgical procedures on

African American slaves without anesthesia. Since that time, fistulas have become rare in the developed world, and obstetrics and gynecology have solidified their place in modern medicine, combating disorders such as painful menstruation, infertility, and cervical cancer. The middle of the 19th century brought greatly expanded technologies for contraception, including diaphragms, cervical caps, and other devices made possible by the development of vulcanized rubber. However, due to the efforts of anticontraception social activists, these forms of birth control were not widely used before the early decades of the 20th century. Hormone-based oral contraceptives appeared in 1960, giving women much greater control over their reproductive lives. Today, many forms of modern contraception have lowered the birth rate and given many women increasing opportunities to develop careers and private lives unrelated to pregnancy and child rearing.

Sexuality
While many Native American cultures made room for a wide range of sexual practices, including same-sex activity and polygamy, and remained flexible on matters of relationship initiation and termination, religious European colonists tried to contain sexual behavior within monogamous marriage and strictly regulated divorce and remarriage. The American Puritans, for example, embraced sexuality within marriage; their church mandated frequent sexual encounters between married couples, not only for procreation but also to please one's spouse. The end of the 19th century brought the social hygiene movement, whose members were intent on reforming sexual morality and eradicating venereal disease. In 1904, Prince Morrow published an influential treatise, Social Diseases and Marriage, warning families of the dangers of syphilis, thought to have reached epidemic proportions due to husbands' dalliances with prostitutes.
Organizations such as the American Social Hygiene Association advocated strict sexual morals, the elimination of premarital sex, and increased willingness to discuss sexuality in order to raise awareness of the dangers of sexually transmitted disease. The movement successfully advocated for state laws requiring couples to be tested for venereal disease before marriage. Culturally, male sexuality came

to be viewed as a dangerous yet irrepressible form of masculine aggression, whereas female sexuality remained taboo, associated with moral corruption and prostitution. Sigmund Freud's 1909 lectures at Clark University sparked a revolution in the medical treatment of sexual problems in America, as well as a much-expanded vocabulary regarding the diversity of sexual behaviors that found its way into the popular consciousness. Although Freudian theory postulated that homosexuality was a condition with environmental causes, Freud also speculated that all individuals were fundamentally bisexual. Throughout the 20th century, large cities saw the development of communities where sexual minorities had the freedom to create families, but all nonheterosexual behavior was considered pathological until 1974, when the American Psychiatric Association voted to remove homosexuality from its list of mental disorders. In the mid-20th century, sexology emerged as a distinct academic field. Beginning with his 1948 book, Sexual Behavior in the Human Male, Alfred Kinsey revolutionized Americans' view of sexual behavior by arguing that same-sex sexual activity was far more widespread, and sexual behavior far more varied, than had previously been believed. In the 1960s, sexologists William Masters and Virginia Johnson debunked many outdated views regarding sexuality, including the belief, popularized by Freud, that "mature" female sexual pleasure was centered in the vagina and that clitoral stimulation represented an immature form of sexual pleasure that interfered with normal heterosexual relations. In the 1980s, Dr. Ruth encouraged couples to take a playful, experimental attitude toward sexuality. Beginning in the early 1980s, the AIDS crisis touched all American families and devastated the families of gay men in particular; approximately 325,000 died of the disease between 1990 and 1995.
Spurred by vigorous civil rights advocacy over the following decades, growing concern for, and awareness of, the plight of gay men with AIDS eventually led to a greatly expanded definition of families, one that included the families of gay and lesbian couples.

Children
The first European children of North America faced grim realities. In the 17th century, infant mortality
was between 10 and 33 percent, with poor families suffering the highest rates. As early as possible, children were put to work as household servants, and older children were given extensive responsibilities for the care of younger siblings. Many poor families with both mothers and fathers who worked outside the home were forced to leave primary child-rearing responsibilities with older children, leading to inadequate care for many of the youngest and most vulnerable. In colonial times, families were powerfully linked to larger communities, and these connections did much to support children, even when most did not reach adulthood with both of their parents still living. Diseases such as scarlet fever and typhoid plagued the lives of American children, and were ameliorated by improvements in medical care and the rise of public health institutions. In 1908, the New York City Health Department established the Division of Child Hygiene, under the direction of the physician S. Josephine Baker. Through teaching child-care skills, distributing milk, keeping careful public health records, and sending nurses to make home visits to new mothers, infant mortality was dramatically reduced, from 27 percent in 1885 to just over 9 percent in 1926. As more people moved from rural to urban industrial areas, couples began to employ birth control techniques to limit the size of their families. An average white woman at the start of the 19th century had seven children. By 1900, that number was between three and four. Divorce rates also increased at the turn of the last century. Women were living longer lives and spending a greater portion of them free of the obligations of child rearing, reflecting their growing sexual, political, and economic independence. At the same time, the medical profession grew in importance in the realm of child rearing. 
At the turn of the last century, most babies were born at home, but by 1945, nearly 80 percent were born in hospitals, further reducing infant mortality. Today, most children are born in hospitals, and birth is increasingly a medical event. In addition, the number of infants born by cesarean section has greatly increased, peaking in 2009 at 32.9 percent. Child-rearing experts drew from the mystique of medical science and psychology to replace or supplement traditional wisdom regarding feeding, toilet training, playtime, education, and emotional
life. In 1928, psychologist John B. Watson published Psychological Care of Infant and Child, which instructed parents not to kiss or emotionally comfort their children, lest they fail to achieve independence and self-discipline. A contrasting movement toward a more natural, instinctive approach to parenting started in 1946 with the publication of Dr. Benjamin Spock's manual on child care. The debate over strict versus permissive child-rearing styles continues today.

Infectious Disease
Most American families today are free of infectious diseases that once cut short lives and disrupted relationships. Among the worst threats were malaria, yellow fever, diphtheria, and typhoid fever. In colonial times, these diseases claimed far more lives in the humid climes of the South than in New England. Advances in medical science in the 19th century led to the understanding that many diseases are caused by bacteria and viruses, which can be combated with medication and sanitary practices. In the 18th century, the Englishman Edward Jenner pioneered immunization with his smallpox vaccine. The same period saw advances in the distribution of water through the use of water pumps. In 1751, the first general hospital in the United States opened in Pennsylvania, and a second opened in New York 40 years later. As hospitals grew in number and sophistication, health care functions once borne by families, such as the care of pregnant women, the delivery of infants, and the care of the elderly and other family members especially vulnerable to infection, were increasingly absorbed by these institutions. From 1950 to 1955, an average of 25,000 Americans, many of them children, were afflicted each year by polio, a viral infection that can attack the central nervous system and cause paralysis and other permanent physical disabilities.
Ponds and other natural bodies of water were thought to be sources of contagion, and protective parents forbade their children to swim in them, lest they acquire an illness that could paralyze their limbs and require the lifelong use of mechanical ventilators like the “iron lung.” In 1955, a vaccine developed by Dr. Jonas Salk, which consisted of an injection of dead viruses, underwent a successful field trial and was distributed throughout the United States, eventually nearly eradicating

the disease. Shortly thereafter, a live-virus, orally administered vaccine developed by Dr. Albert Sabin gave patients lifelong immunity. American enthusiasm for new technology, including medical technology, has historically existed in tension with American mistrust of intellectual authority. Recently, popular movements have eroded American families' trust in professional medicine. In 1998, Dr. Andrew Wakefield, a British physician, was the lead author of a paper that purported to establish a link between the vaccine for measles, mumps, and rubella, commonly given to children, and the development of autism. This article became the seminal text for a large anti-immunization movement among parents in the United States. The number of parents refusing to vaccinate their children grew, a development that likely accounted for a dramatic rise in measles infections after 2008. Rates of other illnesses, such as whooping cough and mumps, also rose. Subsequent investigations revealed that Wakefield's research methods were dishonest and his findings spurious. Some parent groups continue to oppose childhood immunization, despite widespread awareness of Wakefield's disgrace.

Congenital Disease and Eugenics
For much of American history, families had little recourse against congenital disorders, which can have either genetic or environmental causes. In the late 19th and early 20th centuries, wealthy and powerful families, hoping to defend their hegemony against the perceived threats of immigration, racial impurity, crime, and congenital disorders such as birth defects and intellectual disabilities ("feeblemindedness"), became interested in the emerging eugenics movement. That movement advocated forced sterilization, segregation, and limits on immigration in the hope that future generations would see fewer instances of problems viewed as sapping precious resources from well-to-do American families.
The social hygiene movement also influenced the eugenics movement, and successfully advocated for the forced sterilization of sexual, racial, and ethnic minorities and people with intellectual disabilities. Sterilizations declined following World War II, but continued in some states for a much longer time; compulsory sterilization was legally sanctioned in Oregon until 1983.



War
The history of the United States is marked by conflicts and wars, each of which has presented special challenges to the health of American families. At the time of the Revolutionary War, the U.S. military did not provide support for the wives or children of enlisted men. For most of American history, urgent health concerns of military families were addressed informally; before the 1960s, there was little institutional support for spouses or dependents of military personnel, the majority of whom were single men whom the military had actively discouraged from marrying or having children. Today, a majority of military personnel have spouses who receive health benefits. The deadliest of all American wars was the Civil War, with over 600,000 deaths creating vast numbers of widows and orphans, and injuries creating the largest proportion of disabled citizens in U.S. history. Following the exceptionally bloody Battle of Antietam in 1862, Dr. Jonathan Letterman, medical director of the Union's Army of the Potomac and a pioneer of military medicine, instituted the policy that relatives of gravely injured soldiers would not be permitted to assume the care of wounded family members, due to the risk of further injury and death on the journey home. Smallpox, measles, and scarlet fever also afflicted enlisted men, and deaths due to infectious diseases likely outnumbered combat deaths. After the first cases of "shell shock" were reported in the wake of World War I, understanding of the psychological consequences of war-related trauma grew, along with an awareness of the profound effects of post-traumatic stress disorder (PTSD) on families. Returning military personnel with PTSD frequently disrupt the daily routines of their spouses and children with unprovoked anger and fear when memories of violence, injury, and death are triggered.
The Vietnam War shook Americans' confidence in their country's military might and troubled not only the health of combatants but also that of their families. Cancers such as leukemia, as well as birth defects and diabetes, have been linked to the use of Agent Orange, an herbicide used to clear the dense jungles of Vietnam, Laos, and Cambodia in order to deprive communist guerrillas of their strategic cover. To the present day, Vietnam War veterans and their families are sustained by disability
benefits, largely because of illnesses related to Agent Orange.

Old Age
Even in the colonial era, political thinkers such as Thomas Paine had called for policies to combat illness and poverty among the elderly, but before the Social Security Act of 1935, there was no federal support for the welfare of older Americans. The economic woes of the Great Depression had led to widespread poverty and homelessness, which were especially devastating to elderly Americans. Elderly relatives have grown in numbers and significance in American families as life spans have increased, in part due to medical advances. By 2020, 55 million Americans will be age 65 or older; in 1900, that number was just over 3 million. Elderly relatives' roles within the family are frequently of great importance: nearly half a million older adults are the primary caretakers of their grandchildren. Older adults must contend with high rates of arthritis, heart disease, cancer, dementia, and other health conditions. Although most of these adults receive Medicare and Medicaid benefits, their out-of-pocket health expenses far exceed those of the general population.

Death and Grief
For most contemporary Americans, death is rarely a visible part of life. This was not always the case. High rates of infant mortality, infectious disease, and violent conflict ensured, for most of American history, that people from all walks of life would see how people die. Moreover, terminally sick and elderly family members were most often cared for at home, where they would remain among kin until their deaths. The task of making meaning of death and grief fell to religious authorities, whose sway over the household was great at the start of the colonial era and has considerably diminished in the modern era.
Even as mortality rates decreased, the waning influence of organized religion and the growing influence of medical doctors and psychologists meant that the outward manifestations of grief and bereavement—sorrow, lethargy, and anger—were seen not as signs of spiritual struggle, but as symptoms of psychiatric disorders. For much of the 20th century, individuals were urged to work through the "stages" of grief efficiently, to discard
their connections to the deceased, and to return to work as soon as possible. More recently, mental health professionals have recognized the importance of close relationships for the management of grief, which is by nature isolating, so that marshaling the resources of the grieving family becomes a major strategy for preventing prolonged, particularly agonizing forms of grief. Today, death is often a professionally managed affair, with hospices, nursing homes, and home health care attending to the needs of the dying and bereaved.

Mental Health
In the early 20th century, psychologists such as Watson were pioneering the idea of the family as a locus of mental health. Early family therapy dates back to the 1930s and 1940s, and many of the early family therapists were medical doctors. Freud, considered the father of psychoanalysis, established the requirement that psychoanalysts be medical doctors; therefore, many early pioneers in the field of family therapy completed medical training in order to practice in the mental health field. The convergence of family therapy and family medicine led to the founding of medical family therapy in the 1990s. The specialty of family medicine arose in 1969 and developed in parallel with the mental health field. Currently, at least half of mental health services are provided by primary care physicians. Furthermore, the majority of psychopharmacological medications are prescribed by primary care physicians, and a majority of visits to primary care physicians involve psychosocial concerns. In effect, primary care physicians and physicians from other specialties have frequently served as default mental health providers. Despite these clear associations, physical health and mental health have long been viewed as separate entities. Integrated care between family therapists and physicians serves as a bridge between the two fields.
Experts in the fields of family therapy and family medicine began to acknowledge the inseparability of mind and body, and to insist that the medical model account for how psychological factors influence disease progression. A biopsychosocial model was necessary to understand the health of patients. In this new method of assessing a patient, physicians were encouraged to understand the cultural, social, and emotional context of the patient’s
life. The development of the biopsychosocial model created a systemic way of assessing the health of patients: all factors—physical, emotional, and social—are interwoven, and each influences the others.

Aaron S. Cohn
Dixie Meyer
Saint Louis University

See Also: African American Families; Caregiver Burden; Caring for the Elderly; Child Rearing Experts; Child Rearing Manuals; Child-Rearing Practices; Contraception: IUDs; Contraception: Morning After Pills; Contraception and the Sexual Revolution; C-Sections; Death and Dying; Disability (Children); Disability (Parents); Dr. Ruth; Family Medicine; Family Therapy; Fertility; Health of American Families; HIV/AIDS; Infertility; Masters and Johnson; Medicaid; Medicare; Mental Disorders; Native American Families; Nature Versus Nurture Debate; Nursing Homes; Obesity; Polio; Spock, Benjamin; Watson, John B.

Further Readings
D’Emilio, John and Estelle B. Freedman. Intimate Matters: A History of Sexuality in America. 3rd ed. Chicago: University of Chicago Press, 2012.
Marlowe, Frank W. “Hunter-Gatherers and Human Evolution.” Evolutionary Anthropology, v.14/2 (2005).
Riley, James C. “Smallpox and American Indians Revisited.” Journal of the History of Medicine and Allied Sciences, v.65/4 (2010).
Rosen, George. A History of Public Health. Baltimore, MD: Johns Hopkins University Press, 1993.

Family and Medical Leave Act

Balancing work and family responsibilities is challenging for many Americans. Through the years, as the number of women in the workforce has increased, so has the need for family and medical leave. Congress passed the Family and Medical Leave Act (FMLA) in 1993 to help employees deal with these challenges by enabling them to take
reasonable, job-protected, unpaid leave for certain family and medical reasons. In addition to caring for family members, employees may experience serious illnesses that require them to take time off beyond typical sick leave. Americans are also living longer today, and many in the workforce are caring for elderly parents. The FMLA was created to support these employees; it also allows them to take care of sick children. In 2013, women made up approximately 47 percent of the workforce, more than ever before. Women also comprised 66 percent of the caregivers in the U.S. workforce, a figure that continues to increase. Balancing work and family responsibilities is a mounting challenge for all caregivers, and with the Baby Boomer generation now moving into its golden years, the challenge continues to grow. Even years before the FMLA was enacted, employers recognized that it was unfair to require individuals to choose between work and family when medical issues arose, such as the birth or adoption of a baby, employee illness, or illness in the employee’s family. Realizing that employees are more productive when they are able to take reasonable leave for these issues, some employers voluntarily provided such a benefit. Since 1993, the FMLA has ensured that leave is available in these situations; the Wage and Hour Division of the Department of Labor is responsible for overseeing the program. Employers mandated to provide leave under the law are private-sector employers with 50 or more employees on the payroll during 20 or more workweeks in the current or preceding calendar year; public agencies at the federal, state, and local levels, regardless of the number of employees; and public and private elementary and secondary schools, regardless of the number of employees. The FMLA entitles eligible employees to take up to 12 workweeks of unpaid leave a year and requires both the person’s job and group health benefits to be maintained during the leave period.
At a minimum, employees must be able to return to the same or an equivalent job at the end of their leave. To be eligible to take FMLA leave, an employee of a covered employer must have worked for the employer for a total of 12 months, have worked at least 1,250 hours over the previous 12 months, and work at a
location where at least 50 employees are employed by the employer within 75 miles. An employee may take leave to care for a newborn within one year of birth; to care for an adopted child within one year of placement; or to care for a spouse, child, or parent with a serious health condition. An individual may also take leave for a personal serious health condition: a condition requiring hospitalization, a condition that incapacitates the employee for more than three consecutive days and requires ongoing medical treatment, a chronic condition that causes periodic incapacitation, or pregnancy. The 2008 National Defense Authorization Act (NDAA) amended the FMLA to provide two types of military family leave for FMLA-eligible employees. The 2010 NDAA and the subsequent Final Rule amended the FMLA yet again to expand military-related leave entitlements. (Some provisions of the Final Rule took effect upon enactment on February 15, 2012, while others took effect beginning March 8, 2013.) Eligible employees may take FMLA leave for specified reasons related to certain military deployments of their family members. Additionally, they may take up to 26 weeks of FMLA leave in a single 12-month period to care for a covered service member with a serious injury or illness. A special eligibility provision for airline flight crews was also included.

Joel Fishman
Duquesne University/Allegheny County Law Library
Karen L. Shephard
University of Pittsburgh, Barco Law Library

See Also: American Association of Retired Persons; Caregiver Burden; Caring for the Elderly; Child Care; Health of American Families; Military Families; Working-Class Families/Working Poor.

Further Readings
Decker, Kurt H. Family and Medical Leave in a Nutshell. St. Paul, MN: West Group, 2000.
Grabe, Erin M. “Gradual Return to Work: Maximizing Benefits to Corporations and Their Caregiver Employees.” Journal of Corporation Law, v.37/3 (2012).
Jenero, Kenneth A. and Staci L. Ketay.
“The Evolving FMLA: Guidance From Some Recent Court Decisions.” Employee Relations Law Journal, v.27 (2002).
National Alliance for Caregiving and AARP. Caregiving in the U.S. Washington, DC: Author, 2009.
Susser, Peter A. “Employer Perspective on Paid Leave & the FMLA.” Washington University Journal of Law and Policy, v.15 (2009).
U.S. Bureau of Labor Statistics. “BLS Reports: Women in the Labor Force: A Databook” (February 2013). http://www.bls.gov/cps/wlf-databook-2012.pdf (Accessed July 2013).
U.S. Department of Labor, Wage and Hour Division. “Family and Medical Leave Act.” http://www.dol.gov/WHD/fmla/2013rule (Accessed July 2013).

Family Businesses

A family business is a firm owned and managed by members of a single family, or a firm in which the family owns sufficient voting shares to appoint top management and determine the firm’s strategic direction and approaches. Family-owned businesses are a vital force in the U.S. economy, employing almost two-thirds of all workers and creating more than three-quarters of all new jobs. Family businesses were key players in industrialization; farming, manufacturing, retail sales, and services were among the earliest sectors for family businesses. Most businesses during the colonial and antebellum periods were family enterprises, operated for the primary purpose of family survival, in which most family members worked without pay. Women have always participated in family businesses, and historically, most minority small businesses have been family owned and operated. Nonfinancial goals are important drivers of most family businesses. Family-owned businesses last 24 years on average; the top causes of family business failure include inadequate estate planning, failure to plan for the transition to the next generation, and inadequate funds to pay estate taxes. More than 1,000 U.S. family businesses have been in existence for at least 100 years.

Definition of Family Business
The definition of family business varies based on the criteria used. Most narrowly, a family business is owned and managed by members of a single family. However, family businesses may be family owned or family controlled. Family ownership means that family members own enough
voting shares of company stock or fill enough seats on the board of directors to decide who serves as the chief executive or general manager of the company. “Family controlled” means that family members fill major management positions; for example, a member of the family serves as the chief executive or general manager of the business. A narrow, ownership-based definition distinguishes private ownership (family members control more than 80 percent of voting stock shares), majority ownership (family members control 50 to 80 percent of voting stock shares), and minority ownership (family members control between 20 and 50 percent of voting stock shares). A broader definition counts a business as a family business when at least 4 or 5 percent of the business capital belongs to one or more members of one or more families, and one or more family members holds a position on the board of directors. This wider definition reflects businesses in which the family’s (or families’) ownership stake is large enough to appoint top management, determine the firm’s strategic direction and approaches, and impede efforts of other shareholders to join together and influence who runs the company.

Family Business Statistics
Family-owned businesses are a vital force in the U.S. economy, comprising about 90 percent of all businesses. According to 2011 figures, about 5.5 million businesses were family enterprises, ranging from the smallest two-person shop to giant corporations such as Walmart Stores Inc. (the largest American family-owned business), the Ford Motor Company, and Marriott International Inc. Family-owned businesses generate about 57 percent of the U.S. gross national product (GNP), the total value of the goods and services produced by a country’s residents and businesses, at home and abroad. Families control more than a third of all Fortune 500 companies.
The return on assets (ROA) for family businesses is higher than the ROA for other businesses.

History of American Family Businesses
American families have started and run businesses from the colonial and antebellum periods through the Industrial Revolution into the present day. The colonial period refers to the time between
settlement of the first colony in Jamestown, Virginia, in 1607 to the founding of the country in 1776. Antebellum refers to the period following American independence until the outbreak of the American Civil War in 1861. The American Industrial Revolution, which occurred from 1820 to 1870, refers to the shift from production of goods by hand and at home to production by machines in factories. The Industrial Revolution had a profound impact on the nature of family businesses, particularly in farming and manufacturing. Many family businesses adapted to transformations in technology and markets, and many family businesses prospered during the first great period of industrial expansion between 1880 and 1920. The founders of family businesses who adapted and thrived include still-recognizable names such as Andrew Carnegie (steel); Marshall Field (retailing); Jay Gould (railroads); J. Pierpont Morgan (banking); Charles Goodyear (rubber vulcanization); Samuel F. B. Morse (the telegraph); E. B. Bigelow (carpets); John Deere (farm machinery); I. M. Singer (sewing machines); Thomas Edison (the light bulb, phonograph); George Eastman (the camera); Cyrus McCormick (grain harvesting machinery); Henry Ford (automobiles); P. D. Armour (meat packing); E. Remington (rifles); and Alexander Graham Bell (the telephone). The nature of family businesses shifted over time, from the colonial era to industrialization, to the Great Depression and the economic collapse that followed the stock market crash of 1929 and lasted until the end of World War II. Most businesses during the colonial and antebellum periods were family enterprises operated for the primary purposes of family survival and family advancement. The family farm is a prime example. During colonial times, most families farmed for self-sufficiency, meaning they mainly grew food and raised meat for their personal consumption. Family farms were mainstays of agriculture during the antebellum period.
Family farms that were sufficiently large were able to produce cash crops for the market. The emphasis of family farms began to shift from self-sufficiency to farming of crops and other foodstuffs that were sold locally and long-distance. Even with the shift to production for the market, family farming provided nonfinancial rewards, such as independence and security, even when other, more financially lucrative industries emerged. In the late
1800s, the tension between family self-sufficiency and the lure of the market economy continued to grow, with a concomitant increase in farming on a larger scale. Family farms still dominated the agriculture industry in the early 1900s, but farming was beginning to shift toward agribusiness, in which several agricultural concerns combined, with profit as the major motive. These businesses hired farm workers instead of family members to carry out most jobs. Family farmers typically relied on the unpaid labor of women and children in the family. The number of small family farms sharply decreased between 1920 and 1945, but some family farmers continued to prosper by specializing in fruits, nuts, or other luxury items. After World War II, family farmers encountered increasing difficulty in raising the financial resources that they needed for their farms to grow and prosper, such as tractors and other mechanized equipment, hybrid seeds, and artificial fertilizer. Farmers also needed to farm more land in order to be profitable and competitive with agribusinesses, but obtaining additional acreage was often cost prohibitive. While agribusiness has replaced family farms as the dominant model, a return to small-scale family farming for self-sufficiency and selling to niche markets is occurring alongside large-scale industrial food production in the 21st century, especially as the market for organic and artisan produce expands. Prior to 1880, skilled tradespeople operated family businesses as shoemakers, cabinet makers, and saddlers, producing and selling goods they made out of wood, metal, or leather. In colonial times, these families took in apprentices who learned the trade without pay. Several famous figures in the American Revolution worked in family businesses, including Paul Revere and John Hancock. 
New methods of production and sources of power during the antebellum period changed how these skilled artisans produced their goods, yet most skilled artisans continued to run and operate their enterprises as family businesses. Paul Revere’s family business in copper and metals lasted for five generations; even after a merger in 1990, the Revere family continued to operate the business. Despite major shifts in production during the American Industrial Revolution from 1880 to 1920, many manufacturers continued to operate as family
businesses. For example, Pittsburgh iron and steel companies were dominated by family businesses that survived well into the mid-20th century. Retail businesses sell goods or commodities directly to consumers, such as food, clothing, and supplies. In the colonial and antebellum periods, most retail shops were run by a sole proprietor in a single location. Family members assisted as unpaid help. Extended family members sometimes ran chains of shops located in separate geographic areas. The shift to chain stores and department stores accelerated from 1880 to 1920, although family-owned retail remained a dominant model well into the late 20th century, with family stores specializing in groceries and other goods. After World War II, small retailers began to form associations to help them access discounts for volume purchasing. These associations, such as the Independent Grocers’ Alliance (IGA) and retailer-owned cooperative warehouses, helped to reduce business costs. Service businesses provide expertise to customers. Examples of services include banking, accounting, insurance, dry cleaners, and tailors. Prior to 1880, service businesses providing insurance and banking typically operated as family businesses out of a single location. Banks, particularly in the New England region, often functioned as the financial arms of extended families. Although the banks raised funds through deposits and public offerings of stock—much like present-day banks—families retained control through ownership of sufficient shares of stock. Many of these family-owned and controlled banks had great longevity. Examples include the Providence Bank, which was controlled by the Brown, Ives, and Goddard families from 1791 to 1926; and the Merchants Bank of Providence, which was controlled by the Richmond, Chapin, and Taft families from 1818 to 1926.
Women-Owned Family Businesses
Women have always participated in their family businesses as unpaid workers and as deputies when their husbands were unavailable due to travel, disability, or death. Prior to 1880, women were legal dependents of their husbands or fathers, a status reflected in their work roles in family businesses. As deputy husbands, women assumed many roles in family businesses, sometimes through an official designation, but more often out of custom within their community. For example, Abigail Adams managed the family farm

for her husband, John Adams, as well as his business affairs, while he was engaged in politics and serving as president of the United States. In the antebellum period, husbands and wives often ran businesses together, such as restaurants, saloons, and boardinghouses, and sometimes brothels; in this era, prostitution was typically a family business. The shift to agribusiness that began at the turn of the 20th century mechanized much of men’s work on the farm, whereas women’s work generally remained unchanged from colonial days: they cooked, churned butter, preserved food, cleaned and sewed, and raised poultry. The shift away from family farming to commercial farming replaced women on the family farm with hired farmhands, typically male. Historically, ethnic cultures supported women’s participation in business, often as wage earners. By the start of World War II, immigrant women were often employed in family businesses in New York City and Kansas City, Missouri. Current figures indicate that women chief executive officers or presidents lead about a quarter of family businesses. A survey of family businesses indicates that almost a third have a woman in line as the next successor, and women hold top management team positions in almost two-thirds of family-owned businesses. Researchers have found that mothers play a crucial role in transferring values between the generations. Traditionally, family businesses were passed from fathers to sons, but women are now much more likely to take the reins. The proportion of women-owned family businesses has grown by almost 40 percent in recent years, and some indicators show that women-owned family businesses are more successful than those run by men.

Minority-Owned Family Businesses
From 1880 to 1920, most minority businesses were small and family owned and operated, and family members often worked without pay.
Examples include grocery stores owned and operated by Jewish families and Chinese-owned and operated restaurants, groceries, and laundries. Asian families often ran these businesses in extended networks. In the second half of the 20th century, many Korean and Vietnamese immigrant families started businesses. Businesses serving the African American community were often small, family owned, and developed out of necessity in order to provide goods and services to people in their community. Examples of
family businesses include grocery stores, dry cleaners, barber shops and beauty salons, banks, and insurance companies. Compared to their white and Asian American counterparts, African Americans and Latinos have been and still are substantially less likely to own and operate a business. These disparities are partly attributed to lower educational attainment, fewer assets for start-up or expansion, and fewer parents who were business owners. However, minority ownership of businesses sharply increased in the 1990s, and is still rising. According to the U.S. Census, the 30 percent growth rate of minority-owned businesses from 1992 to 1997 was more than four times the overall business growth rate. From 1997 to 2002, the growth rate of minority-owned businesses was 30 percent, compared to 10 percent overall. The growth rates for African American-owned, Asian-owned, and Hispanic-owned businesses for that five-year period were 45, 24, and 31 percent, respectively. According to data from a 2002 survey by the U.S. Department of Commerce Minority Business Development Agency, the largest category of African American-owned businesses was health care and social assistance (20 percent); the largest category for Asian-owned businesses was other services (17 percent), followed by professional, scientific, and technical services (14 percent) and retail trades (14 percent). The largest category for Hispanic-owned businesses was other services (16 percent), followed by construction (14 percent) and administrative support, waste management, and remediation services (13 percent). The Minority Business Development Agency data indicated that four states—California, Texas, Florida, and New York—account for more than half of all minority-owned businesses. Almost 40 percent of minority-owned firms are concentrated in the greater metropolitan areas of Los Angeles, San Francisco, Atlanta, New York City, Miami, Washington, D.C., Chicago, and Houston.
Motivation and Challenges
While financial success is necessary for the survival of a family business, research has revealed that profit is not usually the primary motivator. Nonfinancial goals, meaning goals at the family and company level that have no direct monetary value, are important drivers of most family businesses.
Research has documented family-level nonfinancial goals that include family cohesiveness and loyalty; harmonious family relationships; autonomy and control; pride; and status, respect, name recognition, and goodwill in the community. Nonfinancial goals also shape family identity. The top causes of family business failure include inadequate estate planning, failure to plan for the transition to the next generation, and inadequate funds to pay estate taxes. The aging generation of entrepreneurs who founded their businesses after World War II will create what may be the largest transfer of wealth across generations in the history of the United States. However, inability to pay estate taxes could force some of these businesses to close. Additionally, problems of leadership succession threaten the longevity of family businesses. Sometimes, the family lacks potential successors who can fill the role, and sometimes the involvement of many family members in the company’s day-to-day management makes it difficult to select a successor. Estimates are that by 2017, more than 40 percent of current family business owners will retire, yet fewer than half of these owners report having chosen their successor. In order for family businesses to successfully transfer to the next generation, succession must be planned and carefully managed. Recent data indicate that only about a third of family businesses successfully transfer to the next generation. Family business leaders are considering new models of succession planning to address concerns that subsequent generations may lack sufficient management aptitude to run the business. A recent survey revealed that slightly more than half of U.S. family business leaders expect the next generation to run the business; almost a quarter are planning to bring in outside management at the time of ownership transfer. 
A common expectation for family businesses is that as the business becomes larger and more complex, it will evolve from a family enterprise to a public company within two or three generations, or cease operation. Failure to engage in strategic, estate, operational, and governance planning can lead to the failure of a family business. Involving key stakeholders, such as boards of directors and family members, in planning and developing processes to deal with the complexity of the transfer can be the difference
between success and failure. Succession teams can be more effective than a sole successor, and appropriate selection of team members and clearly articulated structures are necessities for an effectively functioning team. The economy, competition, innovation, and talent are among the other critical challenges facing American family businesses. To remain domestically and globally competitive, family businesses must improve their current offerings and create and offer new products and services. They must also adapt how they engage with customers and revise their business models in light of changing technology and circumstances. For example, even many of the smallest family businesses now have a Web site, and many sell their goods online. Family businesses must remain innovative and attract and retain talented people, including employees from outside the family. Turnover of employees from outside the family, however, is likely to be much higher than turnover of family members.

Longevity
More than 1,000 U.S. family businesses have been in existence for at least 100 years, with most of them now in the fifth generation of family ownership. About two-fifths of these businesses are in manufacturing; almost a fifth are in the insurance and finance sectors. More than 10 percent are retail companies. About three-fifths are privately owned, and more than half have at least 500 employees. According to recent research, the longevity of some family businesses may be attributable to the efficiency of continued family control, a long-term outlook that supports strategic planning and positioning, fewer human resources issues, higher company values, greater emphasis on new entrepreneurial efforts, and an environment that supports innovation. The importance of transferring family values, along with financial wealth, to subsequent generations may also contribute to longevity. These values include charitable giving, philanthropy, and volunteering.
The Zildjian Cymbal Company is the oldest family business in the United States, and is one of the oldest companies in the world. The company, founded in 1623 in Constantinople, Turkey, is now located in Norwell, Massachusetts, where it is run by the 14th generation of family members. Other family businesses that are among the oldest and
largest in the United States include Cargill Inc., a commodities business founded in 1865; Levi Strauss, an apparel company founded in 1853; and Crane & Co., a paper manufacturing company founded in 1801.

Keri L. Heitner
University of the Rockies

See Also: Child Labor; Department Stores; Family Farms; Industrial Revolution Families; Intergenerational Transmission.

Further Readings
Aronoff, Craig E. “Megatrends in Family Business.” Family Business Review, v.11/3 (1998).
Blackford, Mansel G. A History of Small Business in America. Chapel Hill: University of North Carolina Press, 2003.
Bruchey, Stuart Weems, ed. Small Business in American Life. Washington, DC: Beard Books, 2003.
Colli, Andrea. The History of Family Business, 1850–2000. Cambridge: Cambridge University Press, 2003.
Family Firm Institute. http://www.ffi.org (Accessed August 2013).
PwC. “Family Firm: A Resilient Model for the 21st Century: PwC Family Business Survey 2012.” http://www.pwc.com/en_GX/gx/pwc-family-business-survey/assets/pwc-family-business-survey-2012.pdf (Accessed August 2013).
Robinson, S. and H. A. Stubberud. “All in the Family: Entrepreneurship as a Family Tradition.” International Journal of Entrepreneurship, v.16 (2012).
Zellweger, Thomas M., et al. “Why Do Family Firms Strive for Nonfinancial Goals? An Organizational Identity Perspective.” Entrepreneurship Theory and Practice, v.37/2 (2011).

Family Consumption

Consumption—the purchase and use of goods and services by families—is not only the behavior through which families meet their needs and wants but also the main source of U.S. economic activity. Therefore, both the private and public sectors encourage and enable consumption expenditures by families. However, the importance of the family’s role as consumers to the
economy became pronounced in the 20th century. Further, family consumption patterns have changed over time due to various individual, relational, social, and environmental factors, and recent trends in consumer behavior have had problematic consequences for personal well-being and the environment. Families are, and have always been, key to the economic well-being of society. Not only do families provide the needed labor force, they also expend their resources to meet their needs and wants. Such consumer expenditures bolster the economy; in fact, families are the largest market for goods and services in the United States. In 2013, consumer spending was estimated to be 71 percent of gross domestic product (GDP), making it the main source of U.S. economic activity. Currently, the U.S. economy is mostly service based. Because of the purchasing power of families and the capitalist system that provides a wide array of accessible services and products, families’ primary economic role is that of purchasers and consumers, rather than producers and providers of goods. Families have always been involved in the exchange of goods and services, although the emphasis of their place in the economy has varied over time.

Rise of Family Consumption
In the past, families were more self-sustaining than they are today. Early settlers commonly produced their own food and shelter. When they had more food than needed or were unable to create an item themselves, the excess was used to trade with others in the community for services or products. Technological advancements and further settlement aided the growth of towns and communities, and retail and wholesale businesses were established to handle the exchange of goods. Over time, families became less self-sustaining and raised livestock and crops more for exchange than for personal consumption. At this time, the U.S. economy was primarily based on the production and distribution of agricultural goods.
After the Civil War and industrialization in the United States and Europe, families’ active participation in the U.S. economy increased. Because mass production was now possible and exchange was no longer limited to local market areas (thanks to the railroad, telegraph and telephone lines, and other mechanical and electrical inventions), more people worked in the mass production and
distribution of goods. Individuals and families used their earned wages to pay for the services and products they needed rather than producing or trading goods. Although the Great Depression devastated the economy, the country was able to rebound due to wartime production for World War II and massive government spending. Passage of the Employment Act of 1946 further solidified the place of families as labor sources in the economy by creating the expectation of full employment, stable prices, and increased production. The government was also given the means to shape the economy, with Congress now holding authority over tax policies and the power to expend funds to bring economic output to a desired level of activity. Such actions by the government—manipulation of taxes and interest rates and increased government spending—have been used during other downturns over the past century to increase consumer spending and thereby improve the economy. Because of the Industrial Revolution and government programs (including the welfare, labor, and financial policies of the New Deal), the family was vital to the economy as a source of labor power. However, as production decreased and economic growth stagnated in the 1960s and 1970s, the family’s role in the economy also began to shift. Although families were (and still are) sources of labor and consumers of goods, their consumption was now the source of economic development and the growth of service industries in the United States, rather than production. The social safety net of government programs used to support the economic well-being of families prior to the 1970s was replaced in the 1980s and 1990s with a greater social emphasis on families’ individual responsibility and self-sufficiency, and on their role as consumers.
Although families drove economic growth and capital accumulation over the past century through government support and their consumption expenditures, the recession at the beginning of the 21st century exposed the economic deterioration of families and their rising household debt. Families' earned income could no longer entirely cover their levels of consumption. Since then, the economic standing of families has improved, but their role as consumers remains vital to the economic well-being of the country, and household debt levels continue to be a source of concern.


Family Consumption

Social Influences and Conspicuous Consumption

Government policies and economic development in the United States were only part of the influence on family consumption. The U.S. economic system is rooted in European economic development that occurred prior to the 1800s. Consumerism also shaped the economy of Europe in past centuries, rising to prominence during the reign of Queen Elizabeth I, when the relationship between families and the economic system began to depend more on individuals' willingness to engage in conspicuous consumption. Such consumption was used to bolster one's position in society, particularly for noble families. In affluent countries, consumption began to be driven not only by survival needs but also by the desire for social standing.

In The Theory of the Leisure Class (1899), Thorstein Veblen coined the phrase conspicuous consumption to describe expenditures made by families in order to secure their place in the social hierarchy. He saw such consumption to achieve status as wasteful, and he was critical of those in the "leisure class" who spent on unneeded or extravagant goods in order to gain standing in society. Those in lower social classes observed this behavior and attempted to emulate such consumption as they were able. In this way, consumption had a trickle-down effect as families sought to emulate those who were one step higher in social standing. This, combined with mass production and prosperity in the United States, increased discretionary spending by all families. This social influence on family consumption to improve one's status continued. The still commonly used phrase keeping up with the Joneses was adopted in 1949 by Harvard economist James Duesenberry to explain the social dimensions of consumption.
In Income, Saving, and the Theory of Consumer Behavior, Duesenberry asserted that one's neighbors serve as a reference group: people compare themselves to others and attempt to maintain a similar level of consumption. Because people usually live in neighborhoods with others of similar income and standing, the focus is on doing as well as, or slightly better than, those around them. Although other economic theories have since been proposed, the "Joneses" phenomenon is still a driving force in family consumption patterns and conspicuous spending.

Family Consumption Patterns

The U.S. Department of Labor's Bureau of Labor Statistics (BLS) has conducted surveys of consumer expenditures on a regular basis. Survey data show how families spend their fiscal resources and how consumption patterns vary over time. The surveys also analyze the impact of various characteristics, including the age, education, race, origin, and occupation of the reference person; composition of the consumer unit; housing tenure and type of area; income before taxes; number of earners in the consumer unit; quintiles of income before taxes; and region of the consumer unit. However, a significant limitation of information from this source is that the BLS uses the terms family and household interchangeably to refer to one or more people occupying a housing unit.

From the data available, consumption patterns of families over the past 100 years have demonstrated the importance of meeting needs for food, clothing, housing, heating and energy, health, transportation, furniture and appliances, communication, culture and education, and entertainment. However, the emphasis and the percentage of income devoted to these needs have changed over time. Between 1901 and 2003, the average income of households, adjusted for inflation, increased threefold ($750 to $2,282, in 1901 dollars). At the same time, expenditures more than doubled ($769 to $1,848), while household size decreased from an average of 4.6 to 2.6 people. In 1901, almost 80 percent of a family's spending was devoted to food, clothing, and housing; by 2003, families spent only 50.1 percent of their income to meet such needs. Discretionary spending rose from 20 percent of expenditures to 43.3 percent. Transportation expenditures also changed: almost 90 percent of households in 2003 had at least one vehicle, compared with closer to 40 percent in 1936. One of the most significant changes was that a greater portion of income was spent on housing, with less spent on food.
In 1901, 23.3 percent of income went to housing and 42.5 percent to food; the numbers were similar in 1949, with 40 percent for food and 26.1 percent for housing. In recent years, however, only 15 percent of income has gone to food, while 41 percent has gone to housing. Even with this change, the percentage of households that own the home in which they live has increased from 19 percent to almost 70 percent.
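The growth multiples quoted above can be checked directly from the 1901-dollar figures (a quick illustrative calculation based on the numbers cited from the BLS report; the variable names are ours, not part of the original data):

```python
# Household income and expenditures in constant 1901 dollars, and average
# household size, as cited above from the BLS "100 Years of U.S. Consumer
# Spending" report.
income_1901, income_2003 = 750, 2282
spend_1901, spend_2003 = 769, 1848
size_1901, size_2003 = 4.6, 2.6

# Real income roughly tripled; real spending more than doubled.
print(f"Income growth: {income_2003 / income_1901:.1f}x")
print(f"Spending growth: {spend_2003 / spend_1901:.1f}x")

# Because average household size fell from 4.6 to 2.6 people, spending
# per household member grew faster than spending per household.
per_person_growth = (spend_2003 / size_2003) / (spend_1901 / size_1901)
print(f"Per-person spending growth: {per_person_growth:.1f}x")
```

The per-person comparison shows why the falling household size matters: although spending per household only a little more than doubled, spending per household member roughly quadrupled in real terms.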



Current Consumption Patterns

According to the BLS, in 2012, average annual household expenditures reached a high of $51,422, surpassing the previous peak of spending in 2008 and increasing from the low of $48,109 in 2010. The greatest expense for households was housing ($16,887), followed by transportation ($8,998), food ($6,599), personal insurance and pensions ($5,591), miscellaneous expenses ($3,557), health care ($3,556), entertainment ($2,605), cash contributions for charitable giving and child/spouse support payments ($1,913), and apparel and services ($1,736). Trends that may slow future growth of family consumer spending include stagnant incomes, an increase in the wealth gap, higher costs of credit and greater difficulty acquiring loans, fragile consumer confidence, and a reversal of stimulus spending by the government.

Factors Influencing Consumption

In addition to government policies and practices, family consumption is influenced by family circumstances, social influences, and psychological characteristics. Family circumstances include family structure and size, as well as members' age and life stage, gender, socioeconomic status (SES), education level, occupational status, and lifestyle. SES is a frequently examined variable in consumer behavior. With fewer resources, family consumption expenditures focus more on necessities than on wants. Low-income families, for example, spend $7 of every $10 on the basic living expenses of food, shelter, and transportation. Further, parents with higher levels of income and education are more likely to involve their children in the process of consumer purchases. Children in single-parent families also have greater influence over family consumption than those in step- or first-married families.
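The 2012 category dollar amounts quoted earlier in this section can be expressed as shares of the average household budget (an illustrative sketch; the categories sum to slightly more than the published $51,422 total because of rounding in the source figures):

```python
# Average annual household expenditures by category, as cited above from
# the 2012 BLS Consumer Expenditure Survey.
total = 51_422
categories = {
    "housing": 16_887,
    "transportation": 8_998,
    "food": 6_599,
    "personal insurance and pensions": 5_591,
    "miscellaneous expenses": 3_557,
    "health care": 3_556,
    "entertainment": 2_605,
    "cash contributions": 1_913,
    "apparel and services": 1_736,
}

# Print each category's share of the published total, largest first.
for name, dollars in sorted(categories.items(), key=lambda kv: -kv[1]):
    print(f"{name:32s} ${dollars:>6,}  ({dollars / total:.1%})")

# Housing alone absorbs roughly a third of the average household budget.
print(f"Housing share: {categories['housing'] / total:.1%}")
```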
Social influences include culture, reference and membership groups, social status, media and technology, innovation, corporate America, and family experiences (e.g., intergenerational transmission, parenting styles, and communication and relationship patterns). For instance, working-class consumers tend to be more loyal to brands. Children learn consumption behaviors not so much from what parents tell them as from what they observe. In fact, brand preference is passed on simply by children seeing what items parents bring into the home. Intergenerational transmission has also been found with impulse
shopping, perceived value of private labels, willingness to try new things, value consciousness, convenience orientation, and prestige sensitivity. Psychological characteristics such as personality, emotions, motivations, and beliefs also shape family consumption behaviors. For instance, individuals with more cautious personalities are less likely to try new things, whereas those who are more innovative tend to be early adopters of new technology.

Consequences of Family Consumption

High levels of household consumption have affected personal and social well-being. In 2010, 68 percent of families had credit cards, with over half of these carrying a monthly balance. Although this is a decrease from 2007, when almost three-quarters of families had cards and 61 percent of those families carried balances, such high levels of unsecured debt place financial burdens on individuals and families, who must then work longer hours to fund such consumption. This stress, and the time lost for other aspects of life, can take a toll on personal and relational well-being. Time and effort are also expended to maintain possessions (cleaning, storing, and upgrading them) that might otherwise be spent with friends and family. Increased obesity and inadequate savings are other consequences that economists have identified from current rates of family consumption.

Current levels of household consumption affect not only families but also the environment. For instance, while the United States has less than 5 percent of the world's population, it consumes more than a quarter of the planet's fossil fuels. With such heavy use of the Earth's natural resources and the resultant waste, the World Wildlife Fund's Living Planet Report estimates that the planet's ecological health has declined 35 percent since 1970.
Recent focus has been placed on finding means to decrease the environmental impact of family consumption, but social, political, and corporate entities still encourage consumption by families to maintain the U.S. economy.

Shannon E. Weaver
Rebecca Ruitto
University of Connecticut

See Also: Credit Cards; Dual-Income Couples/Dual-Earner Families; Living Wage; Rational Choice Theory; Standard of Living.

Further Readings
Chao, Elaine and Kathleen Utgoff. "100 Years of U.S. Consumer Spending: Data for the Nation, New York City, and Boston." Department of Labor Report 991. U.S. Bureau of Labor Statistics (2006). http://www.bls.gov/opub/uscs/report991.pdf (Accessed March 2014).
Oakley Hsiung, Rachel, et al. "Social Foundations of Emotions in Family Consumption Decision Making." Social Influence, v.7 (2012).

Family Counseling

Family counseling refers to the treatment of emotional and psychological issues in a family unit. The goal of family counseling is to improve the family's ability to manage difficulties that may impede emotional functioning, communication, the management of mental disorders, and healthy interpersonal relationships. Families in the United States practiced some tenets of family counseling long before the profession existed, such as elders providing advice and instruction to younger family members about interpersonal relationships and other issues of family life. Since its emergence in the mid-20th century, the field of family counseling has developed into a respected profession that helps millions of people each year. Family counselors believe that addressing an individual's mental health issues is best achieved by treating the family dynamic as a whole, rather than by focusing on the individual.

Prior to the 1940s, family counseling as a profession was almost nonexistent. At that time, three social factors worked against the expansion of the profession. The first was the tradition of individuals confiding marital and family concerns to professionals with whom there was already an established relationship; usually, clergy, medical doctors, and lawyers were consulted instead of mental health professionals. The second was the expectation that individuals should solve their own problems, compounded by the loss of esteem in the community that individuals faced if they were unable to handle their problems. The third factor was the dominant psychological theories of the time. The two most prominent theories, psychoanalysis and behaviorism, were philosophically opposed to working with more than one person at a time. In fact, psychoanalysts believed that more than one client in the counseling office would prevent transference, the mechanism by which insight and change occur, from taking place.

However, a number of factors in the late 1930s and early 1940s combined to make family counseling an acceptable option for American families. The first was the increase in the number of women enrolled in college who took courses in family life. Educators responded to this demand by creating courses on such topics as parenting, marriage, and family living. The second was the establishment of marriage counseling, popularized by a monthly feature in Ladies' Home Journal titled "Can This Marriage Be Saved?"; the feature began in 1945 and continues into the 21st century. Other factors included the establishment of Marriage and Family Living, a journal that presented information about various aspects of family life, and the work of county home extension agents who educated families on the dynamics of family situations. The 1940s also brought the establishment of the American Association of Marriage Counselors, an organization that devised standards for the practice of working with couples.

During the same period, the development of family counseling was spurred on by the aftermath of World War II and by a study of the families of individuals suffering from schizophrenia. The study, published in the 1940s, reported survey results from 50 families that had a member diagnosed with schizophrenia: the majority of persons with schizophrenia came from broken homes and/or had seriously disturbed family relationships.
Along with these findings, the events of World War II brought numerous changes and incredible stress to family systems: men had been separated from their families, women had begun to work in factories, and many loved ones returned from the war disabled. These events provided the impetus to focus on mental health work with families.

In the 1950s, family counseling as a profession hit its stride, through continued work with persons diagnosed with schizophrenia and their families, as well as through the development of influential theories of family functioning, such as double-bind theory, systems theory, contextual therapy, and conjoint couples therapy. During the 1960s and 1970s, the idea of working with couples and families became much more widely accepted. This was in part because of the major figures associated with this type of work, the increased number of training institutes and family counseling associations, the incorporation of foreign therapies and therapists, and the further refinement of family therapy theories. In the 1990s, this refinement brought postmodern theories of working with families into existence. These theories, including solution-focused therapy and narrative therapy, are founded on the idea that truth is relative; they also emphasize that families are the experts on their own experiences.

Meanwhile, the American family itself was changing. Divorce rates have risen dramatically since the introduction of no-fault divorce in the 1970s (the United States has the highest divorce rate of any industrialized nation); spouses have undergone power shifts as women have gained more economic parity with men; cohabitation, rather than marriage, is increasingly common; and blended families, single-parent families, and LGBT families are more prevalent. This marks a shift away from the ideal of the companionate marriage popularized in the early 1920s toward self-aspiration, enhanced freedom, and egalitarian relationships. In the 1970s, many family counselors incorporated feminist approaches that analyzed the changing power structure between men and women in family systems. As a result of this feminist contribution, families in counseling were encouraged to recognize the impact of social, cultural, and political factors on their lives and to move beyond gender stereotyping. Families were also invited to reflect on harmful internalized social standards associated with traits typically defined as "masculine" or "feminine" in the culture at large.

Another key development in the field of family counseling is the shifting American cultural landscape resulting from the expansion of immigrant populations.
While immigrants in the early 20th century primarily came from European countries, many immigrants in the later 20th and early 21st centuries came from Asia, India, the Middle East, and Central America. The focus on multicultural sensitivity in family counseling intensified in the late 1960s, when antidiscrimination sections were added to professional ethical codes. This led to the development of culturally competent therapy practices. In the most recent code of ethics, family counseling practitioners are advised to
understand cultural differences among clients and to engage in culturally competent practice, while accreditation bodies mandate that each school of family counseling include content on diversity. Initially, racial diversity was the focus of courses in cultural sensitivity; these courses taught family counselors various approaches to and interpretations of typical family problems based on the family's race. Since then, cultural sensitivity has expanded to include many factors beyond race, including homosexuality, substance abuse, and severe mental illness. As the cultural backgrounds of American families continue to diversify, culturally competent practice remains a goal for family counselors, who are encouraged to assess their cultural competence frequently.

The profession of family counseling has also had to adapt its practices to meet the needs of diverse families who no longer fit the mold of the traditional nuclear family. Recognizing that each family is different, counselors continue to emphasize the entire family system rather than its individual parts. One example has been modifying practices to work with divorcing families. This approach considers not only the many stages of divorce but also the role of each individual in the family, respecting each member's distinctive values and ideals. With divorcing partners, therapists address conflict and help facilitate the creation of a coparenting plan that speaks to the needs of each partner. Family counselors also help divorcing partners manage feelings such as disappointment and guilt that are often associated with divorce, and they connect with and address the unique emotional needs of any children involved.

Family counseling professionals also respond to a multitude of requests from individuals within blended families.
Blended families consist of a couple and their children from the current as well as previous relationships. While the blended family has become one of the most prominent family structures in the United States, the process of creating these families is rarely smooth. A majority of children resist these changes, and parents experience a great deal of frustration when the new family’s functioning does not resemble that of the prior family. Family counselors recognize that the changes required in the formation of a successful
blended family require time. With this recognition, they help family members to adjust their expectations and to engage in open communication about the needs of all members as they transform into a new family unit.

Since the mid-20th century, family counseling has provided a framework for improving relational dynamics between family members, both those with and those without mental health issues. Over the years, the changing dynamics of the American family have provided an opportunity for the field of family counseling to restructure its practices in an attempt to better address the needs of these evolving families. While family counseling practices are not one-size-fits-all, their basic principle has endured: the functioning of the whole family needs to be considered in order for individuals to change.

Winetta Oloo
Golnoush Yektafar
Loma Linda University

See Also: American Family Therapy Academy; Family Life Education; Family Stress Theories; Family Therapy; Fragile Families; Parenting.

Further Readings
Gladding, Samuel T. Family Therapy: History, Theory, and Practice. Upper Saddle River, NJ: Pearson Education, 2011.
Kemp, Gina, Jeanne Segal, and Lawrence Robinson. "Guide to Stepparenting and Blended Families: How to Bond With Stepchildren and Deal With Stepfamily Issues." http://www.helpguide.org/mental/blended_families_stepfamilies.htm (Accessed July 2013).
McGoldrick, Monica, et al., eds. Ethnicity and Family Therapy. 3rd ed. New York: Guilford Press, 2005.

Family Development Theory

Family developmental theory emerged in the 1940s as a way to understand the experiences of children and their families in the United States. According to family developmental theory, children and their families progress through a series of developmental stages,

Table 1 Developmental stages and tasks

Stage: Independent adults
• Separating emotionally from one's family of origin.
• Developing individual identity and the ability to meet one's own emotional needs.
• Supporting oneself financially and physically.
• Developing intimacy and relationship skills with peers and significant others.
• Exploring interests and career goals.

Stage: Coupling/marriage
• Transitioning from individual to "couple."
• Developing intimacy and communication with a partner.
• Adjusting to living together.
• Adopting and adapting to new roles.
• Putting another's needs ahead of one's own.
• Transitioning from single friends to married friends.
• Combining two family systems.

Stage: Childbearing families
• Transitioning from couple to parents.
• Negotiating parenting roles.
• Meeting baby's needs (24 hours a day).
• Redefining marital roles.
• Adjusting to changing roles of extended family members (e.g., couple's parents become grandparents, couple's siblings become aunts/uncles).

Stage: Families with young child(ren)
• Balancing multiple roles.
• Adjusting to school commitments.
• Supporting children's development of friend relationships.
• Developing good character in children.

Stage: Families with adolescent(s)
• Adjusting to adolescents' increasing need for independence.
• Maintaining a strong bond with adolescents to help them resist pressures of the world.
• Recognizing and supporting adolescents' lives outside of the family.
• Balancing support and protection of adolescents, yet allowing them opportunities to try new behaviors.
• Developing communication patterns with emerging adults.

Stage: Launching child(ren) and empty nest
• Adjusting to absence of children in home.
• Redefining/rekindling relationship with partner/spouse.
• Developing adult relationships with children.
• Welcoming new members to the family (e.g., in-laws, grandchildren).
• Beginning to shift concern for older generations in extended family.

Stage: Later life families/retirement
• Exploring new roles (e.g., family, peers, social).
• Providing emotional support for extended family.
• Taking on caretaking roles for older generation.
• Dealing with losses (e.g., spouse, siblings, peers).
• Reflecting on one's life and preparing for death.


during which they face transition points, or developmental tasks. The success or difficulty of achieving each task influences later stages. Once individuals meet the requirements for a particular stage, they generally do not revert to a previous stage. However, individuals and families can stagnate if they do not complete the requirements for the next stage.

Families are unique in how and when they progress through the stages, and numerous societal trends can impact the developmental stages and corresponding developmental tasks. For example, the United States has seen an increase in single-parent families in which the parent has never been married. Because these parents have never been married, it is possible that some of their relationship skills may not be fully developed and modeled to children without the skills developed in this stage (e.g., joining two family systems, and developing intimacy and communication with a partner).

Figure 1 Single-parent and never married families. (Diagram of the family life cycle stages, annotated: "Some relationship skills may not be developed and then modeled to children without the skills developed in this stage.")

Another trend in the United States is the increase in immigrants, which also results in more transnational families (i.e., families whose members may live in two or more countries). Similar to single-parent families, parents in transnational families may have more difficulty modeling relationship skills if one parent lives far away.

A third trend is the postponing of marriage and/or parenthood. Because of decreased fertility in older adults, this may result in pregnancy and childbirth experiences that may entail significant expenses (such as fertility treatments) and be more emotionally and physically stressful. By the time the children of these couples become teenagers, the parents may be ready for retirement or approaching the end of their lives. The parents may also have to delay retirement to meet the financial needs of their teenage children.

Figure 2 Postponing marriage and parenthood. (Diagram of the family life cycle stages, annotated: "Decreased fertility may result, which may delay childbirth even more and result in financial and emotional stress"; "Teenagers may have elderly/retired parents. They may be dealing with deaths of parents at a young age. Parents may work beyond retirement to support children.")

Some scholars have suggested that divorce and remarriage should be added as additional normative stages in the family lifecycle because of the frequency with which they occur in society. Remarried families may be experiencing multiple stages of the family lifecycle concurrently. For example, parents in stepfamilies may have children from a previous marriage whom they are preparing to launch, yet they may also be giving birth to another child in the new marriage. Hence, the parents are going through multiple developmental stages at the same time.

Figure 3 Divorce and remarriage rates. (Diagram of a life cycle moving through marriage, birth of first child, parenting young children, divorce, remarriage, and parenting adolescents, annotated: "Divorce and remarriage may be additional family life stages"; "Families are progressing through very different stages simultaneously.")

In families with teenage parents, the parents are not only starting their family, but are also progressing through different stages in their families of origin. For example, they may be dealing with developmental tasks of the "families with teenagers" stage (e.g., trying to achieve independence from their families and learning how to develop adult relationships) while simultaneously trying to learn the developmental tasks associated with the childbearing family stage (e.g., negotiating parenting roles, and meeting the needs of their baby). This can be quite challenging.

Figure 4 Adolescent parents. (Diagram showing the teenage parents' new family life cycle alongside their family of origin's stages, annotated: "Some life skills and independent identity may not be developed"; "Teenagers are starting their own family life cycle while simultaneously progressing through different stages in their families of origin.")

Examining how societal trends or issues may impact the family lifecycle allows helping professionals (e.g., therapists, social workers, and educators) to identify potential areas where children and their families may need support. For example, if homeless or trafficked children leave their homes, either voluntarily or involuntarily, helping professionals should recognize that they may be learning unhealthy ways of interacting and communicating, which may include lack of boundaries, exploitative sexual relationships, poor conflict resolution, and exploitative communication. Hence, helping professionals could target these youth with family life education to enhance their understanding of developmental tasks that they may have missed.

Using the family developmental theory perspective to examine these trends and issues also allows professionals to see how families experiencing these issues adapt to and overcome challenges. As an illustration, many children raised by a never-married single parent develop healthy relationship skills. Theorists, scholars, and practitioners can examine how single parents teach and model relationship skills, and how children learn them.

It is important to note that family developmental theory has received extensive criticism due to its
culturally specific assumption of what constitutes the "normative family" (i.e., heterosexual, with two biological parents, and an intact family structure). The stages and developmental tasks may not accurately reflect the diversity in family structure resulting from cultural variations and population trends. However, scholars have tried to extend the idea of normative stages to other family types (e.g., divorced families, stepfamilies, blended families, same-sex families, and single-mother families). Another major critique is that there is little empirical evidence to support the family stages and corresponding developmental tasks beyond the initial data gleaned from the U.S. census by Evelyn Duvall in the first half of the 1900s. In addition, the key constructs are very difficult to operationally define and measure. Thus, it can be quite challenging for a researcher to use this theory when studying individual and family development.

In conclusion, family developmental theory appears to be a good tool to assist practitioners in understanding (1) how children and their families progress through a series of similar stages, (2) how children and families develop the skills associated with each stage, and (3) how families who do not progress through the stereotypical stages develop. However, the theory should be developed further to move beyond the widely used static and deterministic approach to understanding families, toward a theory that can accommodate different family structures.

Scott W. Plunkett
California State University, Northridge

See Also: Emerging Adulthood; Later-Life Families; Life Course Perspective; Remarriage; Single-Parent Families; Stepfamilies.

Further Readings
Carter, Elizabeth A. and Monica McGoldrick, eds. The Changing Family Life Cycle: A Framework for Family Therapy, 2nd ed. New York: Gardner Press, 1988.
Duvall, Evelyn M. Family Development. Philadelphia: Lippincott, 1957.
Rodgers, Roy H. Family Interaction and Transaction: The Developmental Approach. Englewood Cliffs, NJ: Prentice Hall, 1973.
Rodgers, Roy H. and James M. White. "Family Developmental Theory." In Sourcebook of Family Theories and Methods: A Contextual Approach, Pauline G. Boss, William J. Doherty, Ralph LaRossa, Walter R. Schumm, and Suzanne K. Steinmetz, eds. New York: Plenum, 1993.
White, James M. Dynamics of Family Development: A Theoretical Perspective. New York: Guilford, 1991.

Family Farms

When one thinks of family farms and rural farm life, it is often with a sense of nostalgia and sentimentality. In the past century, a number of economic and technological changes combined to bring about great changes in farming. Some look at these changes as offering new opportunities, but others focus on decline and lament the death of family farms.

Farm Size and Variety
Farms across the United States today are diverse, ranging from small farms to large enterprises owned by nonfamily corporations. The United States Department of Agriculture (USDA) classifies family farms as "any farm organized as a sole proprietorship, partnership, or family corporation." Family farms exclude farms organized as nonfamily corporations or cooperatives, as well as farms with hired managers. Family farms may be run by an individual, multiple generations, or multiple families. They may be large farms that provide a family's livelihood, or smaller hobby farms. Some farms specialize in one or only a few products, whereas others have a variety of livestock and produce a number of different products. Despite this variation, the vast majority of farms are family owned and operated. Today, family farms account for about 98 percent of the 2.2 million farms across the country.

Farmland varies in quality, which affects the type of produce and pricing. Because of this, farm size is measured by sales totals instead of land area. Farms are categorized as small family farms (sales under $250,000), large family farms (sales between $250,000 and $500,000), very large family farms (sales over $500,000), and nonfamily farms (corporations or cooperatives, or farms operated by hired managers). Large and very large family farms make up less than 9 percent of all family farms, yet these farms produce 63 percent of the value of all


U.S. domestic food and fiber products. Small family farms and ranches produce about 15 percent of products, but account for just over 50 percent of total U.S. agricultural land.

Mechanized farming and other technological advances helped increase farm profitability, but not without some costs. Concerns about groundwater pollution, soil erosion, and other factors helped spur interest in sustainable agricultural practices. Sustainable agriculture focuses on meeting goals of profitability, stewardship, and quality of life without depleting Earth's resources or polluting the environment.

Boom and Bust Cycle of Family Farms
The history of American farms is characterized by both boom and bust—periods of prosperity, followed by stagnation and hardship. The 19th century witnessed a rapid increase in the number of farms, but these trends reversed during the 20th century. Fertilizers, pesticides, improved irrigation, and mechanized tools introduced during the 20th century helped farms become much more productive. Tractors and other farm machinery fundamentally changed farm work and eliminated the need for many farm laborers. Larger farms became more numerous, while small and midsized farms diminished.

During the golden age of agriculture (1910 to 1914), many believed that hard work, thrift, and optimism were the keys to the good life. However, at the end of World War I, the agricultural economy fell into a recession that continued through the Great Depression. Many farm policies trace their roots to programs instituted during the 1930s. The first Farm Bill, the Agricultural Adjustment Act (AAA) of 1933, was enacted as part of President Franklin D. Roosevelt's New Deal. The AAA was intended to help balance supply and demand so that prices would support farmers' purchasing power, and it remains one of the largest sources of support for U.S. farmers today.
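The sales-based size classes described earlier amount to a simple threshold rule. The sketch below is illustrative only: the function name and the `family_owned` flag are assumptions, with the dollar cutoffs taken from the categories in the text.

```python
def classify_farm(annual_sales: float, family_owned: bool) -> str:
    """Return a USDA-style size class for a farm, using the sales
    cutoffs described in the text (illustrative sketch only)."""
    if not family_owned:
        # Nonfamily farms: corporations, cooperatives, or hired managers.
        return "nonfamily farm"
    if annual_sales < 250_000:
        return "small family farm"
    if annual_sales < 500_000:
        return "large family farm"
    return "very large family farm"


# Hypothetical examples (the sales figures are invented):
print(classify_farm(120_000, family_owned=True))   # small family farm
print(classify_farm(600_000, family_owned=True))   # very large family farm
print(classify_farm(600_000, family_owned=False))  # nonfamily farm
```

Note that the classification keys on gross sales rather than acreage, reflecting the point above that farmland quality varies too much for land area to be a reliable size measure.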
Every few years, Congress passes a revised farm bill, a complex piece of legislation that now authorizes billions of dollars in taxpayer spending and funds services and programs that affect every American, as well as millions of people around the world.

A similar boom-and-bust pattern followed half a century later with the farm crisis of the 1980s, which primarily impacted farmers in the Great

Plains. After several decades of uncertainty, the 1970s were an anomaly for farming, as agriculture became a growth industry. Congress enacted farm legislation that expanded credit and helped promote commercialization, and tax policies encouraged investment. Land prices soared, and growing numbers of farmers sought loans to buy more land so that they could expand their operations and meet growing global demand. World food exports increased, and the energy crisis raised demand for alternative energy sources. As American farmers were integrated into the greater world economy, they faced more risk and uncertainty in global markets. Renewed optimism about rural America and great prosperity led some to buy more land, new machinery, and houses; manufacturing and businesses grew to meet these needs. Young farmers in particular tended to be more likely to take on risks, compared to older farmers, who tended to be more cautious. These managerial styles affected the likelihood of financial problems, and the focus on capital gains over cash flow ultimately spelled disaster for many farm families in the 1980s.

Farm assets and land values fell, liabilities increased, and global exports declined. Those who had moved toward commercialization experienced great financial strain. Tenant farming and sharecropping became more common as land ownership became less feasible for many families. The farm crisis forced many young farm families out of farming and discouraged others from entering the industry. Many farm families endured bankruptcy and foreclosure as they dealt with the farm crisis. Families who lost their farms during the 1980s farm crisis identified it as one of the most traumatic experiences of their lives. Some lost farms they had purchased, but others lost land farmed by their family for generations.
The financial stress of farm families also affected surrounding communities, as farm families tightened their budgets, reduced spending, and repaired rather than replaced goods and machinery. In many ways, the farm crisis originated from farmers adjusting to circumstances that were out of their control, changing rural realities, and a changing economy. Compared to families in the 1920s and 1930s, families affected by the 1980s farm crisis had a greater number of support systems in place. One prime example is the series of Farm Aid concerts, organized



by Willie Nelson, Neil Young, and John Mellencamp. The first Farm Aid concert was held in 1985 to raise awareness about the loss of family farms and to raise funds to help farm families keep their land. The Cooperative Extension Service also played a significant role in responding to the problems brought about by the farm crisis. Extension is a partnership between the U.S. Department of Agriculture and land-grant universities, established by the Smith-Lever Act of 1914 to "take the university to the people." Extension provides a wide array of research-based education using a variety of means, including classes, fact sheets, Web sites, and radio and television programs. Programs of interest to farm families range from farm safety and coping with farm stress, to dealing with natural disasters, to developing plans to help diversify farm incomes and support first-time farmers.

Work and Family
Balancing work and family is a unique challenge for farm families. Given that the two spheres are intertwined, spillover is inevitable. Family members work side by side as both family and coworkers, which can cause strain in relationships. The majority of farmers and spouses today also hold off-farm employment, which can add stress and interfere with completing necessary farming tasks.

Until the mid-20th century, farm families worked and lived as they had for decades—everybody pitched in. Children as young as 5 years old were assigned chores such as helping around the house, milking and herding cattle, tending gardens, and helping with chickens. As children aged, they gradually took on more gender-specific tasks, such as field work for boys and housework and child care for girls. Gradually, teens came to be expected to seek off-farm work, both to make spending money and to help contribute to the family farm enterprise. Farm wives' work in the past was largely dictated by the number and ages of children, whereas men's work depended on what a farm produced.
Historically, women contributed by producing goods for use in the home or to sell or barter. Many joined farm women’s clubs for education, community service, fundraising, and activities revolving around women’s work. These clubs, including Extension Homemakers’ Clubs, helped improve women’s contribution to farm work by training women in the “proper” methods for completing household tasks.


Over time, however, technology and the invention of labor-saving devices meant that families became more likely to purchase durable goods. Furthermore, with farm mechanization and fewer free farm laborers, outside income became a necessity, and women had less time to engage in these practices. Farm wives remain an important source of labor on farms: they are often responsible for keeping the farm's books and paying bills, maintaining the home, and helping with various farm tasks such as gardening and milking. An ever-increasing number also hold off-farm work.

Farm families have historically depended on off-farm employment to make ends meet during tough times. This trend dates back to the 1930s, when nonfarm employment became common practice during the Great Depression. Half a century later, during the 1980s, growing numbers of farm operators sought off-farm employment to cope with financial losses. Many work off-farm to obtain insurance and benefits, but also to help supplement their families' incomes. Today, most farm households (91 percent) have at least one family member working at an off-farm job.

Farming is among the most stressful occupations. Farmers and farm workers have the highest rates of death from stress-related health problems, such as hypertension, heart disease, and ulcers. Farm families experience the same types of stress as nonfarm families, but with the added stressor of weather conditions that have a direct and sometimes dire impact on their livelihood. Price fluctuations and the high costs of purchasing farm machinery also add stress for these families. Families who experience the greatest strains are at highest risk for emotional and relationship problems. For many, farming remains more than just an occupation; it is a way of life. Losing one's farm means losing not only one's job, but also one's home and land.
Recent Trends in Family Farms
The 2007 Agricultural Census showed a 4 percent increase in the number of farms, which was the first increase since 1920. Although very small and very large farms are becoming more common, trends suggest that midlevel family farms will continue to dwindle. One cause is the aging of America's farmers and a lack of younger farmers to take their place. More than a quarter of farmers today are age 65 and older; only 5 percent are 35 and younger.


In the past, farmers often aspired to pass on their farming business to the next generation. Farms that are continuously operated by a single family for at least 100 years may even receive special recognition as a Century Farm. Today, however, multigenerational farms are decreasing as younger generations become less interested in taking over the family farm. In addition, retiring farmers tend to be older than their cohorts in other occupations, so their children are older and likely to have already sought other employment by the time their parents are ready to pass down the family farm. Many farms go out of business, are absorbed into nearby farmland, or are converted for nonfarm use.

Of the 2.2 million farms in the United States, 1.8 million are headed by white men with an average age of about 57. However, recent trends show greater diversity among newer farm operators in terms of age, race, and ethnicity, and a growing number of women are becoming farm operators. In 2007, women operated 14 percent of farms, up from 5 percent in the late 1970s. Interest in local produce, farmers' markets, and niche markets has provided new opportunities for many beginning farmers. Today, approximately 20 percent of U.S. farms and ranches have operators with 10 or fewer years of experience. Beginning farmers who do not inherit a family farm and must start from scratch are often challenged to acquire enough land for their operation, and banks have tightened their lending practices.

Most farm households continue to rely on off-farm employment for a significant portion of their income. Only 1 million of the nation's 2.2 million farms have a positive net cash income; the other 1.2 million depend on off-farm income to get by. For decades, farm operators had lower high school completion rates, but this trend ended in the late 1980s.
Operators of very large-scale farms are the most likely to hold college degrees, but the gap between very large and very small-scale producers is diminishing, as a college education is often required for off-farm work. Even many large farm operators need an additional income to meet their families' financial needs. Newer farmers are more likely to be college educated, and frequently do not identify farming as their primary occupation.

Family farms have undergone many changes during the past century. Farms have become much

[Photo: Michael Popp and his son, Hayden, practice backing up to farm equipment at the five-generation Popp family farm based in El Campo, Texas. They currently grow cotton and grain sorghum and are part of the United Agricultural Cooperative.]

larger and more highly specialized, and they require far fewer workers than in the past. The 21st century promises more change, as technological development and market changes will continue to impact family farms. Rising costs, food safety, an emphasis on sustainable agriculture, climate change, and the global economy will continue to shape U.S. family farms in the future.

Kelly A. Warzinik
University of Missouri

See Also: Cooperative Extension System; Family Businesses; Family Stress Theories; Rural Families.

Further Readings
Brown, J. P. and J. G. Weber. The Off-Farm Occupations of U.S. Farm Operators and Their Spouses. Washington, DC: U.S. Department of Agriculture, Economic Research Service, 2013.
Hoppe, R., D. E. Banker, and J. MacDonald. America's Diverse Family Farms, 2010 ed. Washington, DC:

U.S. Department of Agriculture, Economic Research Service, 2010.
Hoppe, R. and P. Korb. Characteristics of Women Farm Operators and Their Farms. Washington, DC: U.S. Department of Agriculture, Economic Research Service, 2013.
Hoppe, R., P. Korb, E. O'Donoghue, and D. E. Banker. Structure and Finances of U.S. Farms: Family Farm Report, 2007 ed. Washington, DC: U.S. Department of Agriculture, Economic Research Service, 2007.
Neth, M. Preserving the Family Farm: Women, Community, and the Foundations of Agribusiness in the Midwest, 1900–1940. Baltimore, MD: Johns Hopkins University Press, 1995.
U.S. Department of Agriculture. 2007 Census of Agriculture. Washington, DC: U.S. Department of Agriculture, 2007.

Family Housing

The structure of American homes has been inextricably tied to the characteristics of, and ideals about, families throughout the history of the nation. Prevailing ideas about family privacy, particularly the boundaries between family and community, have influenced the structure of houses, as has the dominant type of economy. The physical environment, including climate and available natural resources, also influenced the materials used for building homes. Thus, regional variation in climate and resource base has long been reflected in the design and construction of American homes.

Technological advances in construction techniques in the early decades of the 1800s allowed for more standardization and quicker production of housing, and advances in public utilities and transportation, including expansion of rail lines and highways, allowed neighborhoods to be developed at greater distances from central urban areas. Urbanization, and then suburbanization, have had different effects on housing design, size, and amenities. The generally rising standard of living has resulted in an increase in the size of houses, the specialization of interior spaces, and the introduction of technological devices within homes, even as the average household size has declined. Home ownership has


become a major part of the American dream, yet economic disparity in recent decades has deprived many families of home ownership. The federal government has implemented a number of policies that have affected home ownership and provided assistance both to those who are purchasing homes and to those who are homeless.

Family Ideals and Housing Design
The ideal middle-class American home has long been a detached single-family dwelling in a rural or suburban area. There have always been alternate house designs, however, particularly for those who fall outside of the middle class. In urban areas, the poor and the wealthy have lived in homes that differ from the ideal of the detached home. An architectural tradition of plain, uniform row houses was established in the early days of city growth. Later, as urbanization increased with the large waves of immigrants settling in American cities, more affluent owners of townhouses fled the city for the developing suburbs. The row houses were then typically subdivided so that multiple families could rent space in them. Conditions in many of these buildings were highly undesirable: dark, damp, unventilated, overcrowded, and susceptible to fires and epidemics.

People whose lives are governed by others, such as workers in factory towns and slaves on southern plantations, have also had different types of housing. In the early days of U.S. industrialization, factory owners built their factories in rural areas close to streams that could provide water power. Because these areas were typically unsettled when the factories were built, there was no housing for workers. Factory owners addressed this by building housing for workers and their families, typically small cottages or boarding houses owned by the factory owners. Southern agriculturalists, who depended on the labor of slaves, had to provide housing for these workers.
They designed quarters that would promote the type of family and community life they thought was best for those they enslaved. As architectural historian Gwendolyn Wright argues in her book Building the Dream, Americans have seen domestic architecture as a way to encourage certain kinds of families and social life. Domestic architecture has thus both reflected and reinforced inequitable social patterns such as racism, slavery, industrial exploitation, class segregation, and restrictions on women's behavior.


Early Colonial Housing
The Puritans came to America in the late 1620s, settling in the Massachusetts Bay Colony. As Wright has written, they created a physical environment that reflected their religious belief in a divinely ordained structure for family relations and social life. Building their communities was arduous work. Early dwellings were similar to those of the indigenous people, such as dugouts, tent-like structures, and huts made of small trees covered with grass or thatch, because these materials were readily available. These early homes also resembled those of the poor in England: one-room, windowless huts made of interwoven twigs and mud or clay.

As the migration continued and the population increased, new towns, with more permanent wooden homes and public buildings (such as meeting houses), were constructed by the government and business owners. Building lots were allocated to residents based on their professions and their resulting social standing. Houses varied considerably, reflecting differences in social status. The most common type of wood-framed house in New England colonial towns was a two-room dwelling, often referred to as a hall and parlor. The front doors of the houses opened to a small entryway that typically included a set of stairs leading to a bedroom loft. There was a room on each side of the entryway, with both rooms roughly equal in size. A large fireplace, which served as the only source of heat and light and provided the only way to cook inside the home, dominated the main floor.

One of the rooms, referred to as the hall or keeping room, was the center of a family's life. It was the space where all of the activities of daily life occurred: cooking, eating, reading the Bible, making soap and candles, spinning, and weaving. Everyone who resided in the home, including adult men and women "householders," children, and servants, spent their days together working in the hall.
The residents kept watch on each other to ensure that there were no idle hands. The second room downstairs was the parlor, which was reserved for more formal events, such as entertaining guests and viewing the dead. In some of the earliest colonial homes, the parlor also included the parents' bed. This practice provided parental privacy only at night, and did not offer an escape from the crowded and busy hall during daylight hours. Personal privacy was not a central value of Puritan

colonies. Children and servants slept in the upstairs loft space. Eventually, additional rooms were added to these rudimentary structures, providing space for storage and specific household functions.

Southern Plantations and Slave Quarters
For much of the nation's history, the economy of the southern United States was based on agriculture. Until emancipation after the Civil War, slave labor was essential to the productivity of southern farms. Agricultural settlements, often referred to as plantations, were generally self-sufficient, and included numerous buildings that served different functions, from the main residence that housed the plantation owners to livestock pens. The popular image of plantations includes grand mansions with a characteristic style, but many began as simple farmhouses that were enlarged and improved over time. In most areas of the South, the earliest settlers constructed houses to provide basic shelter suited to their local climate, not to establish permanence or demonstrate wealth or power.

Slave houses were meant only for sleeping, and were usually roughly built one-room frame cabins. Not many survived over the years because of the materials from which they were constructed. The placement of slave housing relative to the main house and the fields varied. On the largest farms, or plantations, the housing could be arranged as a separate village or quarter away from the main house. An alternative arrangement placed slave quarters on the edges of the fields where the slaves worked.

Industrialization, Urbanization, and Immigration
Over the course of the 19th century, the United States became an industrialized and increasingly urbanized nation. As the need for labor grew, waves of immigrants from Europe began to come to the United States. Cities began to grow rapidly, and the demand for housing grew. In New York City, for instance, the population doubled every decade from 1800 to 1880.
Many of the more affluent residents of New York’s Lower East Side neighborhood began to move further north, leaving their low-rise row houses behind. These homes were increasingly divided into multiple living spaces to accommodate the huge numbers of immigrants, many of whom were fleeing poor conditions in Ireland and Germany. These buildings, which were known as



tenements, were crowded and poorly lit and ventilated, lacked indoor plumbing, and had poor sanitation: no garbage collection, open sewers, and toilets shared by many residents. Additional buildings with similar characteristics were cheaply built to accommodate the growing immigrant population. By 1900, some 2.3 million people (a full two-thirds of New York City's population) were living in tenement housing. Tenement housing was found in nearly all of America's cities.

A typical tenement building had five to seven stories and occupied nearly all of the lot upon which it was built (usually 25 feet wide and 100 feet long, according to existing city regulations). With less than a foot of space between buildings, little air and light could get in. In many tenements, only the rooms facing the street got any light, and the interior rooms had no ventilation (unless air shafts were built directly into the room). Because of the high density of these living quarters, epidemics developed and spread rapidly. A cholera epidemic in New York in 1849 took some 5,000 lives, many of them poor people living in overcrowded housing. During the infamous "draft riots" that tore apart the city in 1863, rioters were not only protesting against the new military conscription policy; they were also reacting to the intolerable conditions in which many of them lived.

The Tenement House Act of 1867 legally defined a tenement for the first time and set construction regulations, among them a requirement of one toilet (or privy) per 20 people. Tenements were also especially vulnerable to fires. The Great Chicago Fire of 1871, for instance, led to restrictions on building wood-frame structures in the center of the city, and encouraged the construction of lower-income dwellings on the city's outskirts.
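The lot dimensions and the 1867 privy requirement cited above imply some simple arithmetic about tenement density. The sketch below is illustrative only; the lot size and the one-privy-per-20-people rule come from the text, while the occupancy figure is invented for the example.

```python
import math

# Typical tenement lot cited in the text: 25 ft wide by 100 ft long.
lot_area_sqft = 25 * 100

# Hypothetical building occupancy (not a figure from the text).
residents = 110

# Tenement House Act of 1867: at least one toilet (privy) per 20 people.
privies_required = math.ceil(residents / 20)

# Ground area available per resident if the building covers the whole lot.
sqft_per_resident = lot_area_sqft / residents

print(lot_area_sqft)                 # 2500
print(privies_required)              # 6
print(round(sqft_per_resident, 1))   # 22.7
```

Even this rough calculation makes the density concrete: a fully built-out lot of 2,500 square feet shared by a hundred or more residents leaves each person only a few dozen square feet of ground area.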
Unlike in New York, where tenements were highly concentrated in the poorest neighborhoods of the city, in Chicago they tended to cluster around centers of employment, such as stockyards and slaughterhouses. In spite of tenement legislation, living conditions had not significantly changed. Jacob Riis, a Danish journalist and photographer, had experienced firsthand the hardship of immigrant life in New York City. Riis wanted to make more affluent Americans aware of the deplorable conditions in which many urban Americans lived. He photographed what he saw in the tenements, and used these vivid photos


to accompany the text of his book How the Other Half Lives, published in 1890. The text included statistics about life in the tenements, with shocking facts such as the large number of adults who slept in one small room and the high infant death rate in the tenements. Two major studies of tenements were completed in the 1890s, and in 1901 city officials passed the Tenement House Law, which outlawed the construction of new tenements on 25-foot lots and mandated improved sanitary conditions, fire escapes, and access to light. Under the new law, which in contrast to past legislation actually was enforced, preexisting tenement structures were updated, and more than 200,000 new apartments were built over the next 15 years, supervised by city authorities.

By the late 1920s, many tenements in Chicago had been demolished and replaced with large, privately subsidized apartment projects. The next decade saw the implementation of President Franklin D. Roosevelt's New Deal, which transformed low-income housing in many American cities through programs including slum clearance and the building of public housing. The first fully government-built public housing project in New York City opened in 1936.

Development of Suburbs
The first suburbs emerged in the 1880s, after streetcars were electrified. Homes were built along streetcar lines, allowing residents easy access to transportation that would take them into the cities. These streetcar suburbs continued to develop until 1918, when automobile ownership began to increase among Americans. Cars facilitated the development of a new type of suburb in different locations. When freeways were developed after World War II, suburbs proliferated at even greater rates than they had in the late 1880s. Suburbs represented the fulfillment of the American Dream of material well-being and home ownership.
Suburbanization was made possible by the availability of undeveloped land close to cities; cheaper, more efficient methods of building homes that emerged in the 1830s; changes in the ways in which homes were financed, including the creation of long-term, fixed-rate mortgages in the 1930s; and a large and growing demand for houses. The housing industry in the first half of the 20th century slowed as a result of the Great Depression and World Wars I and II. A housing bill enacted in 1948 liberalized
lending by requiring only a 5 percent down payment on a 30-year fixed mortgage. One of the most prominent of the new suburban housing developments that appeared after World War II was Levittown, New York, located on Long Island. By the time the development was complete, 17,447 houses had been built according to a few standard plans, which produced a community in which houses were nearly identical, except for their color or the placement of windows. The cost of the new homes at the time was approximately $7,900. Levittown, however, became a symbol of racial segregation. The discriminatory housing standards of Levittown were consistent with government policies of the time. The Federal Housing Administration (FHA) allowed developers to justify segregation within public housing. The FHA only offered mortgages to nonracially mixed developments, which discouraged developers from creating racially integrated housing. In accordance with this policy, the purchase agreement signed by all those who bought homes in Levittown stated that the property could not be used or rented by any individuals other than those of the Caucasian race. Even though the GI Bill for returning veterans of World War II fueled the increased demand for affordable housing, black veterans were unable to buy or rent homes in Levittown. A group opposed to the racial covenants in Levittown pushed for an integrated community. In 1948, the U.S. Supreme Court declared that property deeds stipulating racial segregation were unenforceable by law. Levitt and Sons, the developers of Levittown, did nothing to counteract the racial homogeneity of the suburb, and thus the racial composition of the community did not change. By 1960, Levittown was still a completely white suburb. Even as late as the 1990 census, only a small proportion of the community was nonwhite. The American Dream and Homelessness For many Americans, owning their own home has long been their American dream. 
An opinion poll conducted by Gallup in 2013 revealed that 62 percent of Americans aged 18 and older owned their place of residence, and 25 percent who did not currently own their place of residence planned to do so within the next 10 years. Home ownership varies by age and income. About 71 percent of those between the ages of 59 and 64 own their primary place of residence; similarly, 69 percent of people 65 or older are
home owners. In addition, the majority of those aged 30 to 49 (58 percent) also own their homes. In terms of income, three-quarters of those making at least $75,000 a year own their homes. In spite of the high value placed on home ownership, a substantial number of Americans are not only unable to buy a home, but are also unable to find stable housing of any kind. The U.S. Department of Housing and Urban Development has identified four situations under which an individual or family may qualify as homeless. The first category includes an individual or family who lacks a fixed, regular, and adequate nighttime residence, meaning the individual or family has a primary nighttime residence that is a public or private place not meant for human habitation, or is living in a publicly or privately operated shelter designed to provide temporary living arrangements. This category also includes individuals who are exiting an institution in which they resided for 90 days or less, and who resided in an emergency shelter or place not meant for human habitation immediately prior to entry into the institution. The second category includes an individual or family who will imminently (within 14 days) lose their primary nighttime residence, provided that no subsequent residence has been identified, and the individual or family lacks the resources or support networks needed to obtain other permanent housing. Third are unaccompanied youth (under 25) or families with children and youth who do not otherwise qualify as homeless under this definition and are defined as homeless under another federal statute, have not had permanent housing during the past 60 days, have experienced persistent instability, and can be expected to continue in such status for an extended period of time. The final category includes any individual or family who is fleeing or attempting to flee domestic violence, dating violence, sexual assault, or stalking. 
It is difficult to accurately estimate the number of homeless Americans because it greatly fluctuates. There are approximately 600,000 to 1.1 million homeless people in the United States at any given time. The number has almost doubled since the 1980s due to economic downturns and crises. The most recent economic recession pushed more than 1.5 million families into homelessness between 2007 and 2012. For most people, homelessness is a temporary condition. Many are able to find shelters, adequate affordable housing, or some
type of permanent residence within three months of becoming homeless. It is estimated that 20 to 25 percent of homeless adults cannot find adequate housing for more than one year. The National Center on Family Homelessness reports that every year, one out of every 50 children in the United States is homeless. Roughly three-quarters of homeless people are located in urban areas, 20 percent are located in suburbs, and the remainder are in rural areas. Approximately 35 percent of homeless individuals are white, 45 percent are African American, 12 percent are Latino, 5 percent are Native American, and less than 3 percent are Asian American. Veterans also experience homelessness. The main causes of homelessness are lack of income, unemployment, poverty, and the inability to find affordable housing. However, other noneconomic factors play a role. Mental illness, substance abuse, drug addiction, alcohol addiction, parental abuse, disease, emotional distress, depression, and other health problems are major contributors to the growing problem of homelessness in the United States. American Homes Today American houses have become larger over the past 60 years. The National Association of Home Builders reports that the average square footage of American houses increased from 983 in 1950 to 2,679 in 2013. As houses have expanded, their contents have also increased, thus increasing the sheer amount of space and objects to be cleaned and maintained. Another significant trend that occurred alongside the increase in the size of homes was an upsurge in ownership of technological devices for communication, entertainment, and household labor. Individual privacy and family togetherness are central values in the design of contemporary homes. 
Large kitchens, seen as the center of the home, are sought after as a way to provide “family time.” At the same time, substantial private space for various family members is highly valued so that family members can retreat to the solitude of their rooms. As has been true throughout American history, the design of family housing reflects the dominant values of society. Constance L. Shehan University of Florida


See Also: Addams, Jane; Cult of Domesticity; Household Appliances; Housing Crisis; Housing Policy; Immigrant Families; National Affordable Housing Act; Slave Families; Trailer Parks. Further Readings Bryson, Bill. At Home: A Short History of Private Life. New York: Anchor Books, 2010. Gans, Herbert J. The Levittowners: Ways of Life and Politics in a New Suburban Community. New York: Pantheon Books, 1967. Riis, Jacob. How the Other Half Lives. New York: Macmillan, 2010. Rybczynski, Witold. Home: A Short History of an Idea. New York: Viking, 1986. Stevenson, Brenda. Life in Black and White: Family and Community in the Slave South. New York: Oxford University Press, 1997. Ulrich, Laurel Thatcher. A Midwife’s Tale: The Life of Martha Ballard Based on Her Diary, 1785–1812. New York: Vintage Books, 1991. Wright, Gwendolyn. Building the Dream: A Social History of Housing in America. Cambridge, MA: MIT Press, 1981.

Family Life Education Family life education (FLE) refers to educational activities, information, or resources aimed at improving family relationships and functioning. Margaret Arcus and colleagues outlined seven core principles that define FLE in their seminal work, Handbook of Family Life Education (1993): (1) it is relevant to individuals and families throughout the life span; (2) it is based on the needs of individuals and families; (3) it is a multidisciplinary area of study and practice; (4) it is offered in many different settings; (5) it is an educational, rather than a therapeutic, approach; (6) it presents and respects differing family values; and (7) it must be taught by qualified educators in order to be effective. Unlike family counseling, FLE takes a broad educational approach. It is taught through a variety of organizations, from schools to churches to community health agencies. While therapists may be FLE educators, not all FLE educators are therapists; they may come from diverse backgrounds
and fields. Although a particular program may target individuals and families at one point in the life span, in general, FLE is relevant at any point, from supporting new parents to providing sex education to helping families care for an elderly family member. Critical to being relevant is representing and being respectful of differing family values. FLE is based on the needs of individuals and their families, and qualified educators are crucial to the success of these programs. History Informal FLE can be traced back to the late 1700s and early 1800s, when mothers organized informal groups to discuss child rearing and family issues. Changes in society at the beginning of the 20th century resulting from increasing industrialization and urbanization created new challenges for families. As young people moved to the city, they started families without the support of their extended families. Parents, and often children, worked long hours in harsh conditions. These changes were blamed for increased rates of divorce and child behavior problems. In rural communities, infant mortality and basic living conditions remained challenging. FLE was created to address these problems, and arose from the home economics education movement around the turn of the 20th century. Home economics education was created to prepare young women for their roles in the home, but it was also tied to the political movement for women’s equality, both in the home and society. Home economics focused on nutrition, home management, and child rearing, and courses were developed in high schools, colleges, and through extension outreach programs throughout the country. Over time, the study of children and families matured into a separate professional and academic discipline. The National Council on Family Relations (NCFR) became the primary professional society for family life educators. 
Professionalization The NCFR supports and promotes FLE, and in 1985, it began establishing a formal credential for family life educators. To become a Certified Family Life Educator (CFLE), individuals must demonstrate proficiency through experience and/or coursework in 10 content areas: families and individuals
in societal contexts, internal dynamics of families, human growth and development across the life span, human sexuality, interpersonal relationships, family resource management, parenting education and guidance, family law and public policy, professional ethics and practice, and FLE methodology. Today, there are numerous CFLE programs across the country. Family Life Education Programs The content of FLE programs has changed over the years to adapt to new concerns, changing family demographics, and growing diversity in the United States. Many life experiences are shaped by gender, race, ethnicity, and culture, and family life is no exception. FLE has responded to these issues by creating resources and certifying professionals who are culturally sensitive and aware of how such differences impact individuals and families while providing timely and relevant information. The major topics of FLE programs are marriage and relationships, parenting, and sexuality. Helping couples prepare for marriage and enrich their relationships has been a longstanding topic for educators. Major FLE programs were based on the growing scientific understanding of how effective communication and conflict management are essential to long-lasting marriages. This work was refined as family scientists developed more sophisticated methods of studying couples, such as videotaping interactions between them in realistic settings. Several marriage programs, including PREPARE/ENRICH, have demonstrated success in improving marital relationships. Parent education focuses on providing timely information to parents on a range of issues including discipline, sleep, feeding, and other issues related to raising young children. Child-rearing practices have changed over the years, and educators have tackled some controversial topics, including corporal punishment and attachment parenting. Sexuality education has probably been the most controversial area of FLE. 
The Sexuality Information and Education Council of the United States asserts that young people need an improved understanding of sexual health and behavior, but efforts to include this education in school settings have been repeatedly challenged by parents who believe such information should only be taught by family members. Nevertheless, educators have developed evidence-based
sexuality programs that have demonstrated reduced sexual risk taking. Typically, these programs focus on a specific behavior, such as reducing teen pregnancy, and a specific population, such as middle-school girls in urban areas. Increasingly, educators have designed programs that are tailored to specific audiences or family circumstances. For example, the growing divorce rate has prompted educators to create divorce education programs that assist divorcing parents with their continued coparenting responsibilities. Likewise, the growth of stepfamilies has led to the creation of programs to assist stepparents in blending family members together and navigating complex relationships. There are also programs designed for specific transitions, such as becoming new parents, or for particular contexts, such as military families. Delivery Methods FLE programming is delivered in various ways, such as through face-to-face programs, home visits, small group work, printed materials, instructional videos, and online. FLE delivery utilizing information technology will likely continue to grow, and can take many forms, including Web sites, online modules or programs, forums, blogs, and social networking sites. Online programming can be convenient for both providers and recipients, can expand traditional programs and services, can offer richer or more detailed information, and can be cost-effective for providers. However, there are also negatives to this delivery method, including not reaching the intended audience, misunderstanding of information with little opportunity for clarification, and the challenge of maintaining a Web site or Web-based program as technology continues to evolve. Regardless of how the program is delivered or the specific content of the program, FLE will need to continue to adapt to meet the needs of changing American families. Elissa Thomann Mitchell Robert Hughes, Jr. Sarah L. 
Curtiss University of Illinois at Urbana-Champaign See Also: Child-Rearing Experts; Child-Rearing Manuals; Child-Rearing Practices; Cooperative Extension System; Discipline; Divorce and Separation; Family Values; National Council on Family Relations;
Parent Education; Parent Effectiveness Training; Parenting; Parenting Styles. Further Readings Arcus, Margaret E., Jay D. Schvaneveldt, and Joel J. Moss. Handbook of Family Life Education: Foundations of Family Life Education. Vol. 1. Newbury Park, CA: Sage, 1993. Bredehoft, David J. and Michael J. Walcheski, eds. Family Life Education: Integrating Theory and Practice. Minneapolis, MN: National Council on Family Relations, 2003. Duncan, Stephen F. and H. Wallace Goddard. Family Life Education: Principles and Practices for Effective Outreach. Thousand Oaks, CA: Sage, 2011. Hughes, Jr., Robert, Jill R. Bowers, Elissa T. Mitchell, Sarah Curtiss, and Aaron T. Ebata. “Developing Online Family Life Education and Prevention Programs.” Family Relations, v.61/5 (2012).

Family Mediation/Divorce Mediation As a way of resolving conflicts, mediation dates back to ancient Eastern cultures. Within U.S. history, Quakers were among the first to advocate for and practice mediation. Though there is an assortment of contexts for this longstanding approach to dispute resolution, such as international, community, organizational, workplace, schoolyard, victim-offender, and neighborhood, it is the interpersonal context of the family that is the focus of this article. Family members may find themselves in serious dispute with one another about numerous issues and areas of decision making (e.g., family business; elder care; medical; financial; property division; and child custody, visitation, and rearing practices). Divorce mediation is at the heart of this practice. During the 1960s, a number of probation officers and family division employees within court systems began using mediation protocols to help resolve divorce and custody conflicts. Divorce mediation steadily gained momentum throughout the 1970s as fault-based divorce laws were replaced by no-fault laws that valued equity, fairness, need, ability to pay,
and the best interests of children. During this decade, the American Arbitration Association devised rules for conducting family mediation, and was among the first groups to design and carry out training programs for family mediators. Scholars and practitioners began proposing theories and models, refining techniques and strategies, and authoring books. In the 1980s, one state after another began passing legislation mandating divorce mediation, typically for high-conflict disputes between parents, which in turn prompted calls for accountability along with research-based practices and outcomes. At the turn of the century, the efforts of a work group composed of representatives of multiple professional associations resulted in the establishment of model standards of practice and codes of conduct for family and divorce mediators. Definition and Goals According to the model standards of practice for family and divorce mediation, mediation is a process in which an impartial third party (a mediator or perhaps a pair of mediators) facilitates the resolution of family conflicts by promoting voluntary agreement among the disputants on a variety of issues. This process is goal oriented and typically time limited. Family mediation, particularly divorce mediation, can be court mandated, yet in other circumstances, conflicted family members may voluntarily seek out mediation. The mediator usually meets jointly with all of the disputants, but on occasion may meet separately with each one. The mediator promotes effective communication between the disputants, and encourages their mutual understanding of each other’s perspective. In the context of separating or divorcing parents, the mediator helps disputants to clarify their individual and common interests, and more importantly, the best interests of their child or children. The mediator empowers them to identify and analyze various options, make informed decisions, and formulate agreements. 
While family and divorce mediators may be attorneys or professionals with mental health, family science, or social work backgrounds, and as a result utilize techniques inherent to their professions, engaging in family or divorce mediation does not negate the need for conflicted family members to obtain independent legal advice, and if the situation calls for it, counseling or therapy services.

Family or divorce mediation should take place in an environment that is conducive to the development of trust and honest communication. According to the model standards of practice, mediators in this context are to maintain confidentiality with regard to information shared during the mediation process, unless they are required by law or have been given permission by the disputing family member(s) to reveal certain information to others. In the same spirit of confidentiality, mediators are rarely subpoenaed or ordered by the court to give testimony in legal proceedings related to their family or divorce mediation cases. Benefits and Cautions Family or divorce mediation is not appropriate or useful for all family disputes, particularly high-conflict, volatile situations involving one or more disputants who are significantly entrenched in their positions; situations deeply impacted by emotional, psychological, or cognitive competence issues; or when disputants are unable or unwilling to share responsibility for and control over negotiations in fair and safe ways. According to the model standards of practice, a family or divorce mediator shall not undertake a mediation in which the family situation has been assessed to involve child abuse or neglect without appropriate and adequate training. The same expectation exists when adult couples have been involved in domestic abuse and/or substance abuse. Abusers are referred to appropriate intervention providers and only return to mediation when abuse issues have been properly dealt with. Well-designed research studies and practical experience have demonstrated the benefits of family and divorce mediation for most participants. Mediation has the best interests of children as its top priority. When effectively and efficiently carried out, it reduces the economic and emotional costs associated with resolving family disputes, particularly when compared to litigation. 
If the goals of self-determination and good communication are achieved during mediation, disputants forge agreements with which they are satisfied and more willing to abide. In most family conflicts, including divorce-related ones, disputants must continue to interact to some degree with each other after the mediation process and court proceedings are concluded. The communication and problem-solving skills that they have practiced and refined during mediation can
continue to serve them well when new disagreements arise in the future. Successful mediation means that these couples should not have repeat visits to court in coming years. Settings, Models, and Stages Family and divorce mediation services are typically undertaken in one of four settings: private practices, agencies and clinics, community mediation centers, and court-connected facilities. The setting for a particular mediation case may be influenced by one or more factors, such as whether or not mediation has been mandated by the court or voluntarily pursued; the availability and accessibility of services; the mediator(s) selected to provide the services; or the issues that are in dispute. As the field of family and divorce mediation has matured, researchers and practitioners have conceptualized numerous models of the process, each with certain strengths and limitations. Among the more prominent models are facilitative mediation, evaluative mediation, transformative mediation, therapeutic mediation, narrative mediation, and various hybrid forms of mediation. When contrasted, the models differ with regard to mediator perspective and directiveness, focus, goals, and outcomes. As an illustration, facilitative mediation is process-oriented, client-centered, communication-focused, and interest-based. A mediator taking this approach would rarely formulate recommendations, give advice, or make predictions. It would be considered best practice to routinely mediate with both/all disputing family members present. Evaluative mediation is quite the opposite. Efficiently reaching a settlement is a goal, so the mediator is more likely to be directive with disputing family members, often prompting them to expose their positions and solutions to “reality testing.” Based on knowledge and expertise, a mediator carrying out this approach is more inclined to put options forward for disputants’ consideration. 
In evaluative mediation, it is commonplace to meet with disputants individually, caucus with each separately, and shuttle back and forth between them. While the proponents of transformative, therapeutic, and narrative models of mediation note fine distinctions between them, they share common characteristics. Improving the relationship between conflicted family members, divorcing partners, and/ or separating parents is the primary focus. Though
reaching a settlement is a goal, perhaps the most important goal of mediation is to have the disputants adjust the way they interact with one another. Interactions that have been negative, destructive, alienating, demonizing, and rooted in self-absorption in the past are gradually transformed during the mediation process into ones that are positive, constructive, connecting, humanizing, and responsive. To accomplish this transformation, the mediator employs listening, reflecting, summarizing, and questioning skills while prompting the disputants to practice these same skills as they interact with each other. Most processes unfold in stages, though it may not be clear just when one stage ends and another begins. Furthermore, the stages may not occur in tidy sequence. Sometimes, a gain is followed by a setback before permanent progress is made. This is true of family and divorce mediation. One perspective posits that there are five typical stages of mediation: introduction, information gathering, framing, negotiating, and conclusion. The introduction stage consists of the mediator(s) and disputants becoming acquainted with each other, as well as arriving at a mutual understanding of the process of mediation and the mediator’s role in it. There are occasions when two mediators of different genders or different professional training will jointly mediate a family or divorce dispute (e.g., a female attorney and a male mental health or family professional). Whether conducted by one or two mediators, some ground rules regarding how participants are to interact with one another are established during this stage. An agreement or contract to mediate may even be signed. In the case of divorces or separations that involve children, the mediator will learn about the children through each parent’s eyes. For example, the mediator will seek descriptions of each child’s characteristics, talents, and needs, and then call attention to commonalities between the parents’ perspectives. 
In the information-gathering stage, issues in dispute are identified and prioritized. While the past is sometimes relevant, the present is typically most important. The mediator will likely ask the disputants to bring factual information and documents to the table as a basis for discussion (e.g., tax returns, bank and mortgage statements, or school records). During the framing stage, the mediator encourages each disputant to outline his or her reasons for wanting certain outcomes in the settlement. Hopefully, when each person’s concerns, priorities, goals,
and values are heard, overlap is found. Such overlap provides a good foundation for the next stage, negotiating. Brainstorming options and weighing their pros and cons is at the heart of this stage. Unacceptable options are eliminated, whereas promising options are given closer examination, and trade-offs are considered. When the needs and interests of each disputant, and especially those of the children in situations of divorce or separation, have been met as fully as possible, it is time to draft a tentative settlement agreement. As sections of the agreement begin to solidify, each disputant is urged to have outside consultants, particularly an attorney, review what has been written. The goal of the concluding stage is a clear and unambiguous agreement that each disputant perceives to be fair and workable, and upon signing, is committed to following. As a Profession Professional organizations devoted in part or in full to mediation exist at the national, state, and local levels. Most noteworthy are two sections of the American Bar Association (Family Law and Dispute Resolution), the Association of Family and Conciliation Courts, and the Association for Conflict Resolution (resulting from the merger of the Academy of Family Mediators, the Conflict Resolution Education Network, and the National Institute for Dispute Resolution). A number of members of the Family Section of the Association for Conflict Resolution, disillusioned by unrealized goals of the merger, recently formed the Academy of Professional Family Mediators. These entities hold annual conferences. Some sponsor journals, including Conflict Resolution Quarterly and Family Court Review, which feature articles specifically related to family mediation. They also promote and publicize training. Mediator training programs vary in terms of setting, content, duration, and inclusion of practicum and/or supervision components. Few, if any, training programs are accredited, though this is a goal. 
While the number of master’s degree programs focusing on family or divorce mediation has increased, the attainment of such a degree has not yet become the established prerequisite to becoming a practitioner. Most training programs take place in nonacademic settings, and range from one to five days in length. Nonetheless, there is growing consensus about basic qualifications and core competencies needed for high-quality performance as a family mediator.

The model standards of practice indicate that minimum expectations include knowledge of family law; knowledge of and training in the impact of family conflict on parents, children, and other participants; knowledge of child development, child abuse and neglect, and domestic abuse; education and training specific to the process of mediation; and ability to recognize the impact of culture and diversity. Is family/divorce mediation a profession? Many believe that it is, or soon will be. The existence of professional organizations, peer-reviewed publications, conferences, reputable training programs, core qualifications and competencies, and model standards of practice demonstrate that family/divorce mediation can be legitimately classified as a profession. Deborah B. Gentry Illinois State University See Also: Conflict Theory; Custody and Guardianship; Divorce and Separation; No-Fault Divorce; Parenting Plans. Further Readings Beck, Connie J. A. and Bruce D. Sales. Family Mediation: Facts, Myths, and Future Prospects. Washington, DC: American Psychological Association, 2001. Emery, Robert E. Renegotiating Family Relationships: Divorce, Child Custody, and Mediation. New York: Guilford Press, 2012. Folberg, Jay, Ann Milne, and Peter Salem. Divorce and Family Mediation: Models, Techniques, and Applications. New York: Guilford Press, 2004. Moore, Christopher W. The Mediation Process: Practical Strategies for Resolving Conflict. San Francisco: Jossey-Bass, 2003. Stoner, Katherine E. Divorce Without Court: A Guide to Mediation and Collaborative Divorce. Berkeley, CA: Nolo, 2009.

Family Medicine

Family medicine is a medical specialty concerned with comprehensive, longitudinal health care for individuals regardless of age, gender, or disease history. It was established in 1969 to train primary care physicians (PCPs) against a background of medical specialization and fragmentation of care. Doctors who practice family medicine are called family doctors or family physicians. Primary care physicians include family doctors, general internists, general pediatricians, and obstetrician-gynecologists. Family doctors are unique among this group because they are qualified to treat patients of all ages, whereas the others treat one specific population (i.e., adults, children, or women).

Family medicine emphasizes a holistic approach to health care. Family doctors are trained to integrate the biological, clinical, and behavioral sciences in order to care for their patients. This entails understanding not only the biology, diagnosis, and treatment of a patient's disease, but also the motivations, concerns, and stressors of the patient, as well as the strengths or constraints of the patient's community. Despite its history and continued importance among medical specialties, there is increasing concern about a potential shortage of family doctors.

Relevant Definitions
A medical specialty is a field of medicine that has a board maintaining national professional standards for doctors within that field. Within medicine there are 24 boards, including one for family medicine. Other specialties include surgery, pediatrics, and internal medicine.

A PCP is a doctor who oversees most aspects of a patient's health care and develops a relationship with that patient. The PCP evaluates acute health complaints, such as sprains and upper respiratory infections; offers preventive measures, such as immunizations and colonoscopies; and manages chronic conditions, such as diabetes and depression. The role of a PCP is thus distinct from that of a specialist, who manages one specific health need and is not necessarily based in the patient's community. The PCP's contribution arises from the breadth of his or her training, as well as his or her personal connection to the patient.
Importance of Family Medicine
Half of all visits to doctors' offices occur at primary care centers, and approximately one-third of PCPs are family doctors. While family doctors are geographically distributed across the United States in both rural and urban areas, general pediatricians and internists are more likely to practice in urban centers. Family doctors in rural settings therefore provide a wide range of medical care, because fewer general pediatricians and internists practice there. Furthermore, within urban areas, family doctors are more likely to work in medically underserved communities. Without these family doctors, many Americans would face more restricted access to health care.

Family medicine contributes to an efficient health care system. The benefits of having a family doctor include lower mortality rates, decreased reliance on emergency departments and hospitals, and improved preventive care through regular checkups. Additionally, patients may disclose information to a family doctor with whom they have a relationship that they might not think to mention to a specialist, which can improve the health care they receive. Family doctors are the only doctors qualified to see entire families—children, adults, elderly people, and pregnant women alike—and they often do so at the same appointment, thus offering a comprehensive and holistic approach to health care.

Doctors in family medicine have also made significant contributions to medical thinking, including new ways of understanding how disease affects people and their families, how social context affects a person's disease processes, and how to approach the systematic delivery of health care in the United States.

History of Family Medicine
Family medicine was established as a medical specialty in 1969 in response to the over-specialization of doctors and the shortage of primary care physicians that had developed earlier in the century. In 1910, the Carnegie Foundation for the Advancement of Teaching published the Flexner Report, detailing the changes necessary to improve American medical care and bring it on par with that of England and Germany. These changes included establishing premedical requirements, standardizing medical education, creating full-time faculty positions for teaching and research, and attaching medical schools to universities.
Medicine thus became more scientifically rigorous and, as a result, began to fragment into different specialties in order to better understand and treat different disease processes. The first specialty was created in 1917, and growth was rapid; by 1940, 19 specialties existed. The push toward technological advancement after World War II further reinforced this trend. The focus of medicine thus shifted from the community, where medicine had been practiced in the 19th century, to the university medical center and its associated hospitals.

During this shift, general practitioners lost their prestige and hospital privileges and, having completed only one year of post-medical school training, were less equipped to handle the new disease processes being characterized by specialists. Furthermore, there were not enough physicians to practice medicine in communities. In 1900, half of graduating medical students went into general practice; by 1964, less than a fifth did so. Patients often had to receive their health care from multiple providers, one for each problem.

By the 1960s, there was public dissatisfaction with the physician shortage, the high cost of health care, inaccessible health care in rural areas and inner cities, and the fragmentation of care. Several reports were authored that reinforced the need for a specialty focused not on a single organ system, but on the entire individual and his or her environment. The cultural turmoil of the 1960s also provided the setting for the establishment of a medical specialty that offered holistic care to all segments of the population. Family medicine was thus created.

Training of Family Doctors
Family doctors generally receive broad medical training, which enables them to care for patients in a wide array of settings. This breadth of knowledge and experience distinguishes family doctors from many other areas of medicine. While becoming a board-certified physician is technically voluntary, most hiring practices and hospitals now require board certification. Becoming a board-certified family doctor requires successful completion of medical school and residency. Medical schools are generally four-year post-college programs that grant a doctor of medicine (M.D.) or a doctor of osteopathic medicine (D.O.) degree.
Admission to American medical schools is often competitive and requires completion of an admission exam in addition to certain science courses, such as biology, chemistry, and physics. Residency is the training process in which a medical school graduate, a newly minted M.D. or D.O., practices medicine under the supervision of more experienced physicians. In 2013, there were 461 accredited family medicine residency programs in the United States, accepting 3,575 first-year residents and training a total of 10,384 residents in family medicine. During their fourth and final year of medical school, students are matched to these residency programs through the National Resident Matching Program, a uniform and competitive system whereby students are matched to programs based on their preferences and qualifications.

The family medicine residency, like those for pediatrics and internal medicine, lasts three years after medical school. It provides core training in pediatrics, obstetrics and gynecology, internal medicine, psychiatry and neurology, surgery, and community medicine, as well as supplemental training in other fields. Residents learn their areas of medicine by treating patients with a variety of health concerns; this practical experience may be combined with lectures and conferences. Such rigorous training ensures that family doctors are equipped to handle acute and chronic medical conditions and to offer preventive care to their patients. Residents may train in a variety of settings, including ambulatory, emergency, hospital, and home and long-term care facilities.

A residency in family medicine may be university-based or community-based. In a university-based program, family medicine residents train alongside residents from other specialties in academic hospitals. The benefits of these programs include a wealth of research opportunities, specialized training in core fields, and opportunities to teach medical students. In a community-based residency, residents train in smaller hospitals and health centers, where they are often the only residents. The advantages of these programs include exposure to more diverse patient populations, continuity of care, and extensive training in common diseases. Whether a medical school graduate pursues a university-based or community-based family medicine residency depends on the graduate's interests.
Family medicine residencies also differ in how much obstetric training they offer. After completing residency and passing the exam administered by the American Board of Family Medicine (ABFM), a person is fully qualified to practice as a board-certified family doctor. Board certification is different from licensure, which is granted after an M.D. or D.O. completes one year of post-medical school training and passes the U.S. Medical Licensing Examination.

In lieu of entering active practice immediately following residency, a physician may elect to undergo further training in a fellowship, which provides additional training in a subspecialty. Family medicine offers fellowships in adolescent medicine, emergency medicine, faculty development, geriatrics, hospice and palliative care, international medicine, obstetrics, preventive medicine, research, rural medicine, sports medicine, substance abuse, and women's health. A physician interested in working in an academic setting may consider a fellowship because teaching institutions often view fellowship training favorably. Physicians may also pursue a fellowship to gain additional research training or clinical skills.

Once they achieve board certification, family doctors maintain it by reapplying to the ABFM every three years, when they must demonstrate professionalism, competence, and a commitment to learning. The ABFM maintains quality standards and minimum competency requirements for board-certified family doctors. Family medicine was among the first medical specialties to mandate continuing medical education (CME) for its board-certified members, which ensures continued learning and knowledge of medical advancements in the field. This education is available through live courses, publications, or online programs.

Shortage of Family Medicine Doctors
As of the early 21st century, just as in the mid-20th century, there is once again a shortage of primary care doctors in the United States. This shortage stems from a combination of longer life expectancy, which requires a longer period of care, and a newly insured population under the Affordable Care Act.
While the demand for family doctors increases, their supply has decreased in recent years as family doctors continue to retire and medical school graduates choose to specialize in other fields of medicine. Although family medicine provides doctors with the opportunity to learn about and help their patients and communities, barriers to entry include a lower salary relative to other specialties, as well as perceived burnout and stress among PCPs resulting from treating many patients with a wide range of health care issues.


The lower average salary in family medicine compared to other specialties is particularly problematic given the increasing educational loan debt acquired by those entering medical education. By many estimates, the median educational debt carried by new physicians is over $100,000, which prompts many medical school graduates to enter specialties that pay better than family medicine.

To counter some of the disincentives for entry into family medicine, certain programs have been proposed or instituted to recruit and retain family doctors. These programs often benefit from partnerships between medical schools, the private sector, state governments, and the federal government. Financial incentives may include scholarship programs, tuition waivers, educational loan forgiveness, and low-interest student loans for those committed to pursuing careers in family medicine. Medical schools may also choose to emphasize primary care and supplement the resources dedicated to this area of medicine. They may introduce rural-track programs in an effort to train family doctors for rural areas, where family doctors are often needed most. Finally, they may recruit people for this area of medicine, both for the faculty and for the student body.

In sum, family medicine is an important branch of primary care that provides comprehensive health care to people of all ages, including children, adults, the elderly, and pregnant women. Because family doctors are more likely to practice in medically underserved areas, avoiding a shortage of family doctors is critical to ensuring the health of the entire American population.

Elizabeth Ryznar
Harvard Medical School

See Also: Family Planning; Medicaid; Medicare.

Further Readings
American Academy of Family Physicians. "Family Medicine Specialty." http://www.aafp.org/about/the-aafp/family-medicine-specialty.html (Accessed September 2013).
McGaha, Amy, et al. “Responses to Medical Students’ Frequently Asked Questions About Family Medicine.” American Family Physician, v.76/1 (2007).


Taylor, Robert, ed. Family Medicine: Principles and Practice, 6th ed. New York: Springer, 2003.

Family Planning

The decision to have a child is a significant life-changing event. In fact, no other single decision transforms a household in both the short and long term quite like it. Children have sweeping social and economic impacts, not to mention consequences for relationships and priorities. Most people who make this decision understand that children are expensive and time consuming, but also that parenting is rewarding and satisfying. Parents report that, overall, children increase their happiness and quality of life and make them better people.

However, the number of U.S. households made up of married parents with children has declined steeply since 1970. Between 1970 and 2012, the percentage of U.S. married households with at least one child under 18 declined from 40 percent to 20 percent. In 2013, there were more single-parent households with children (28 percent) than married-parent households with children. In general, the trend in the United States is toward smaller families than in previous generations. This trend is the result of the combination of three factors: (1) the widespread availability of birth control to guide family planning, (2) the increasing cost of child rearing, and (3) decreasing annual household income.

Family Planning
The first factor that has influenced the size of the American family is the development and use of birth control. Contraception gives women control over their reproductive health and empowers them to decide whether to have a child and, if so, the number and spacing of those children. This is critical because delaying childbirth enables women to complete their education and thereby increase their earning potential. Birth control did not become legally available in the United States until the 1960s, and was not easily accessible until the 1970s. In 1873, Congress passed the Comstock Law, which prohibited the distribution by mail of obscene material, including any information regarding birth control.

In the early 1900s, antipoverty and women's rights activist Margaret Sanger began the reproductive rights movement in the United States. She opened a birth control clinic in New York City in 1916 and was soon arrested for violating the Comstock Law by mailing pamphlets to women who wanted to stop having babies. She then founded the American Birth Control League, later renamed the Planned Parenthood Federation of America. In 1936, the courts ruled that physicians could distribute contraception to patients, but birth control methods were limited to diaphragms and condoms and were not widely available.

In 1960, the birth control pill was developed and sporadically made available to married women. However, health insurance companies were not mandated to cover the pill until 40 years later, in 2000. Women who wanted the pill had to pay for it out of pocket or find a Title X family planning clinic. In 1965, the U.S. Supreme Court ruled in Griswold v. Connecticut that married couples have a right to privacy and may use birth control, but unmarried women were not granted the right to possess birth control until 1972, in the Supreme Court decision Eisenstadt v. Baird. The following year, women were given the legal right to abortion in the landmark case Roe v. Wade. Prior to 1973, abortions were criminal acts, and they often took place in unsafe and unsanitary conditions, resulting in serious complications and death for many women. By the mid-1970s, birth control was accessible to mainstream society, and because of federal legislation, it was also available to poor women.

Birth control includes a range of methods: hormonal contraceptives (the oral pill, injections, patches, and vaginal rings), sterilization (tubal ligation and vasectomy), barriers (condoms, diaphragms, and sponges), and spermicides. Although sterilization is the most effective method, it is not easily reversible.
Hormonal contraceptives are almost as effective and are easily reversible. Although abstinence is considered a method of birth control, it has a high failure rate. Emergency contraception, or the "morning-after pill," disrupts fertilization, thus preventing an unwanted or mistimed pregnancy. These methods vary in price and have different side effects. Condoms are inexpensive and available without a prescription, but human error means that they are not the most effective means of birth control. Emergency contraception pills range from $35 to $60 and are available over the counter. Vasectomies may cost up to $1,000, and the majority of insurance companies cover the procedure. Tubal ligations are much more expensive, costing approximately $3,800, though most insurance companies cover them; they are often performed after the delivery of a baby. Tubal implants are cheaper, ranging in cost from $800 to $1,250. In short, birth control methods vary widely in terms of cost and effectiveness.

The Sanger Clinic building is now a landmark in New York City. Margaret Sanger's Birth Control Research Bureau was located here from 1930 to 1973.

Title X Family Planning
For the past 40 years, access to affordable birth control has been available to low-income women and men through federal legislation known as the Title X Family Planning Program. Title X, as it is commonly called, was enacted in 1970 as part of the Public Health Service Act, Public Law 91-572. No other single federal program has done more to improve the health and economic well-being of women. It prevents unplanned pregnancies, empowers women to determine the spacing and number of their children, and enhances the health outcomes of mothers and infants.

Title X provides several core health services to women and men who could not otherwise afford them: (1) contraception, (2) pregnancy testing, (3) breast and cervical cancer screening, (4) HIV testing, and (5) screening and treatment for sexually transmitted infections. Title X does not cover abortions; no Title X funds can be used to pay for abortions. Estimates indicate that over 5 million people use Title X services each year, the majority of them white women in their 20s. Approximately 91 percent of clients had incomes below 250 percent of the federal poverty level, which equates to $45,875 for a family of three. Title X is one of the most cost-effective federal programs enacted by Congress, and it continues to give poor women and men power over their reproductive lives, family size, and economic futures.
Cost of Children
A second factor that has influenced the size of the American family is the cost of child rearing. In 2010, it cost an average of $242,000 for a married, middle-income couple to raise a child from birth to age 18, excluding college. The annual cost of raising a child ranges from approximately $9,000 to $25,000 for a married couple with two children. These costs vary widely by household income, geographic region, and the age of the child.

Geographic region has a major impact on child-rearing expenses. The most expensive region in the United States is the northeast, and the least expensive areas are rural areas and the urban south. The average two-parent married couple in the northeast earning over $106,000 will spend approximately $446,100 on a child from birth to age 18, compared to roughly $143,000 for those living in rural areas and earning less than $62,000.

Housing, childcare, and education are the largest expenses for families. For parents in the middle-income group with infants and toddlers, the biggest expense is childcare, and the younger the child, the more expensive the childcare. Full-time infant daycare ranges from $4,600 to $20,200 per year, compared to $3,900 to $15,460 for a 4-year-old in a center. Although childcare expenses decrease substantially once children enter school (if they attend public school), children's activities increase as they grow older and their interests become more expensive. These activities may include camps, music lessons, sports, school activities, and enrichment programs.


However, parents who pay for their children's college find that it is the most expensive line item. College expenses vary widely depending on the type of institution: two-year institutions are less expensive than four-year institutions, and public universities are cheaper than private universities. Based on the 2010–2011 academic year, an average year at a four-year public institution cost roughly $16,000, versus $32,617 at a private institution. These numbers include tuition, room and board, and fees. However, the total cost of a college degree depends on the number of years it takes a student to graduate; the typical college student today takes longer than four years to graduate, which substantially increases the total cost.

Another factor related to child-rearing costs is maternity leave. In addition to the time it takes a new mother to recuperate from giving birth, it is critical that both parents bond with their baby. For working parents, taking time off from their jobs to care for a newborn has a price. Maternity leave policies vary greatly across organizations. All public agencies and organizations with more than 50 employees must adhere to the federal Family and Medical Leave Act of 1993, which offers 12 weeks of unpaid leave. Many families cannot afford to take 12 weeks off work without compensation, and many work in small businesses that are not covered under the law.

Annual Household Income
The third factor is income. Annual household income sometimes influences a person's decision about whether to become a parent. After weighing the pros and cons of parenthood, some couples and singles decide not to have children; such couples are often referred to as "double income, no kids," or DINKs. A core reason that married couples and singles decide not to become parents is financial expense.
Estimates based on 2011 median household income indicate that 65 percent of households earn less than $75,000, which may put parenting out of reach for those families. Moreover, since 2008, U.S. median annual household income (adjusted for inflation) has decreased by $4,000: in 2007, the median household income was slightly over $54,000; by 2011, it had dropped to $50,500. More alarming is the fact that families at the bottom income levels have been hit the hardest. From 1996 to 2011, the number of families living in extreme poverty doubled; in 2013, more than 1.5 million American households lived in extreme poverty. The recession of 2008 rendered parenting a dubious proposition; many families cannot afford to have children without encountering serious financial hardship.

Lorenda A. Naylor
University of Baltimore

See Also: Abortion; Birth Control Pills; Childless Couples; Infertility; Later-Life Families.

Further Readings
Chandra, Anjani, et al. "Fertility, Family Planning, and Reproductive Health of U.S. Women: Data From the 2002 National Survey of Family Growth." Vital and Health Statistics, v.25 (2005).
Cleland, Kelly, et al. "Family Planning as a Cost-Saving Preventive Health Service." New England Journal of Medicine, v.364/18 (2011).
Frost, Jennifer J. and Laura Duberstein Lindberg. "Reasons for Using Contraception: Perspectives of U.S. Women Seeking Care at Specialized Family Planning Clinics." Contraception, v.87/4 (2012).
U.S. Bureau of the Census. Current Population Survey. http://www.census.gov/cps (Accessed January 2014).
U.S. Department of Agriculture, Center for Nutrition Policy and Promotion. "Expenditures on Children by Families, 2012." http://www.cnpp.usda.gov/Publications/CRC/crc2012.pdf (Accessed January 2014).
U.S. Department of Education, National Center for Education Statistics. Digest of Education Statistics, 2011. http://nces.ed.gov/FastFacts/display.asp?id=76 (Accessed January 2014).
U.S. Department of Health and Human Services, Office of Population Affairs. Title X Family Planning Program. http://www.hhs.gov/opa/title-x-family-planning/index.html (Accessed January 2014).

Family Research Council

The Family Research Council (FRC), a Christian lobbying organization, promotes politically and socially conservative policies. Started in 1981 by James Dobson, the organization was incorporated in 1983 as a nonprofit educational institution in Washington, D.C., as part of Dobson's Focus on the Family organization. The founding board of the FRC included psychiatrists Armand Nicholi, Jr., of Harvard and George Rekers of the University of South Carolina School of Medicine. In 2010, Rekers was implicated in a scandal when a young man he claimed to have hired as a travel assistant came forward with accusations that he had given Rekers nude massages involving genital touching. The selection of men like Dobson, a practicing clinician prior to his work in radio broadcasting with Focus on the Family; Nicholi; and Rekers, who is well known for using his platform as a scientist to argue against gay rights, underscored the FRC's goal of creating a conservative educational outlet informed by scientific research.

Jerry Regier, a Department of Health and Human Services administrator under President Ronald Reagan, first led the organization, followed by Gary Bauer, a domestic policy advisor to Reagan. During Bauer's tenure, the FRC's political activism threatened Focus on the Family's tax-exempt status, so the two organizations separated in 1992. By the time of Bauer's departure in 2000 to run for president, the FRC's influence had grown considerably. From 2000 to 2003, Ken Connor, a lawyer with extensive pro-life experience, led the FRC. He was followed by Tony Perkins, a two-term state legislator from Louisiana who had worked in that state to oppose the gambling industry and promote pro-life policies. As a state senator, Perkins authored the first U.S. covenant marriage law, limiting grounds for divorce. Perkins is a frequent guest on political talk shows, often serving as the voice of conservative religion. His daily radio show, Washington Watch With Tony Perkins, features prominent conservative leaders discussing current events.
Today, the FRC is associated with FRC Action, a 501(c)(4) lobbying political action committee advancing conservative positions on topics including gay rights; abortion and contraceptives; divorce; homeschooling, sex education, intelligent design, and school prayer; parental rights; embryonic stem cell research (though it promotes adult stem cell research); pornography, profanity, and indecency; and global climate change. The FRC also lobbies for changing the tax code to increase benefits for married families. Additionally, the FRC actively opposes what the organization calls “judicial activism”—specifically, the legal recognition of gay rights by judges.


Activities
The FRC maintains an active publishing program, producing Web-based and paper texts, including e-mail news alerts, press releases, policy statements, amicus briefs, and pamphlets. The FRC's most notable contribution to public debate is FRC Action's Values Voter Summit, held each fall in Washington, D.C. Started in 2006, this event includes a Values Voter straw poll that aims to reveal conservative believers' support for the next Republican presidential nominee; the poll has never accurately predicted the eventual nominee. However, the summit grew in importance after the larger Conservative Political Action Conference in 2011 included GOProud, a gay rights group, reflecting the tension within the greater Republican Party over gay rights.

The FRC's depiction of same-sex attraction as inherently disordered, unnatural, harmful to society, detrimental to families, and unhealthy for individuals has earned it criticism from social scientists who question the validity of the group's claims. The organization frequently calls upon discredited or questionable research produced by Tim Dailey and Peter Sprigg; the National Association for Research and Therapy of Homosexuality (NARTH), which advocates "reparative therapy" aimed at assisting gay people in overcoming their sexual desires; and the American College of Pediatricians, a conservative group that splintered from the American Academy of Pediatrics when that group depathologized homosexuality. The FRC's continued reliance on scientifically disreputable information, and in particular its circulation of stereotypes of gay men as sexual predators, prompted the Southern Poverty Law Center (SPLC) to add the FRC to its list of hate groups, a designation that even some supporters of gay rights considered an overstatement.
FRC as a Hate Group
Its designation as a hate group prompted outrage from the FRC, which claimed that the SPLC had included it on the list solely because it opposes gay marriage and gay rights. The SPLC quickly reminded the public that the FRC was included not for its anti-gay rights perspective, but for its knowing reference to discredited research to support its claims, as well as what the SPLC has identified as repeated anti-gay hate speech.


The FRC viewed the hate group designation as motivation for an attempted shooting at its headquarters. On August 15, 2012, 26-year-old Floyd Corkins II entered the building and shot and injured security guard Leonardo Johnson. Despite his injuries, Johnson, with help, incapacitated Corkins until police arrived, an act of bravery recognized with the mayor's medal of honor. Corkins had brought nearly 100 rounds of ammunition into the building in hopes, he said, of committing mass violence. He also had with him 15 chicken sandwiches from Chick-fil-A. Only a few weeks prior to the attack, the president of the restaurant chain had spoken out against gay marriage, prompting the FRC, among others, to support Chick-fil-A Appreciation Day. Corkins reportedly told police that he had intended to smear his victims' faces with the sandwiches as a political statement. He was explicit in his statement to police that he had targeted the FRC because of its anti-gay rights activism, and that he had seen the group listed on the SPLC Web site. Since the incident, the FRC has repeated its claim that the SPLC's hate group labeling contributed to the violence.

Rebecca Barrett-Fox
Arkansas State University

See Also: American Family Association; Christianity; Evangelicals; "Family Values"; Focus on the Family; Protestants.

Further Readings
Peterson, David James. "The 'Basis for a Just, Free, and Stable Society': Institutional Homophobia and Governance at Family Research Council." Gender and Language, v.4 (2010).
Smith, Lauren Edwards, Laura R. Olson, and Jeffrey A. Fine. "Substantive Religious Representation in the U.S. Senate: Voting Alignment With the Family Research Council." Political Research Quarterly, v.63 (2010).

Family Reunions Family reunions are get-togethers of multiple family units comprising at least three generations. This type of kinship ritual fosters recurring, patterned

interactions among family members that serve to honor family relationships. Reunions help to produce and reinforce social webs, family and cultural identity, and shared beliefs and values. These events may be conceptualized as celebrations, which are shared enactments common within the broader culture, and which may become a family tradition when they take on a form that is unique and idiosyncratic to the family’s identity. Family reunions may take place on a regular basis, such as annually or every five years, or on an intermittent basis, or they may coincide with another ritual that brings the family together, such as a wedding or funeral. Reunions allow families to knit together their past, present, and future, and to maintain the family system, especially when families are geographically dispersed. When successfully enacted, the family reunion ritual preserves a family’s roots, extends its history to new kin (e.g., spouses or children coming of age), and builds family bonds and connectedness, particularly multigenerational connectedness. Family rituals tend to become more important to younger generations once they have children, when they begin to appreciate their place in the family’s history and desire stronger connections to the family. While the reunion ritual has many benefits for the family, it may also bring to the surface family stressors, turbulent boundary issues concerning who is and is not included, or loyalty conflicts that may be largely shelved when the family is not together. Reunion Structure and Planning The forms that family reunions take are as varied as families themselves. They range from a single event to multiple days with a series of organized activities. Reunions include activities such as picnics, formal banquets, workshops, plays, or fashion shows that engage with family history. 
Meals are important to most family reunions, and offer another way to establish rituals and traditions associated with the family’s cultural roots, geographic region, or recipes from beloved family members. Artifacts are often highlighted in family reunions, ranging from photographic displays, to religious items of symbolic importance, to matching T-shirts. Family reunions often coincide with, and help facilitate, family history or genealogy projects. Reunions may be held in a location of significance to the family, for example, on a family farm or in a town associated with multiple generations, or in central
locations that facilitate easy or affordable travel. For families with troubled pasts, reunions may be planned in a new or neutral location to provide a fresh start for family relations. Families with the financial means may plan trips to an ancestral home or to exotic locations for additional adventure. Reunion planning normally becomes the responsibility of family kinkeepers, most often middle-generation women who provide family support and maintenance. Kinkeepers may form a committee within the family to take on various tasks, including coordinating communications, locating venues, planning activities and meals, and organizing photography or videography. Families who are able to do so may hire a reunion planner. Social media provides families with the ability to communicate and share resources and photographs. Digital communication is not replacing family reunions, but rather digital media make it easier to locate family members and plan family reunion events. Family Reunions and Ethnicity Family reunions allow people to connect and celebrate their ethnic heritage. While there has been surprisingly little research on the topic, family reunions are better documented in African American families. The Family Reunion Institute at Temple University sponsored the African American Family Reunion Conference for many years, and provides information on planning reunions. For African Americans, family reunions began after emancipation, and emerged as rituals to restore family cohesiveness and rekindle ties that had been weakened by slavery, Jim Crow laws, and white supremacy. For some families, former plantations or rural communities in which they had roots emerged as sites where reunions were held. Other African American families preferred to start anew in locations to which families had relocated. 
Some family reunions also included others who lived on plantations, both black and white, who would come together in recognition of their intertwined, and at times interracial, history. Beyond their celebratory nature, African American family reunions help to preserve cultural heritage, strengthen positive role modeling for future generations, and ensure the high standing of family elders. In addition, some families use the ritual as an opportunity to share information about health conditions of particular concern to African Americans, including diabetes and kidney disease.

In recent decades, family reunions have become prevalent among migrant families, helping them to maintain multiple social, political, and cultural connections across national borders. For migrants, family reunions create a space where individuals can share memories and learn about their ancestors and cultural histories. Because migration often results in a deterioration of family connections, family reunions help to preserve family ties and cultivate new ties among those who are widely or internationally dispersed. Dawn O. Braithwaite Jenna Stephenson Abetz Julia Moore University of Nebraska, Lincoln See Also: African American Families; Rituals; Social History of American Families: 2001 to the Present. Further Readings Baxter, Leslie and Catherine Clark. “Perceptions of Family Communication Patterns.” Western Journal of Communication, v.60 (1996). Kluin, Juyeon and Xinran Lehto. “Measuring Family Reunion Travel Motivations.” Annals of Tourism Research, v.39 (2012). Leach, Margaret and Dawn O. Braithwaite. “A Binding Tie: Supportive Communication of Family Kinkeepers.” Journal of Applied Communication Research, v.24 (1996). McCoy, Renee. “African American Elders, Cultural Traditions, and the Family Reunion.” Generations, v.35 (2011). Sutton, Constance. “Celebrating Ourselves: The Family Reunion Rituals of African-Caribbean Transnational Families.” Global Networks, v.4 (2004). Wolin, Steven and Linda A. Bennett. “Family Rituals.” Family Process, v.23 (1984).

Family Service Association of America The Family Service Association of America is a national organization that sets standards, promotes family health, and provides communication among

social work agencies that offer family counseling, casework, and other services. The FSAA was established in Boston in 1908, began operation in 1911, and included agencies in Canada and the United States. At that time, it was known as the National Association of Societies for Organizing Charity, and its 59 members were spread throughout New England and the Pacific Northwest. Origins The first Charity Organization Society (COS) in the United States was founded in 1877 in Buffalo, New York. By the end of the century, many communities had local charity organizations, but they were usually autonomous with no communication or coordination between them. Thus, some families and individuals learned to take advantage of the system by seeking help from numerous charities. To address the issue, local charity leaders began meeting at the National Conference of Charities and Correction (NCCC). Its goals were to use casework to help families and individuals become self-sufficient and to research and disseminate means of preventing poverty and other social problems. In 1879, seeking a more scientific approach to ameliorating the woes of urbanization and reducing abuses from duplication of effort, the NCCC established a standing committee on charity organizations. The basis for the national organization that promoted and coordinated charity organization in cities throughout Canada and the United States was a series of meetings at the NCCC and similar work by the Russell Sage Foundation’s Charity Organization Department. The FSAA’s name has changed numerous times over the years. From the National Association of Societies for Organizing Charity, it became the American Association of Societies for Organizing Charities, and then the American Association for Organizing Charities. By 1930, it was known as the Family Welfare Association of America, before becoming the Family Service Association of America in 1946. 
In 1993, it became the Alliance for Children and Families and is part of Families International Inc. For many years, the FSAA’s headquarters were in New York City, before they relocated to Milwaukee, Wisconsin, in the 1980s. Activities The association supports both volunteer and government agencies. Its research efforts began in 1922,

with the establishment of a committee on industrial problems to address unemployment. Other committees established in the 1920s and 1930s dealt with homelessness, relief, housing, and subsistence. The FSAA expanded its scope after World War II to include counseling and casework support for children, the elderly, and the mentally ill. In 1952, the FSAA spearheaded the establishment of criteria for skilled professionals in the social services field. By that time, it included 250 family service agencies that were assisting over 750,000 people per year through their missions to provide aid to families and individuals in need. The consensus in the field at the time was that family and personal problems were the result of emotional, physical, social, and economic factors. Several government programs provided economic relief by then, which left nonprofit family service agencies free to concentrate on counseling and preventative services. Financial support for FSAA agencies came from governmental and private funds, or from fees charged by some member agencies. A public issues committee began in 1953, lasting until 1968. Project ENABLE, which began in 1965, dealt with the usefulness of neighborhood coalitions in combating poverty, and in the 1970s, the FSAA addressed the growing concern over unmarried parents. Also in the 1970s, the FSAA considered but rejected merging with the Child Welfare League of America (CWLA) and the Florence Crittenton Association of America, and between 1973 and 1976, the FSAA and CWLA worked together to found the Council on Accreditation for Families and Children, which became independent in 1977 after a brief period as a joint body of the two organizations. After it was renamed the Alliance for Children and Families, Inc. (ACF) in 1993, the organization formed Ways to Work Inc., which provided technical assistance in program development and fundraising, software, and matching low-interest loans to seed local loan pools. 
A service begun in Minneapolis in 1985 provided low-income single parents with loans to cover unanticipated expenses that would otherwise interfere with their work or education. In 1990, the loan program in Duluth became coadministered by a local bank, and in 1994, the program went statewide. The Ways to Work loan program added eight to 12 new sites per year. Borrowers with bad or no credit received intense training before and during the life of the loan. The program operated in 20 states, with 35 agencies providing

nearly 3,000 loans totaling over $6 million between 1996 and 2002. The default rate was 14 percent of dollars lent, and 85 percent of loans were used to purchase cars, mostly by women, sometimes with supplemental individual funds, always with credit counseling beforehand. In 1905, the NCCC began publishing a journal to exchange information among 14 charity societies and the newly formed Russell Sage Foundation. The Family, a monthly magazine, began publication in 1920, changing its name to the Journal of Social Casework in 1946, and to Social Casework in 1950. Between 1940 and 1971, the FSAA published the Highlights newsletter, later renamed Family Service Highlights. John H. Barnhill Independent Scholar See Also: Living Wage; Minimum Wage; Welfare. Further Readings Alliance for Children and Families. “Family Service Association of America Snapshot” (2010). http://alliance1.org/centennial/book/family-service-association-america-snapshot (Accessed January 2014). Davidann, Jon, Linnea Anderson, and David Klaassen. “Family Service Association of America Records, Social Welfare History Archives.” Elmer L. Andersen Library, University of Minnesota, 2002. http://special.lib.umn.edu/findaid/xml/sw0076.xml (Accessed January 2014). Federal Reserve Bank of Philadelphia. “Family Services Association Ways to Work Program, Technical Brief” (August 2002). http://www.phil.frb.org/community-development/publications/technical-briefs/tbriefs4.pdf (Accessed January 2014). Hansan, John E. “Family Service Association of America: The Origin of FSAA.” http://www.socialwelfarehistory.com/organizations/family-service-association-of-america-part-i (Accessed January 2014).

Family Stress Theories Stress is an unavoidable part of family life. When major life or family events are anticipated,

individuals can engage in coping mechanisms to keep stress to a manageable level. However, unplanned events or changes, which may be internal or external in origin, can create significant stress, depending on the family’s resources and ability to cope. R. Lazarus describes stress as a “stimulus condition that results in disequilibrium in the system that produces a dynamic kind of strain.” Thus, a stressor is an event that changes the current family system and dynamic. Stressors require adaptation within the family system. Resilient families are better able to make such adaptations. Crisis Theory and the ABC-X Model Major contributions to family stress theory begin with Reuben Hill’s pivotal ABC-X model, developed in 1949. Using data on the adjustment of Iowa families to the crisis of separation and reunion during and after World War II, he set forth a two-part theoretical model of families under stress. The descriptive portion proposed a course of adjustment that assumed that families are stable until a stressor event upsets the balance, yielding a state of disorganization. The family tries to resolve the crisis through trial and error in order to return to a functionally stable level. The model labels the factors influencing crisis severity, with A representing the hardship of the event or situation, B representing the family resources, C representing the family’s perception of the event as threatening, and X representing the severity of the resulting stress or crisis. Stress or crisis is not seen as inherent in the event but as a function of the family’s response to the stressor. The Double ABC-X Model In an effort to increase the number of variables in the family stress equation, McCubbin and Patterson introduced the Double ABC-X model of family stress and adaptation, which builds on Hill’s original model. 
It redefines the original A, B, and C factors as precrisis variables, and adds postcrisis variables in an effort to describe the additional stressors prior to or following the crisis event. Thus, the effects of the crisis depend on Double A, a pile-up of demands; Double B, family adaptive resources; and Double C, the family’s perception of the situation. All of these result in Double X, the range of outcomes in response to the pile-up of stressors, from maladaptive to bonadaptive. Y. Lavee, H. McCubbin, and J. Patterson tested this theoretical model in 1985 with data on

army couples’ adaptation to the stress of relocating overseas. Their results supported the notion of a pile-up of demands, showing that previous family life events significantly influenced the postcrisis strain. Family resources and social support were both found to facilitate adaptation, with family resources directly affecting adaptation and social support acting as a buffer between stress and family adaptation. Intrafamily resources proved part of the couples’ adaptive power, such that they directly enhanced family adaptation. In particular, couples who were more cohesive, who communicated support better, and who were more flexible were better able to adapt to the pile-up of stressors in their lives. In an effort to reconceptualize family stress, A. Walker offers a criticism of the ABC-X framework and proposes an alternative contextual model, which makes room for multiple levels of analysis and takes into account the social system and sociohistorical backdrop. She argues for a distinction between individual and familial factors, and for the inclusion of macrolevels of analysis. Walker describes how each point of the model can be revised to maximize its representation. Defining the A factor (the crisis or stressor) as a discrete event ignores cumulative features of ongoing stress, such as economic deprivation and strain. McCubbin and Patterson recognized this problem of emerging events acting as new stressors when they proposed the Double ABC-X model with a Double A factor dealing with the pile-up of stress. However, the problem of requiring an initial event to begin the crisis process still remained. Identifying a specific occurrence as stressful is inconsistent with a contextual model, which suggests that change or stress is more constant in nature, and not a disruption of homeostasis. The classification of a crisis should be considered more fluid than previously described. 
For example, economic problems, which are often conceptualized as internal relationship problems in couple therapy, such as a failure of couples to work together or solve their economic problems effectively, can also be thought of in terms of the external situation, resulting from pressure from a changing economic system. Walker suggested that families’ resources and coping strategies are more predictive of the family stress process than the details of a particular stressor or event. A similar point is made by P. Boss (1987), who emphasizes that the stressor event or situation

does not directly act on the family system. Instead, it is the appraisal of the situation, as determined by internal and external contexts, that determines whether the family will cope or fall into crisis. The family’s perception of the stressor and the meaning it holds for them are a major determinant of the outcome. Although the crisis theory derived from the ABC-X model was designed to explain family functioning in reaction to stressful events, it has also been used by marital researchers to explain and predict marital outcomes. These efforts assume that declines in marital satisfaction and separation result from failure to recover from the stress or crisis. Couples experiencing higher stress may be more vulnerable to negative marital outcomes, and this effect may be moderated by the couple’s levels of adaptive resources. Crisis theory can be used to focus on the direct effects of external events and stressors on processes within and between spouses if adaptations are conceptualized as multilevel variables. Although crisis theory offers a means of predicting when declines in marital satisfaction are likely to occur, it is vague in specifying mechanisms of change in the marriage. According to the theory, marriages change in response to the need to adapt to stressful events. Mundane Extreme Environmental Stress Model The Mundane Extreme Environmental Stress Model, developed by M. Peters and G. Massey in 1983, incorporates the racism and oppression experienced on a daily level in the lives of families of color. The environment in which African Americans live is a “mundane extreme environment,” involving daily microaggressions such as receiving poor service based on skin color and ethnic heritage. These microaggressions combine to form a mundane extreme environmental stress (MEES). Mundane refers to stress so commonplace for African American families that it is taken for granted. 
The stress is extreme because of the harsh psychological impact on people of color. Racism and oppression are not viewed as additional stressors, but ones that are integrated into the everyday experiences of families of color and poor families. Such families may encounter stress and crises that impact their functioning, putting them at risk for health and mental health problems and
family conflict. These families have been labeled multiproblem or dysfunctional families. Such labels ignore the multitude of ecological factors contributing to family stress and marginalization. Racism and oppression are seen as part of the A factor (stressor) in the ABC-X model. Peters and Massey recommend the addition of a D factor in the model to represent pervasive environmental stress associated with being a person of color. Family Economic Stress Model A more specific line of research from R. Conger and colleagues draws upon an evolving Family Economic Stress Model developed to guide analysis of family economic hardship. This theoretical perspective draws on research on American families during the Great Depression and studies of the Iowa farm crisis in the 1980s. Conger and Elder’s research takes both a developmental and a behavioral perspective by evaluating the impact of economic difficulties on parents, and later their children, and by examining the interaction of couples. The Family Economic Stress Model frames economic hardship as operating through the interdependent emotions and behaviors of family members. Family members’ interdependent lives connect broad socioeconomic changes to the experiences and well-being of individual family members. They believe that mounting economic pressures alter relationships by changing individual behavior and family relations. More generally, their theoretical framework for understanding family stress proposes that stressful events or conditions create strains in daily living. These strains affect the mood and behaviors of individual family members. Findings provided support for the mediation model, which proposes that economic pressure increases risk for emotional distress, which in turn increases marital conflict and subsequent marital distress, according to Conger and colleagues. Findings from a Czech study of households during the mid-1990s were generally consistent with U.S. 
research, indicating that family interactions intervene between economic stress and marital outcomes. In particular, economic pressure made Czech spouses irritable, and their tension exacerbated problem behaviors such as drinking and fighting. Resulting hostility then impacted marital stability, according to J. Hraba, F. Lorenz, and Z. Pechacova. Economic

strain experienced by families generates adverse consequences for marital happiness and well-being. Economic strain and pressures have a negative impact on partners’ emotions, which in turn have direct and indirect effects on marital quality through exacerbated conflict. Resilience to Family Stress Resilience and protective factors describe characteristics that help individuals and families adapt to challenges and adversity. Although resilience has typically been used to refer to psychological attributes of the individual, protective factors promoting such resilience encompass relational as well as psychological elements. Relationship resources include a strong, confident relationship with one’s partner and family, access to social support, and social integration in the form of occupying multiple roles. In marriage, positive adaptation to adversity includes continued marital satisfaction and the avoidance of marital distress. Furthermore, social support and mastery may buffer families from the deleterious effects of stress and strain. Social Support Researchers from a variety of disciplines have presented data supporting a relationship between social support and the ability to adjust to and cope with stress and change. Social support allows family members to adapt more easily to change and appears to protect them from both physiological and psychological health consequences of stress, according to McCubbin and Patterson. Marriage can serve a protective function during stressful times by providing social support, which includes such characteristics as companionship, security, cohesion, fondness, encouragement, and emotional support, according to L. Pasch, T. Bradbury, and K. Sullivan. Couples who exchange supportive behaviors when under economic stress have been found to experience less emotional distress than nonsupportive couples, according to Conger and colleagues. 
Although much of the literature has examined the role of social support in alleviating stress, findings on that role have been somewhat inconsistent. Its effects depend on whom the support comes from (the partner, other family members, or sources outside the family) and how it is perceived. It has even been suggested by M. Tucker that the absence of support

interferes more with adjustment than the presence of support facilitates it. Mastery Mastery is a construct important to the resilience process. People with a high degree of mastery have a history of successfully coping with stress and challenges in their lives. Such a history should also lead them to be more effective in adapting to future stress. Mastery was found to compensate for the direct effects of economic pressure on depressive symptoms, and also increased the likelihood that couples would successfully cope with economic problems, thus decreasing economic pressure over time, according to Conger and Conger. Families with a mastery orientation may believe that they can solve any problem and control any situation. These are families that have remained together through other adversity, may have hardier marriages, and are better equipped to deal with stress and economic strain. Marina Dorian Alliant University

See Also: Divorce and Separation; Family Therapy; PREPARE/ENRICH Programs.

Further Readings Boss, P. “Family Stress.” Handbook of Marriage and the Family, M. B. Sussman and S. K. Steinmetz, eds. New York: Plenum Press, 1987. Conger, R. D. and K. J. Conger. “Resilience in Midwestern Families: Selected Findings From the First Decade of a Prospective, Longitudinal Study.” Journal of Marriage and the Family, v.64 (2002). Conger, R. D. and G. H. Elder. Families in Troubled Times: Adapting to Change in Rural America. New York: Aldine de Gruyter, 1994. Conger, R. D., M. A. Rueter, and G. H. Elder. “Couple Resilience to Economic Pressure.” Journal of Personality and Social Psychology, v.76 (1999). Hill, R. Families Under Stress. New York: Harper and Row, 1949. Hraba, J., F. O. Lorenz, and Z. Pechacova. “Family Stress During the Czech Transformation.” Journal of Marriage and the Family, v.62 (2000). Lavee, Y., H. I. McCubbin, and J. M. Patterson. “The Double ABC-X Model of Family Stress and Adaptation: An Empirical Test by Analysis of Structural Equations With Latent Variables.” Journal of Marriage and the Family, v.47 (1985). Lazarus, R. S. Psychological Stress and the Coping Process. New York: McGraw-Hill, 1966. McCubbin, H. I. and J. M. Patterson. “Family Adaptation to Crisis.” In Family Stress, Coping, and Social Support, H. I. McCubbin, A. Cauble, and J. Patterson, eds. Springfield, IL: Charles C. Thomas, 1982. McCubbin, H. I. and J. M. Patterson. “The Family Stress Process: The Double ABCX Model of Adjustment and Adaptation.” In Social Stress and the Family: Advances and Developments in Family Stress Theory and Research, H. I. McCubbin, M. B. Sussman, and J. M. Patterson, eds. New York: Haworth Press, 1983. Peters, M. F. and G. Massey. “Chronic vs. Mundane Stress in Family Stress Theories: The Case of Black Families in White America.” Marriage and Family Review, v.6 (1983). Walker, A. J. “Reconceptualizing Family Stress.” Journal of Marriage and the Family, v.47/4 (1985). Walsh, F. Strengthening Family Resilience. New York: Guilford Press, 2006.

Family Therapy

In the first half of the 20th century, the main theory of psychotherapy was psychoanalysis, which held that past experiences and current intrapsychic forces are responsible for behavioral problems. Behaviorism as a method of psychotherapy was developed in the 1950s, and humanistic theories emerged in the 1960s. These new theories challenged the notion that a person’s past determined current behavior, and speculated that outward interpersonal forces determined personality (behaviorism) or that future strivings (humanistic theories) pulled people to behave in certain ways. Around the same time, several somewhat independent movements in the United States coalesced into the development of a new style of treatment—family therapy. The switch from thinking about individual patients to thinking about families was not easy. At the time, individual problems, whether experienced by children or adults, were thought of as emanating from an individual’s past trauma or unresolved conflict. Consistent with physical medicine, an outward problem was



[Photo caption: Postgraduate students of the marital and family therapy program at Loyola Marymount University in Los Angeles. Students learn an innovative program that leads to a master of arts in marital and family therapy, with specialized training in clinical art therapy. Students are trained to integrate their visual art backgrounds with psychotherapeutic skills as they work with a variety of clients.]

seen as a manifestation of an inner disease process. Freud had acknowledged the influence of the mother on the development of the child, but his treatment exclusively focused on the individual child or adult. It was difficult for practitioners who worked from an individual dynamic perspective to see how problems experienced by one family member were connected to problems experienced by another family member. Each person in the family was seen as a separate entity; if a child acted out at school and a mother was depressed—these symptoms were seen as independent of each other. The connection between the child’s symptoms and mother’s symptoms, which scientists now almost take for granted as related, was difficult for clinicians to see at the time. Psychoanalysis accounted for how the mother influences the child’s personality, but it did not go far enough. To address its shortcomings, psychologists launched the child guidance movement, in which parents were seen in therapeutic sessions along with their children. Clinicians discovered that as children showed improvement, their parents uncovered marital issues. Thus, marital therapy was born. A third type of therapy soon arose; group therapy developed from notions of the roles that people play in living groups, and stressed

the importance of clear communication and how group mores and rules evolved over time. The fourth influence on family therapy was research with schizophrenic families, conducted by Theodore Lidz and Lyman Wynne, who focused on communication disorders, and by Gregory Bateson, Carl Whitaker, Jay Haley, John Weakland, Donald Jackson, and Virginia Satir, who focused on the role of communication, feedback, and homeostatic mechanisms in the development and maintenance of problems. The fifth influence was the development of general systems theory, proposed by Ludwig von Bertalanffy. According to general systems theory, all living systems, including families, showed a wholeness that went beyond the sum of their parts, and maintained a steady state (homeostasis) of which symptomatic behavior was a part. All these movements coalesced into the family therapy movement. Basic Tenets of Family Therapy Although some of the original developers of family therapy insisted on the entire family being present for therapeutic sessions, modern family therapy is more insistent that the “patient” is the entire family, regardless of who shows up for sessions. That is, the family is viewed as a system, with rules, roles,
and feedback mechanisms, referred to collectively as cybernetics. Cybernetics details how ongoing behaviors, including symptoms, are oriented toward maintaining a homeostasis, a recurring pattern of interactions that maintains the stability of the family, particularly in times of stress. Family interactions are also governed by a “circular causality,” rather than linear causality. That is, regardless of what might have originally “caused” a problem behavior, that behavior is both a reaction to things going on in the family and an invitation to the family to continue that behavior. A simple stereotypical example is the alcoholic who claims that his wife makes him drink because she nags, and the wife who claims that she nags because her husband drinks. The family therapist is generally not interested in whether the husband started drinking first or the wife started nagging first; only that when he drinks, she nags, and when she nags, he drinks (a “vicious cycle”). A more complex example of circularity is a child whose acting out distracts dad from his depression, or mom from her work difficulties, or both mom and dad from their marital problems. The tension in the family that increases when mom and dad fight, or mom is having a terrible time at work, or dad’s depression is getting worse, leads to the child’s acting out. The child’s behavior pulls attention away from the “real” problem, which is ignored. Mom and dad are too busy taking care of the misbehaving child to pay attention to their individual or marital problems. As the child starts to behave properly, and there is no diversion for the adults, their problems once again threaten the status quo, and the child again acts out to protect the parents. Treating the child as if his or her behavior emanates intrapsychically, such as from low self-esteem or an unresolved early childhood conflict, will not solve the problem, because a minor’s behavior is so intimately interwoven with that of his or her parents. 
This is a key concept in family therapy that was espoused by Murray Bowen in his description of “triangles” or “triangulation,” a process by which a third person (e.g., a child) is sought to stabilize an unstable dyad (e.g., a marital relationship). Family therapy focuses almost exclusively on interpersonal interactions in the here and now. For the most part, family therapists do not spend too much time asking about the past because past events are dealt with differently by different families. It is thought that the content of the past is not
as important as the process; that is, how these events are dealt with in the present. Does it really matter, for example, who started the marital fight last week? What really matters is that the couple is dealing with it by avoiding each other, blaming each other, or taking out their frustrations on the children. Another basic tenet of family systems theory is that families have a structure, subsystems, and boundaries that guide their communication and behaviors. When this structure becomes unbalanced, such as when children assume parental roles or parents become overly involved in their children’s lives, someone in the family shows a symptom and the family structure becomes problematic. In family therapy, the family, not its individual members, is the “patient.” This differs from how family members usually think about problems, because they tend to identify one of the family members as the “patient.” Family therapists call that person the “identified patient,” but their focus is almost always on the whole family or on subgroups within the family (such as the father-son dyad or the triangle of mother-daughter-grandmother). Any clinician who has worked with families understands that each person has a different view about how the family members interact, and that people act differently when by themselves than when family members are present. So, learning about individual members can never reveal what the family is like, because the whole is more than the sum of its parts. Family therapists also know that, because of this, any change in one member of the family affects both the family structure and each member individually. Other concepts of family therapy include the notion of “first-order” versus “second-order” change. The former involves changing a behavior, such as parents attempting to make a child more compliant. The latter is an attempt to change the rules that govern a behavior, such as parents insisting that a child obey.
Family approaches to treatment focus on second-order change—changing the rules of a family system—so that similar problem behaviors do not emerge in the future.

Theories of Family Therapy
Although there are dozens of schools of family therapy, and even more when considering marital therapy, the major ones include structural therapy, psychoanalytic therapy, behavioral therapy, experiential therapy, communications therapy, family of
origin therapy, contextual therapy, strategic therapy, systemic therapy, and narrative therapy. Structural family therapy was developed by Salvador Minuchin in the 1960s in New York City, and later in Philadelphia. It focuses on how communication and interactions in a family define roles, subsystems, and boundaries. Some families tend to be fairly closed and rigid, with narrowly defined roles and a resistance to allowing exchanges of information. Other families are extremely flexible and open, so there is little sense of privacy or clarity of roles. Structural family therapy focuses on changing interactions, both in the session and for homework, so that families can achieve a new level of homeostasis that allows for optimal functioning. Psychoanalytic family therapy was developed by Nathan Ackerman in New York City, and incorporates the familiar techniques of individual analysis, such as dream interpretation, resolving unconscious conflicts, and transference, but in the context of the family. Behavioral family therapy is usually associated with couples work, such as the sex therapy developed by Masters and Johnson. Various forms of parent training and couples’ relationship enhancement, for example, are part of behavioral family therapy, as is teaching parents how to use reinforcement schedules, a token economy, and logical consequences. Experiential family therapy, as developed by Carl Whitaker and Gestalt family therapists such as Walter Kempler, focused on the here-and-now experience of families. Whitaker spoke about “seeding the unconscious,” which involves planting ideas in family members’ minds that might stimulate an awareness that is later brought up with the entire family. The communications school of family therapy is exemplified by the work of Virginia Satir. Satir focused on the clarity of communications between family members, and on whether the message sent by one person is the message received by the other person.
Satir believed in the healing power of touch, and in making sure that people in families felt comfortable asking directly for what they needed, such as a hug. Family of origin therapy was founded by Ivan Böszörményi-Nagy and Murray Bowen, who is sometimes referred to as the “father of systems theory.” This technique focuses on how psychological disorders are intergenerationally transmitted, and
how therapy should focus on helping adults separate or “individuate” from their families of origin. In doing so, adults are typically asked to make a “journey home” to deal with unresolved issues. Contextual family therapy is similar to Bowen’s theory in its focus on one’s family of origin. What is different is that James Framo, its originator, held long family-of-origin sessions over the course of a few days, asking family members to bring up with their families unanswered questions from childhood. Strategic family therapy, developed by clinicians such as Jay Haley and Cloe Madanes, is another communication theory in which the therapist’s job is to come up with tasks for the family that challenge their conclusions about each other. These tasks might be direct, such as asking mom to discipline instead of dad, or indirect or “paradoxical,” such as asking a person to have more of his or her symptom, or asking a family member to insist that the symptomatic person have the symptom. Systemic family therapy was developed in Milan, Italy, by the Milan Group of Palazzoli, Cecchin, Boscolo, and Prata, and was practiced in the United States by such therapists as Peggy Papp at the Ackerman Institute in New York. In systemic family therapy, the therapist makes use of a consulting team who sits behind a one-way window and comments on the therapeutic action. Usually, the therapist takes a position with the family that change is okay, and the consulting team warns the therapist that change would be dangerous for the family. Narrative family therapy, created by Michael White and David Epston, proposes that a person, family, or societal conception of health and illness results in family members behaving consistently with these ineffectual concepts. Therapy focuses on helping people change their belief systems so as to open up new possible actions for the person. 
Since the 1990s, other diverse approaches have gained some popularity, such as post-modern family therapy, solution-focused therapy, cognitive-behavioral therapy, integrative therapy, and multicultural and international therapy.

Effectiveness of Family Therapy
A heartening trend is that more and more outcome and process studies are being conducted on the efficacy of family therapy, and the results have provided an evidence base for family treatments. Family treatments have been supported for use with addictions,
childhood behavioral problems, marital problems, psychosomatic disorders, emotional and physical domestic abuse, and psychoeducation in families with a schizophrenic member. Regardless of the specific theory, family therapy approaches shift attention away from individual problems and the search for why something is happening, and toward relationships, patterns of interaction, and an emphasis on what is happening and how it is happening. Family therapists spend years in training and supervision to help them make this distinction.

Neil Ribner
Jason Ribner
California School of Professional Psychology

See Also: American Association for Marriage and Family Therapists; Bowen, Murray; Child Rearing Experts; Dr. Phil; Family Development Theory; Family Life Education; Family Stress Theories; Groves Conference on Marriage and the Family; Intensive Mothering; Intergenerational Transmission; National Council on Family Relations; Overmothering; Parent Effectiveness Training; Psychoanalytic Theories; Satir, Virginia; Skinner, B. F.; Systems Theory.

Further Readings
Gladding, Samuel T. Family Therapy: History, Theory, and Practice. 5th ed. New York: Pearson, 2010.
Minuchin, Salvador, and H. Charles Fishman. Family Therapy Techniques. Cambridge, MA: Harvard University Press, 1981.
Nichols, Michael P. Family Therapy: Concepts and Methods. 10th ed. New York: Pearson, 2012.

Family Values

Since the 1960s, many Western industrialized nations have experienced significant demographic changes. Two key changes have been the decline in marriage and fertility rates. Fertility rates in most European countries are below the replacement level, and Europe has also seen increases in divorce, cohabitation, nonmarital births, and voluntary childlessness. What about the United States? Two generations ago, the typical American family consisted of a breadwinner father, a full-time homemaker mother, and three or four children. In 1972, 60 percent of all U.S. families followed this model, while in 2011 only 29 percent did. The percentage of children under age 18 living with two parents fell from 77 percent in 1980 to 65 percent in 2011. Single-parent U.S. households increased from 11 percent of all households in 1970 to 31 percent in 2011. Along similar lines, the percentage of Americans between ages 30 and 44 who have never been married has significantly increased, to 31 percent. Understanding these changes sheds light on the link between attitudes about family and behavior. Have these changes in family composition and roles resulted in a dramatic change of values? Not necessarily. Several different theories have been proposed to explain these demographic changes, such as the deinstitutionalization of marriage and changes in ideologies. However, research studying changes in family values is relatively scarce. Some studies examine changes in family-related attitudes over time in the United States. A few trace family attitudes in different countries at a given time point, while others have specifically looked at attitudes toward marriage and children internationally and over time. Overall, these studies suggest that changes in family attitudes and behavior parallel trends in socialization values, religious beliefs, political allegiances, and support for civil liberties. Some other studies point out trends toward individual autonomy and tolerance of a diversity of personal and family behaviors, as reflected in increased acceptance of divorce, premarital sex, unmarried cohabitation, remaining single, and choosing to be childless. At the same time, these studies suggest that a large percentage of young people believe that marriage and family life are important, and that they plan to marry and raise children.

Theories Explaining Changes in Attitudes
Zoya Gubernskaya identified three types of theories that explain the trend toward more egalitarian family values: post-materialism, a second demographic transition, and structural and demographic changes. According to Ronald Inglehart and Wayne Baker, one explanation for the changes in family attitudes is a shift from materialist and traditional values to postmaterialist and secular-rational values. According to the postmaterialist argument, economic security liberates individuals from dependence on family and community. Those with such postmaterialist values are also expected to have high tolerance for abortion, divorce, and homosexuality, and low levels of support for the importance of family life and children, male dominance, and traditional gender roles. Inglehart argues that the shift from materialist to postmaterialist value priorities results mainly from cohort replacement. Demographic transition theory, popularized by Frank Notestein, is used by demographers to explain the trends from high mortality and high fertility to low mortality and low fertility. Economic factors are the key to understanding this trend. Its proponents argue that there is a linear relationship between economic development and population change, and that all traditional and agricultural countries will move toward being nontraditional and industrialized during this demographic transition. The second demographic transition theory (SDT), considered the very last stage of demographic transition, is specifically associated with later marriage, delays in childbearing, and the belief in a reduced ideal family size. Overall, this theory links changes in attitudes toward family issues to a global shift in values, but it mainly insists on a role for ideology independent of economic factors. According to the SDT, the rise of individualistic values is incompatible with traditional marriage. Individual self-realization has become a priority, even in marriage, and career and education have trumped plans to start a family. Overall, SDT stresses the importance of increased levels of education and secularization as predictors of changes in attitudes and values.

Finally, structural and demographic changes can either prevent individuals from realizing traditional norms regarding marriage and childbearing, or open up new opportunities that compete with or even outweigh the benefits of traditional marriage. Some examples are the expansion of the educational system, improvement in birth control, and the rise in women’s labor force participation. All of these have facilitated changes in family formation behavior and attitudes. As more and more people choose to cohabit while pursuing education or establishing a career, living together as a couple but delaying marriage and childbearing is
becoming increasingly accepted, a pattern that contributes to further erosion of traditional family attitudes and values.

Research on Attitudes Toward Marriage and Children
Gender differences strongly impact attitudes toward marriage and children, but the overall effect is not clear. Rachel Jones and April Brayfield confirm that women are less likely than men to view children as central, and are more likely to think that people, especially women, can lead satisfying lives without marriage. Some other research has suggested that motherhood is viewed as the primary form of parenthood, an effect that is stronger for women, and that women are more likely than men to regard children as a significant aspect of their identity. The support for traditional family views is found to increase with age, and having less education increases the odds of both women and men supporting the view that women need children in order to be fulfilled. Individuals with more egalitarian gender ideology were found to be less likely to consider children as central to fulfillment. Specifically, this variable was found to best explain the variations in pro-child attitudes across six European countries. Overall, religiosity most significantly correlates with attitudes toward cohabitation and marriage. Specifically, individuals who are more religious are more likely to be conservative in their attitudes toward cohabitation and marriage. Similarly, a more conservative political ideology was found to be associated with more restrictive attitudes toward social policies affecting sexual minorities. More liberal political ideology was linked with more favorable attitudes toward marriage equality, as measured by support for the legalization of same-sex marriage. Using the same argument, more liberal political ideology is expected to be correlated with more egalitarian attitudes toward marriage and children.
Another individual-level variable that is expected to predict attitudes toward marriage and children is immigrant status. The precise relationship, however, is not clear. Typically, immigrants migrate from countries with less egalitarian cultures to countries with more egalitarian cultures, at least in terms of gender. Based on European Social Survey data, one recent study shows that immigrants
in Europe originating from countries with very traditional gender relations support gender equality less than members of mainstream society. Considering the expectation that those with more egalitarian gender ideology have more egalitarian attitudes toward marriage and children, it is expected that immigrants will have less egalitarian attitudes toward marriage and children compared to natives. The same study, however, found that immigrants adapt their gender ideology to the standards of their residence country, and that the origin context loses force over time. This would also suggest that over time, there might be no differences between natives and immigrants in regard to their attitudes toward marriage and children. Prior research has also found that employed individuals have less traditional attitudes toward family and children. Married people are particularly likely to think that marriage is important for life satisfaction. Those who are either married or widowed have more traditional views about marriage and children compared to those who are either separated or divorced, or never married. Prior research has also suggested that those who live in urban areas are more likely to have access to family-planning services. Living in urban areas is associated with fewer unintended pregnancies, more contraceptive use, and less conservative attitudes about intimate behavior without marital commitment and about having sex with one’s future spouse. Along with the fact that people in urban areas tend to be more educated than those in rural areas, those who live in urban areas will have more liberal attitudes about marriage and children. Overall, this discussion has focused on relationships between individual characteristics and attitudes toward marriage and children, without considering the cross-cultural differences. 
Some variation, however, is expected across national borders, both in people’s views on marriage and children and in the nature of the relationships between individual characteristics and attitudes. Certain societal characteristics, such as political regimes, educational systems, family laws, demographic trends, welfare state approaches, specific family policies, and media campaigns, may partly explain some of these variations.

Deniz Yucel
William Paterson University

See Also: Breadwinner-Homemaker Families; Breadwinners; Childless Couples; Cohabitation; Collectivism; Defense of Marriage Act; Focus on the Family; Homemaker; Individualism; Mothers in the Workforce; Promise Keepers; Same-Sex Marriage.

Further Readings
Gubernskaya, Zoya. “Changing Attitudes Toward Marriage and Children in Six Countries.” Sociological Perspectives, v.53 (2010).
Inglehart, Ronald. Modernization and Postmodernization: Cultural, Economic, and Political Change in 43 Societies. Princeton, NJ: Princeton University Press, 1997.
Inglehart, Ronald, and Wayne E. Baker. “Modernization, Cultural Change, and the Persistence of Traditional Values.” American Sociological Review, v.65 (2000).
Jones, Rachel K., and April Brayfield. “Life’s Greatest Joy? European Attitudes Toward the Centrality of Children.” Social Forces, v.75 (1997).
Notestein, Frank. “Population: The Long View.” In Food for the World, T. Schultz, ed. Chicago: University of Chicago Press, 1945.

Fatherhood, Responsible

There has been growing interest in fathering roles, expectations, and behaviors in the research literature and social policy. Over the last 30 years, research has sought to understand father–child relationships, father influence on child developmental outcomes, and father influence on family well-being. Specifically, research has shown the importance of father involvement in children’s educational attainment, the economic stability of the family, decreases in juvenile delinquent behaviors, and children’s emotional well-being. Positive father involvement has been correlated with secure attachment, regulation of negative feelings, high self-esteem during adolescence, and higher academic achievement. Some research highlights the impact of absentee fathers, whereas other research examines the changing roles of fathers, from sole breadwinner to nurturing or responsible father. Much of the research has also focused on low-income fathers and families. Concurrently, social policy has attempted to address poor child and familial outcomes through focused
attention on the role of fathers. Social policy has taken a similar course when compared to the fathering research agenda. Government initiatives address issues such as economic stability, father-child relationships, and barriers to father involvement. Fatherhood consists of cultural ideology and socially constructed roles and expectations; fathering behaviors, on the other hand, refer to what fathers actually do. It has been suggested that fathering is a social construction that is constantly redefined. Historically, cultural expectations cast fathers as breadwinners for the family. Fathers, while physically present in the home, played a minimal role in the care of their children. Additionally, fathers’ level of interaction overall was also limited. Research suggests that in the last two decades, there has been a shift in fatherhood expectations from teacher, to breadwinner, to sex role model, to new nurturing father. Moreover, a few trends have also affected the current societal expectations of fathers. The first trend is the dramatic increase over the 20th century in the number of mothers entering the workforce; in the 1950s, about 12 percent of married women with children were in the workforce. The second trend is a rise in father absence and the increase in female-headed households. In 2009, there were 19.6 million children living in female-headed households. In 2011, children living in female-headed households had a poverty rate of 47.6 percent, more than four times the rate for married-couple families.

Historical Typologies of Fathers
Historical characterization of the father role demonstrates the complexity and multifaceted nature of the role. While roles emerged during different periods, the emergence of one characterization did not necessarily mean the others were not simultaneously present. An early typology of fathering was the moral teacher or guide. This father role has roots in the Puritan era through the colonial period.
Fathers during this time were viewed as the parent responsible for the moral development of their children. This development was primarily based on biblical values with the goal of rearing children to maintain Christian values. Good fathers were viewed as those who served as role models for good Christian living with children who understood biblical teachings. The next role, breadwinner, developed as a result of industrialization during the mid-19th century
through the Great Depression. Prior to industrialization, mothers and fathers shared in providing for the family; however, changes in agriculture and piece work done in the home led to a separation between housework (including child care) and work in the labor force. In the 1930s and early 1940s, fatherhood literature focused on fathers’ role as sex role models, particularly for their sons. The roles of moral guide and breadwinner also remained present during this time. In the mid-1970s, literature highlighted the new nurturing father. This father type was characterized as a “good father,” or an active father, if he was involved with daily childcare. Furthermore, during this time, and even more so in the 1980s, the “new fatherhood” typology emerged. The level and quality of father involvement received more attention. A popular literature review concluded that father involvement increased from the mid-1970s through the early 1990s. Father involvement rose by over 40 percent in the early 1990s, compared to the late 1970s. Yet, the typology of breadwinner remains ingrained in research on father involvement and paternal influence. While there has been an increase in father involvement, a substantial number of children are born out of wedlock, reared in families disrupted by divorce, and/or raised in female-headed households, and the outcomes for these children can be problematic. Social policy from the early 1990s to the present has created programs to address the challenges to the family system that result from poor father involvement. The typology of the “responsible father” emerged within the literature and social environment as a response to this adverse outcome.

Responsible Fathering in the Literature
Research on father involvement began with the pioneering work of Michael Lamb in 1976.
Much of the work in this area uses Lamb’s work from 1976 and beyond as a foundation to expand existing typologies. In the area of responsible fathering, W. Doherty, E. Kouneski, and M. Erickson, along with James Levine and Edward Pitt, are credited with the delineation of responsible fatherhood. Levine and Pitt suggested that a father who behaves responsibly waits until he is emotionally and financially prepared to have
a child; establishes legal paternity; actively shares with the child’s mother in the continual care of the child, prenatally and beyond; and economically provides for the child and for the mother prenatally. Doherty and colleagues expounded on Levine and Pitt’s work, and developed the conceptual model of responsible fathering. This framework was developed based on prior research, prior theoretical frameworks, and the notion that fatherhood should be examined through an ecological lens. This framework allows an examination of behaviors and attitudes, regardless of the living arrangements between parents. Specifically, the model underscores the individual factors of the father, mother, and child; the quality of the coparenting relationship between the mother and father; and contextual factors in the social environment such as employment opportunities, cultural expectations, and social support.

Emergence of Responsible Fathering in Social Legislation in the United States
The emergence of fathering as a national policy issue can be traced back to the creation of a confidential report titled “The Negro Family: The Case for National Action,” also known as the Moynihan report, created by then Assistant Secretary of Labor Daniel Patrick Moynihan in 1965, under President Lyndon B. Johnson. The purpose of this report was to examine the causes of the growth in single, female-headed households in the black community. Because the report was criticized for its pathological presentation of the black family, federal programming based on it did not emerge until the 1990s. Under the Bill Clinton administration, findings based partly on data from the Moynihan report led to the creation of the Fatherhood Initiative. This initiative was created to build community programs to address the economic and parental aspects of fatherhood.
In 1995, President Clinton issued a charge for federal agencies to re-envision programs to include components that strengthen father roles in the family and focus attention on father contributions to overall child well-being. The Fatherhood Initiative emerged after a series of workshops held from 1996 to 1997. The initiative created an opportunity for the exchange of information and for collaboration among professionals in academe, research, policy, education, and social service practice. The focus was on understanding the multiple factors that contribute to barriers to consistent fathering. Moreover, responsible fathering
was identified as a key factor in the development of prosocial behavior, which also includes meeting the economic needs of families. This initiative advanced several recommendations about father involvement, including gathering data on motivation; the impact of father involvement on child outcomes in racially and ethnically diverse groups; and the impact of nonmarital and marital relationships on children. However, this first initiative was funded through private grants and foundation support. Critical legislation reinforcing the notion of responsible fathering and providing funding for community program development was created under each president since Bill Clinton. The focus of each presidential Fatherhood Initiative has differed, but they share the key goal of helping fathers access employment and provide financial as well as social support for their children. Moreover, the initiative sought to gather information on unmarried, low-income fathers. The promotion of healthy marriage was also a key component of many of these initiatives. This initiative has continued for more than 16 years, through the presidency of Barack Obama. The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996, created under the Clinton administration, instituted time-limited welfare benefits with the goal of returning welfare mothers to work. Additionally, PRWORA focused on paternity establishment, child support enforcement, and stable employment for fathers. Other legislation under President Clinton included the Fatherhood Counts Act of 1999 and the Responsible Fatherhood Act of 1999, which proposed funding for media campaigns to promote marriage, successful parenting, and financial responsibility for children by fathers. In 2000, a new Responsible Fatherhood Initiative was introduced, promoting work and an increase in child support payments. Under President George W. Bush, the Healthy Marriage Initiative and Marriage Protection Week were launched in 2003.
This initiative promoted responsible childrearing and strong families through marriage. In President Bush's second term, congressional spending to promote responsible fatherhood and marriage, anger management, and communication skills reached $500 million over five years. The Office of Faith-Based and Community Initiatives was created to work with urban congregations and community organizations to provide services to children in homes with absentee fathers.

President Barack Obama's Fatherhood Initiative took cues from the previous initiatives but doubled the budget by seeking an additional $500 million. President Obama's plan also created partnerships with other government departments, including the Department of Justice and the Office of Faith-Based and Neighborhood Partnerships. Unlike the earlier initiatives, President Obama's focused on the connection between fatherhood and the criminal justice system, creating programs that supported incarcerated parents, strengthened child support and family support, and addressed issues of domestic violence. Moreover, partnerships with the Departments of Commerce and Veterans Affairs focused on empowering fathers in the marketplace and supporting military and veteran fathers.

Current Program Practices in Responsible Fathering
Although delivered in a variety of ways, the overall goal of responsible fatherhood programs is to strengthen the father-child relationship and improve outcomes for children and families, particularly within the context of marriage. Federal funding is provided via the Deficit Reduction Act (DRA) of 2005, which appropriated $150 million in discretionary grants annually from 2006 to 2010 to implement the Healthy Marriage and Responsible Fatherhood Initiative. The act was reauthorized via the Claims Resolution Act of 2010, and additional extensions carried federal funding through the first six months of fiscal year 2013. The DRA outlined allowable activities for the Responsible Fatherhood Program in four specific areas. First, activities should promote or sustain marriage through such means as counseling, mentoring, dissemination of information about the benefits of marriage and coparenting, relationship skills training, skills-based marriage education, and financial planning.
Second, activities must promote responsible parenting through means such as skill-based parenting education and the promotion of child support payment. Third, activities should promote economic stability by helping fathers better their economic positions. Last, activities should promote responsible fatherhood through the dissemination of information and the promotion and development of programming.

Fatherhood, Responsible


Conclusion
The notion of what it takes to be a responsible father has been the subject of cultural, social, and political debate. Much of this conversation seeks to understand the roles of fathers with their children and families over time. Two parallel themes run through the literature, policy, and programming on responsible fatherhood. At the foundation is the notion that a responsible father meets, or strives to meet, the economic needs of his children and family. The breadwinner expectation has been attached to fathers for decades and plays out in their perceptions of fatherhood. Data on cohabiting couples indicate that males who are locked out of the labor force, or whose participation is erratic, are often in strained parental relationships. Moreover, fathers question their effectiveness in the role when they fail to meet the financial needs of their children. Quality coparenting is another theme across scholarship, policy, and programs. Data indicate that the quality of the father-mother relationship plays a critical role in father involvement for resident and nonresident fathers. Moreover, parents' ability to avoid or address conflict is tied to father involvement. The father-mother relationship appears to be the most salient predictor of continued paternal involvement.

Felicia Law Murray
Shann Hwa Hwang
Texas Woman's University

See Also: Coparenting; Fragile Families; Healthy Marriage Initiative; Moynihan Report; New Fatherhood; Welfare Reform.

Further Readings
Bronte-Tinkew, J., M. Burkhauser, and A. Metz. "Elements of Promising Practices in Fatherhood Programs: Evidence-Based Research Findings on Interventions for Fathers." Fathering, v.10/1 (2012). Cabrera, N. and H. E. Peters. "Public Policies and Father Involvement." Marriage & Family Review, v.29/4 (2000). Cabrera, N., C. S. Tamis-LaMonda, R. H. Bradley, S. Hofferth, and M. E. Lamb. "Fatherhood in the 21st Century." Child Development, v.71/1 (2000). Carlson, M. J. and S.
S. McLanahan. “Fathers in Fragile Families.” In The Role of the Father in Child


Development, M. E. Lamb, ed. Thousand Oaks, CA: Sage, 2010. Doherty, W. J., E. F. Kouneski, and M. F. Erickson. "Responsible Fathering: An Overview and Conceptual Framework." Journal of Marriage and the Family, v.60 (1998). Fatherhood Initiative. Improving Opportunities for Low-Income Fathers. Washington, DC: U.S. Department of Health and Human Services, 2005. Lamb, M. E. "The History of Research on Father Involvement: An Overview." Marriage & Family Review, v.29/2–3 (2000). Levine, J. A. and E. W. Pitt. New Expectations: Community Strategies for Responsible Fatherhood. New York: Families and Work Institute, 1995. Pleck, E. H. and J. H. Pleck. "Fatherhood Ideals in the United States: Historical Dimensions." In The Role of the Father in Child Development, M. E. Lamb, ed. New York: Wiley, 1997. Roy, K. M., N. Buckmiller, and A. McDowell. "Together but Not 'Together': Trajectories of Relationship Suspension for Low-Income Unmarried Parents." Family Relations, v.57 (2008). Weaver, J. D. "The First Father: Perspectives on the President's Fatherhood Initiative." Family Court Review, v.50/2 (2012).

Father’s Day The first recorded celebration of Father’s Day occurred in 1908, but Father’s Day did not become a national holiday until 1972 (100 years after Mother’s Day became a national holiday). Many people have taken credit for beginning this holiday, but one woman—Sonora Smart Dodd—has been recognized for her work in establishing it. In fact, in 2010, the House of Representatives honored her memory on the 100th anniversary of the Father’s Day event she organized. There are differences, however, in how individuals celebrate Mother’s Day and Father’s Day. History Historical accounts show that Father’s Day celebrations occurred in a few different locations in the United States, beginning in the early 1900s. The first Father’s Day celebration on record occurred on July 5, 1908, in Fairmont, West Virginia, at the Williams

Memorial Methodist Episcopal Church South. The event was organized by Grace Golden Clayton, who had recently lost her father in the Monongah coal mining disaster, which left about 1,000 children fatherless. Clayton requested that Pastor Robert Thomas Webb honor those fathers lost in the disaster on the Sunday closest to her father’s birthday, which happened to be July 5. This event alone did not inspire a nationwide celebration of Father’s Day because it was primarily a celebration of local significance. Furthermore, historians believe that the date of this celebration was too close to Independence Day for it to gain traction. A second celebration occurred on June 19, 1910 in Spokane, Washington, at the YMCA. Sonora Smart Dodd, one of six children raised by a single father, was inspired to host a celebration for fathers after hearing a Mother’s Day sermon. Historians describe how Dodd requested that Father’s Day be celebrated with a sermon on the first Sunday of June, but that pastors needed until the third Sunday of the month to prepare their sermons. Despite Dodd’s attempts, the celebration failed to catch on nationwide. A number of other individuals attempted to establish Father’s Day. In 1911, social activist Jane Addams tried to organize a Father’s Day in Chicago. In 1912, Methodist pastor J. J. Berringer celebrated Father’s Day in Vancouver, Washington. Harry C. Meek of the Lions Club International Group believed that he was responsible for Father’s Day, having celebrated it in 1915. Historical accounts have found that he chose the third Sunday in June for Father’s Day because it was his birthday. The Lions Club has referred to him as the “Originator of Father’s Day.” Presidents also became involved with Father’s Day. President Woodrow Wilson visited Spokane’s celebration in 1916 with the purpose of proclaiming it a national holiday, but Congress denied this request because of the belief that the holiday would become too commercial. 
In 1924, President Calvin Coolidge suggested that Father's Day become a nationally observed holiday. He never took the steps necessary to make the holiday official, though, perhaps because of earlier failed attempts to pass a proposal through Congress.

A National Holiday
Father's Day remained sporadically celebrated in various communities around the country during




the 1920s. In the 1930s, Sonora Smart Dodd began advertising the holiday to masculine retail outlets, such as clothing stores and tobacconists, stressing that if Father's Day became more commercialized, retailers would stand to earn considerable money. In 1938, the New York Associated Men's Wear Retailers founded the Father's Day Council, a group focused on promoting gifts suitable for children to present to their fathers, thereby inaugurating the commercialization of the holiday. Despite this group's efforts, Father's Day was still not widely celebrated.

In 1957, Senator Margaret Chase Smith of Maine proposed that Congress make Father's Day a formal holiday. She argued that withholding congressional support for Father's Day sent the message that only mothers mattered and should be honored. In 1966, President Lyndon B. Johnson proclaimed that Father's Day be celebrated on the third Sunday of June, and President Richard Nixon finally made it an official holiday in 1972. By the 1980s, the Father's Day Council claimed that Father's Day was comparable to Christmas in terms of gift buying. This was an overstatement, however, as individuals spend twice as much on mothers for Mother's Day as they do on fathers for Father's Day. In 2010, Sonora Smart Dodd's home was restored and opened for the 100th anniversary of her first celebration of Father's Day.

Family scholars claim that Mother's Day and Father's Day serve to keep distinct the ways in which mothers and fathers parent. There are differences in how families celebrate these two holidays. For example, more time is spent celebrating Mother's Day than Father's Day, and more mothers than fathers are taken out to eat on their day. Fathers are less likely to receive gifts, although mothers and fathers are equally likely to receive a card. And while Mother's Day is the busiest day of the year for phone calls, Father's Day is the busiest day for collect calls.
Fathers are more likely than mothers to enjoy their day, however, despite receiving fewer gifts and shorter celebrations. Some scholars believe that more is done to celebrate mothers because of the greater importance and value that American culture places on mothers compared to fathers.

Jessica Troilo
West Virginia University

See Also: Fatherhood, Responsible; Mother's Day; New Fatherhood; Primary Documents 1972.


Further Readings Cote, Nicole Gilbert and Francine M. Deutsch. “Flowers for Mom, a Tie for Dad: How Gender Is Created on Mother’s and Father’s Day.” Gender Issues, v.25 (2008). LaRossa, Ralph and Jaimie Ann Carboy. “‘A Kiss for Mother, a Hug for Dad’: The Early 20th Century Parents’ Day Campaign.” Fathering, v.6 (2008). LaRossa, Ralph, Charles Jaret, Malati Gadgil, and G. Robert Wynn. “Gender Disparities in Mother’s Day and Father’s Day Comic Strips: A 55-Year History.” Sex Roles, v.44 (2001).

Fathers' Rights

The fathers' rights movement (FRM) consists of a loose constellation of groups, movements, and individuals who act on behalf of the interests of fathers, especially those who do not live with their children. Those who participate in the FRM believe that men in their role as fathers experience discrimination in the family law system because the system is dominated by feminists. Many fathers' rights groups represent men who feel victimized, frustrated, and angry about the process of undergoing separation and divorce, and who believe that the legal system is biased against them. Fathers' rights groups emerged out of the divorce reform movement in the 1960s, the backlash against feminism in the 1990s, and right-wing Christian promarriage groups like the Promise Keepers. Generally, the FRM represents the efforts of those who believe that fathers should have authority over their children and their children's mothers. The FRM is committed to a discourse of formal legal equality that is also often a discourse of power, one that legitimates and maintains the status quo of the private heterosexual family and the genetic link between men and children. The FRM's logic derives from the idea that women and the children they produce are the property of men.

Shifting Demographics
The legal relationship between fathers and their children has undergone a profound shift over the last 50 years. For example, the number of births to


unwed parents has now overtaken divorce as the primary cause of father absence. The percentage of births to unmarried women increased from 18 percent of total births in 1980 to 41 percent in 2011. In 2010, 75 percent of women under the age of 25 who were giving birth for the first time were unmarried, and approximately 25 percent of unwed mothers were cohabiting with a partner. In 2012, 7.8 million cohabiting couples resided in the United States, a 170 percent increase from 2.9 million in 1996. The number of same-sex couples raising children has increased by 50 percent in the last 10 years, and almost 2 million children now live with same-sex parents. In 1970, 40 percent of U.S. households were made up of married couples with children under 18; that figure dropped to 20 percent in 2012. Also in the United States, 40 to 50 percent of all marriages now end in separation or divorce, affecting over 1 million children annually.

The role of women in the family has also significantly changed in the last 50 years. Today, nearly 75 percent of women with children at home engage in full- or part-time paid work, quadruple the number who did so in 1950. Also, a full 40 percent of women today are their families' sole or primary income generators, up from 11 percent in 1960.

What the Fathers' Rights Movement Wants
The FRM has been understood as a response to feminism and shifting family demographics, and as a strategy for maintaining power. The FRM uses the legal system and legislative process to advocate for the following ends: the dismantling of legal child support (lowering or removing child support awards), an end to no-fault divorce, biological fathers' mandatory consent for child adoption, and mandatory joint custody of children. The movement fights for extended waiting periods for divorce, forced marriage counseling, and the linking of Temporary Assistance for Needy Families (TANF) to marital status (forced marriage).
Fathers’ rights advocates fight against automatic wage garnishment and the requirement that men identify their employment in court documents, as well as punishments associated with child support arrearages, such as the suspension of professional and drivers’ licenses and imprisonment for nonpayment of child support. They also protest the use of private collection agencies for child support and

data-sharing computer systems used by states to track child support payments. The FRM has garnered a following with its ability to use the media to successfully promote the idea that women have more rights than men in the areas of abortion rights and family law. Fathers' rights groups want to increase father power by changing divorce, support, and custody laws. These groups argue that mother-headed single-parent families have produced decades of child poverty and have increased delinquency and crime. Most groups focus on fighting for their "rights" rather than on their actual relationships with children. Fathers' rights groups are also active in efforts to reform no-fault divorce laws.

Fighting the Family Law System
Currently, around 1 million children in the United States are affected by divorce each year. In 72 percent of cases in which parents have a formal written divorce agreement, mothers retain sole custody; fathers are awarded sole custody in 9 percent of cases, and joint custody is awarded in 17 percent. Fathers' rights groups support presumptive joint physical custody, regardless of fathers' history of childcare, and actively fight against the primary caretaker rule, which awards custody based on past care and concern for children. Fathers' rights groups argue that the primary caretaker theory is fixated on mothering and ignores fathering, even though the theory is gender-neutral. Fathers' rights groups fight against any interpretation of the "best interest" of the child that excludes the father. They also oppose children's visiting centers, which they charge are used as instruments to marginalize fathers; the weight given to children's opinions in custody determinations; and the obligation to employ a lawyer for divorce proceedings.
Fathers' rights groups seek to increase father power by insisting on settling family conflicts outside of court, relying on family mediation rather than adversarial means, and supporting the parent who does the most to ensure balanced access to both parents following a divorce. Other FRM goals are protected child exchanges, free access to genetic testing to determine paternity, and a "permission to move" law that redefines relocation as a change in the principal residence of a child of 30 miles or more from the child's current principal residence. Many fathers' rights advocates



also argue that the spouse filing for divorce must be regarded as the one willing to abandon the children to the other parent. They want to punish "false" accusations of child or spousal abuse with imprisonment for felonious perjury. Fathers' rights proponents insist that domestic violence charges are false and should have no impact on child custody determinations, even if proven true. They also oppose the Violence Against Women Act and the Child Support Enforcement Act, believing both acts to be unconstitutional and primary weapons in what they say is the "war against fathers." Additionally, fathers' rights groups support expanding claims in divorce proceedings of parental alienation syndrome (PAS), a condition supposedly brought about by parents, usually mothers, who alienate their children from the other parent. Even though PAS is not recognized by the American Medical Association or the American Psychological Association, fathers' rights advocates are using PAS claims in divorce cases in an attempt to switch custody to fathers, eliminate or reduce child support payments, and discredit charges of child abuse. A national PAS foundation has been established in Washington, D.C., with an advisory board made up of people connected to the father power movement and funded by federal grants.

Abortion and Family Laws as a "Deprivation of Property"
Fathers' rights activists define unwanted pregnancies as a "deprivation of property" because of what they allege is coerced child support. In addition, FRM groups argue that because women can choose whether to give birth to a child and men cannot choose whether to financially support that child, women receive more rights than men in the area of reproduction.
The FRM maintains that men and women have equal connection to a fetus, and that abortion rights must be universal rights that are applicable to men as well as women, overlooking the fact that because men do not become pregnant, they will never have the need for an abortion. Furthermore, FRM proponents claim that women have more rights and men have fewer regarding abortion because women can choose to terminate a pregnancy, but men cannot have a “male abortion” and terminate social and financial obligations to their child.


In addition, fathers' rights advocates argue that because men must earn money by working, and working is a surrender of the body, their bodies too are put at risk by a pregnancy. The FRM seeks to portray the experience of pregnancy as the same for both women and men. Its request for a legal right for fathers to terminate a pregnancy conceptualizes children (and fetuses) as the property of men. The FRM also fights for a father's right to control the life of both a fetus and the subsequent child. Movement activists maintain that a man who impregnated a woman should have the right to force that woman to carry a pregnancy to term because the fetus belongs equally to both parents. They also argue in favor of a gender-neutral definition of reproductive freedom, and ignore the consequences to women, whose bodies men often treat as property. Instead, the FRM casts wanted pregnancies that are aborted by women as a deprivation of men's property because the men have lost their children.

FRM's Changing Public Message
Throughout the 1990s, fathers' rights groups emphasized the discrimination that fathers experienced in family court, and presented separated fathers as victims of an antifather court system. However, the FRM experienced little success convincing legislatures and courts throughout the country that the family court is antimale. The FRM was unsuccessful in its goal of increasing father power by fighting for family law changes that would favor fathers, including major reforms in divorce, support, and custody laws. The movement's argument that single-parent families (meaning mother-only families) produced decades of child poverty, delinquency, and crime that could only be remedied by reforming no-fault divorce laws, reducing child support, and giving fathers legal control of their children was unpersuasive.
As a result, by the early 2000s, fathers’ rights groups realized that because their rhetoric did not frame fathers as good and responsible parents who had only the best interests of their children at heart, their arguments had to change. Fathers’ rights groups in the 2000s began to talk about children’s well-being, and shifted their rhetoric from notions of fathers’ rights to notions of parental “fairness.” However, their rhetoric of shared parenting only


highlighted their inability to come to terms with fathers' overall low levels of actual shared parenting.

Lynn Comerford
California State University, East Bay

See Also: Abortion; Alimony and Child Support; "Best Interest of the Child" Doctrine; Child Custody; Child Support; Coparenting; Custody and Guardianship; Deadbeat Dads; Divorce and Separation; Domestic Ideology; Fatherhood, Responsible; Feminism; Gender Roles; No-Fault Divorce; Parenting; Shared Custody; Social Fatherhood; Stepparenting.

Further Readings
Hamilton, B. E., J. A. Martin, and S. J. Ventura. "Births: Preliminary Data for 2011." National Vital Statistics Reports, v.61/5 (2012). Khader, Serene. "When Equality Justifies Women's Subjection: Luce Irigaray's Critique of Equality and the Fathers' Rights Movement." Hypatia, v.23 (2008). Mason, Mary Ann. From Fathers' Property to Children's Rights: A History of Child Custody in the United States. New York: Columbia University Press, 1994. Ventura, S. J. and C. A. Bachrach. "Nonmarital Childbearing in the United States, 1940–1999." National Vital Statistics Reports, v.48/16 (2000).

Feminism

Throughout U.S. history, the definition of family has changed with attitudes toward gender roles. As the country moved into the industrial age and away from the family model of rural life, it developed the breadwinner and homemaker roles of the stereotypical nuclear family. Long hours at work meant that fathers were less involved in family life, while women focused solely on the home. The marital partnership entailed in running a farm dissolved in the face of divided responsibilities. This was a time of improvement and hope, and the United States was embroiled in discussions of human rights. This was the context in which the first wave of feminism emerged.

Although many Europeans, especially Mary Wollstonecraft, had spoken out in favor of women's rights as early as the 18th century, the first wave of feminism in the United States took shape with the Seneca Falls Convention of 1848, where attendees decided to focus on women's suffrage as their main priority, and reached its height between 1890 and 1920. The suffrage movement was intertwined with the issue of prohibition, and many activists such as Elizabeth Cady Stanton and Susan B. Anthony argued that as long as liquor was so readily available to men, women's overall well-being was jeopardized. They argued that husbands who were addicted to "demon alcohol" financially jeopardized the family and were more likely to abuse their wives and children. Other activists not directly involved in the women's movement, such as family planning crusader Margaret Sanger, began advocating for birth control around this time as a way to limit family size. Sanger believed that women had the right to limit the number of children they had, both to preserve women's health and to promote family well-being by allowing parents to better provide for the children they already had.

The early suffrage movement was also closely aligned with the abolitionist movement. As with abolition, the fight for voting rights was based on the idea of equality. Many suffrage activists, such as Lucretia Mott, began by campaigning to end slavery and met like-minded women there. Many of these abolitionist women found that their roles within the movement were relegated to being helpers; they became dissatisfied and branched off to focus on women's rights, using the same human rights arguments that they had used in support of abolition. In fact, a number of individuals well known for their abolitionist work, such as Frederick Douglass, Ida B. Wells, and Harriet Tubman, were also active within the suffrage movement. At the time, women's work roles were limited to factory labor or domestic service.
Women who could afford it obtained hired help for the home, allowing them to focus on social and charitable activities that helped advance their husbands’ careers. Thus, many of the women who were free to participate in the suffrage movement were white and middle to upper class. During this time, African American activist and field laborer Sojourner Truth challenged the idea that women should be “ladylike” and remain protected within the home. By calling attention to the fact that poor women and women of color lived very different lives than many in the movement, she also



underscored the racial and class divisions running through the core of the suffrage movement. Arguments both for and against women's rights were based on the Christian Bible and centered on the question of a "woman's place" and the extent to which she belonged solely within the home. The idea that women should be protected and serve as helpmates to their husbands was heavily ingrained in social mores. By the late 19th century, the prevailing notion was that women were different: pure, family centered, and less sullied by politics. This very notion ended up advancing suffrage. Antiwar male politicians helped women achieve the vote, based on the belief that women could be the moral compass for society in the same way that they served as the guardians of morality within the home.

By the end of the first wave of feminism, some women were obtaining college degrees, and women gained the right to vote in 1920. Other advances that emerged at this time were the right for married women to own property and to share custodianship of children in the case of divorce. However, no rights were secured for single women, and married women still had no recourse if raped or abused by their husbands. Poor women and women of color benefited little from these changes, and they

A photograph on display at the National Portrait Gallery, Washington, D.C., of Elizabeth Cady Stanton and Susan B. Anthony, social activists, abolitionists, and leading figures of the early women’s rights movement.


continued to work long hours for little pay. Smaller reforms were taking place, such as the settlement house movement started by Jane Addams with Chicago's Hull House, which helped poor women and families of color with such things as childcare and health initiatives. However, the first wave of feminism primarily focused on obtaining the vote and did not represent the needs of all women.

The 1940s and 1950s
Important social changes in the 1940s and 1950s helped lay the groundwork for the second wave of feminism. During World War II, record numbers of women went to work, supporting an economy in need of war products and replenishing a workforce depleted by men leaving for war. The government facilitated this by recruiting women to work through public campaigns such as Rosie the Riveter, who happily did her part for the war effort. Women were reassured that their household skills could transfer to work and that they could do the same jobs as men. Government funds were used to create childcare centers. For the first time, women working full time became socially acceptable.

When the war ended, there was a general expectation that life could go back to the way it was before, with women in the home and men in the workplace. Not all women agreed; they had gotten a taste of the independence that came from earning a paycheck. The government, having guided women into the workplace, found it difficult to put the genie back in the bottle. However, social expectations about work and family had not fully caught up. Many believed that women who continued to work were taking jobs that rightfully belonged to returning soldiers. Media images of the perfect family reflected the harmonious Leave It to Beaver model, with a dad working full time and handling the family finances, and a mother raising children and caring for the home.
Women (at least white, middle-class women) were expected to only work until they got married, and certainly not if they had young children at home. Although birth control was available in other countries, it was strictly regulated in the United States, such that only married women could obtain it, and then only from a doctor. During the 1950s, women were divided between career women and housewives and often criticized each other. However, both were increasingly unhappy with the limitations that they experienced.


Career women found that they were relegated to lower-status jobs, were overlooked for promotions, and received less pay than men for the same work. Women were attending college or working before marriage in ever greater numbers; however, they often found their subsequent lives as full-time homemakers less fulfilling. They missed the mental challenge and financial independence that they had formerly enjoyed. Betty Friedan wrote about the problem of women trying to find personal fulfillment only through their husbands and children in The Feminine Mystique. This growing dissatisfaction, combined with changes in the social fabric, helped set the stage for the second wave of feminism.

The Second Wave
The second wave of feminism occurred between the 1960s and the 1990s, and was initially set against the backdrop of the general social unrest and civil rights movements of the 1960s. This era was characterized by a general sense of rebellion against the social restrictions of the 1950s. Unlike in the first wave, feminists branched off into factions that differed dramatically in ideals but retained a central focus on women's rights. Dominant themes of the second wave were workplace equality, female sexuality, and the inclusion of diverse voices.

As more women earned college and graduate degrees and entered the workplace, the traditional rigid family system no longer worked for many families. Women often continued working after marriage and childbirth. Divorce became more accessible, resulting in more female-headed households. The salary disparity between men and women became an impediment to many women, and a prominent second-wave slogan, "equal pay for equal work," emerged. Initially introduced during the first wave of feminism by Alice Paul, the Equal Rights Amendment (ERA) was reintroduced in the 1970s as an effort to protect women from discrimination. The ERA, however, has still not been ratified by enough states to take effect.
The phrase "the personal is political" emerged to reflect the idea that a person's identity was both multifaceted and central to his or her social and political views. The earlier pivotal thinking of French feminist Simone de Beauvoir, author of The Second Sex, about how women are seen as "other," which allows oppression to take place, helped spark the modern feminist argument that women must be free to define themselves rather than be defined by rigid social roles. For instance, Chicana feminist Gloria Anzaldúa contends that "insiders" are privileged, and "outsiders" are excluded if they cross cultural identity "borders." Scholars increasingly argued that many social structures are mental constructions, and feminists began to recognize the significant role that socialization plays in forming gender roles.

This was a time of rapid change in social expectations about gender. Feminists questioned the assumption that women should work only in the "nurturing" professions (e.g., teaching or nursing), leaving the better-respected and higher-paid positions (e.g., chief executive officer or doctor) to men. Men's groups shed light on the limitations of traditional male gender roles and helped advance custody rights for fathers. The term patriarchy emerged to describe social systems (from the family to the government) whose structures formally or informally limited women's experience. A mix of formal organizations, such as the National Organization for Women (NOW), and smaller grassroots movements contributed to the national discussion. Many of the small groups focused on "consciousness-raising," whereby women discussed patriarchy and gained insight into how it affected their lives.

Another theme was female sexuality and reproductive rights. Historically, women had been limited by numerous unplanned or unwanted pregnancies. Increased availability of birth control and the Supreme Court's decision in Roe v. Wade, which gave women the legal right to an abortion, gave women more control over reproduction and propelled the sexual revolution to new heights. Until this time, female sexuality had been tied to reproduction and was often defined in terms of the man's pleasure. Women began to embrace their sexuality as an aspect of their wellness and to discuss it more openly.
The notion of woman as “object” in the sex act was explored, and activists such as Gloria Steinem and Naomi Wolf helped people understand how women were objectified in the media, the workplace, and in relationships. Laws came into effect that made nonconsensual sex (i.e., rape) within marriage illegal. This era also spotlighted date rape and domestic violence, helping to increase awareness and institute policies such as the 1994 Violence Against Women Act (VAWA).



During the 1980s and 1990s, feminist attention turned to women's difficulties in balancing work and family. Arlie Hochschild published The Second Shift in 1989, arguing that social mores had not adjusted to women working full time and that women often had to shoulder the majority of the household chores and childcare as well as manage a career. Most of these issues are still in play today. Many career women still feel that they encounter a "glass ceiling" that informally limits their career trajectory. Given the expense of daycare and women's lower average wages, many women who want a career nonetheless feel that they should stay home for the financial health of the family. Despite this, the work of full-time homemakers continues to be devalued. Women who take time off from a career or choose to work part time while their children are young often find that their careers never fully regain ground. The Family and Medical Leave Act of 1993 (FMLA) provided some protection for those who take time off for family caretaking; however, it is difficult to implement, and it does not address social expectations or many of the informal limitations inherent to workplace culture.

The second wave saw the establishment of the women's rights movement in tandem with advances in rights for people of color and the lesbian, gay, bisexual, and transgender (LGBT) community. Views on the family became more multifaceted as a result of these social changes. Scholars such as Harriette McAdoo and Angela Harris began writing about the unique needs of African American families in parenting children within oppressive systems. Feminist scholars criticized the media's biased representations of welfare and African American single mothers.
According to Susan Faludi, the author of Backlash: The Undeclared War Against American Women, the 1980s witnessed a countermovement against feminism in which the media actively cultivated a fear of career women by promoting nonexistent problems such as an "infertility epidemic" or "man shortage" that were supposedly the result of women having abandoned their family duties. Part of this backlash entailed the conservative movement's promotion of "family values," which championed a return to the traditional family structure people associated with the 1950s. Despite the gains made during the second wave of feminism, there was a growing awareness that women of color and lesbians often felt marginalized by the mainstream liberal movement.


Feminists such as Alice Walker, Patricia Hill Collins, and bell hooks were instrumental in advocating for the unique perspective of black feminism and in pointing out the complex and inseparable relationship between race, class, sexual orientation, and gender oppression. This school of thought helped give birth to the notion of intersectionality. Audre Lorde contended that the category of "woman" is so inherently full of these "intersections" that it renders definitive statements about women's rights useless. In critiquing the predominantly white, heterosexual, middle-class leanings of the second wave and insisting on women's right to define themselves, these activists helped usher in the third wave of feminism.

The Third Wave

The third wave initially overlapped with, and largely arose in reaction to, the second wave, and many consider it ongoing today. Third wave feminists challenge simplistic dichotomies (e.g., man/woman, us/them) and instead celebrate the complexity and ambiguity of identity. The movement coincides with the advancement of poststructuralism and queer theory, and it reflects a growing rejection of essentialism in favor of socially constructed sexual and gender identities. For instance, Judith Butler argues that identity should be seen as unstable and performative: identity is formed only by how one enacts it. This view allows traditionalist gender and sexual roles to be reinterpreted in people's daily lives and helps subversive practices (i.e., those that contradict the "norm") become more mainstream. The third wave also adds a commitment to examining language and the ways in which it contributes to gendered oppression. For instance, gender-neutral language is championed as a means of altering rigid ideas about social and family gender roles (e.g., caretaker or homemaker instead of housewife). The reclamation of women's voices and sexuality as a means to empowerment is a primary value.
Playwright Eve Ensler's The Vagina Monologues exemplifies this through individual stories of female sexual experience presented without framing them within socially preferred norms. The third wave has also embraced the movement to reclaim words that have been sullied by negative association: Bitch magazine employs a previously derisive term as a means of empowerment, and the Riot Grrrl movement challenges one to see the word girl as powerful. Modern feminists openly embrace their sexuality, emphasize their personal strengths, and often take anticorporate or antimainstream stances. Third wave feminists enjoy gains made by earlier generations of feminists and have a wider choice of social roles open to them. They are less invested in institutions or formal structures; their voices are often heard in online blogs rather than academic journals, and their causes may reflect individual interests rather than the party line of NOW.

Feminists remain committed to eliminating all aspects of oppression and are particularly focused on the oppression of women around the world. This reflects the postcolonialist critique that feminism has typically operated from a dominant Western view. For instance, the Half the Sky movement, launched by Sheryl WuDunn and Nicholas Kristof's book of the same name, addresses gender and sexual oppression in developing nations. Issues such as lack of reproductive choice, domestic violence, rape, and oppressive work practices are now examined in all areas of the globe, and newer issues such as human trafficking are addressed in a global context.

Conclusion

U.S. society's views about gender roles and family have changed alongside other social and economic gains over the past 130 years. Feminism has won many important advances, despite the fact that the movement continues to attract numerous conservative critics, while others call for a postfeminist stance. Views on women and family structure have relaxed, and the traditional nuclear family now represents only a small portion of today's families. Through its three waves, feminism has helped women gain the right to vote and has ushered in a culture of complicated and diverse gender roles. Women now have opportunities that they could never have imagined in 1890. However, many women continue to experience oppression in many areas, which means that feminism's job is not done.
Discourse calling for a return to "family values" has morphed from the 1980s' attacks on divorced and single-parent families into vitriol aimed at LGBT families. Women still encounter career barriers and are paid less than their male counterparts. Although many families require the income of two working parents to survive, social policies have failed to provide quality daycare programs to all families and have not addressed the career impediments posed by family caretaking. The vital, unpaid occupation of caretaker is still not valued, despite the fact that millions of women are now part of the "sandwich generation" simultaneously caring for aging parents and young children.

However, we are likely to see another shift in social roles in the future. In the 21st century, more women than ever have high-paying, powerful careers, and more women than men are in college. As a result of the recession of 2008 and 2009, many formerly employed men have decided to stay home and assume the role of primary caregiver. These trends will likely shape the future of feminism.

Laura L. Winn
Florida Atlantic University

See Also: Abortion; Birth Control Pills; Breadwinner-Homemaker Families; Civil Rights Movement; Cohabitation; Contraception and the Sexual Revolution; Cult of Domesticity; Defense of Marriage Act; Domestic Violence; Feminist Theory; Gender Roles; Hite Report; Hochschild, Arlie; Maternity Leaves; Mommy Wars; Mothers in the Workforce; Myth of Motherhood; Roe v. Wade; Separate Sphere Ideology; Stay-at-Home Fathers; Third Wave Feminism; Wife Battering.

Further Readings
Baumgardner, J. and A. Richards. Manifesta: Young Women, Feminism, and the Future. New York: Farrar, Straus & Giroux, 2000.
Butler, J. Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge, 1990.
Collins, P. H. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. New York: Routledge, 2000.
Ensler, Eve. The Vagina Monologues. New York: Villard, 2007.
Faludi, Susan. Backlash: The Undeclared War Against American Women. New York: Anchor Books, 1991.
Friedan, B. The Feminine Mystique. New York: Norton, 1963.
Hochschild, Arlie, and Anne Machung. The Second Shift: Working Parents and the Revolution at Home. New York: Viking, 1989.
Wolf, Naomi. The Beauty Myth: How Images of Beauty Are Used Against Women. New York: William Morrow, 1991.
WuDunn, Sheryl, and Nicholas Kristof. Half the Sky: Turning Oppression Into Opportunity for Women Worldwide. New York: Knopf, 2009.



Feminist Theory

Feminist theory provides a framework for understanding how power operates in families. The theory calls attention to the link between private experiences and relationships among women, men, and children in families and the workings of society at large. Feminism disputes the assumption that the experiences of white, heterosexual, middle- and upper-middle-class, educated men represent the universal experience of being human; when women and less privileged men are compared to this standard, they are found lacking. Privilege refers to the greater importance given to powerful men's ways of thinking, doing, and being in the world. Patriarchy, which means "the power of the father," sustains the view that the male experience defines what it means to be human. At its core, feminist theory is a critique of patriarchy and of the social structures (such as laws and economic systems) and cultural behaviors (such as customs, practices, and beliefs) that keep male power in place.

Initially, feminist theory focused on describing the ways in which women were oppressed in the workplace and in the home. For example, many versions of feminist theory (e.g., liberal, radical, and cultural) tried to explain why women received less pay than men for the same work, why women were formally or informally barred from higher-paying and prestigious occupations (such as business, medicine, and political leadership), and why women were responsible for nearly all of the labor in the home (e.g., childcare, housework, and kin care). Feminist theory provides a lens for understanding how labor in the home is rendered invisible: it is private, unpaid, unseen, and unacknowledged. Furthermore, feminist theory proposes a course of action for the social change needed to rebalance the injustices caused by gender inequality. Over the past four decades, feminist theory has evolved from a major focus on women, gender difference, patriarchy, and oppression.
In the 21st century, feminism offers more advanced understandings of the ways in which multiple identities intersect over the life course. Intersectionality includes the ways in which race, class, gender, age, sexual orientation, ability/disability status, nationality, and other forms of social stratification combine to construct individual experiences and well-being in families and society. Although the feminist perspective on intersectionality has replaced the earlier notions that guided feminist theory, the core ideas remain relevant: feminist theory critiques the status quo, looks for the ways in which inequalities are constructed in society, and seeks to address those inequalities by empowering women and others whose lives are on the margins of society. A hallmark of feminist theory is its continuous capacity for self-critique and revision in order to provide more inclusive and diverse definitions of power and privilege, as well as pathways to social change.

Feminist Theory and Activism: Three Waves

As a theory critical of the status quo and one that advocates for social change, feminist theory is entwined with the history of feminist activism. Feminist activism has been associated with three waves, and each wave has generated different versions of feminist theory. First wave feminism was associated with the 19th and early 20th centuries and the effort to secure women's suffrage (women's right to vote), a battle that lasted 70 years. First wave feminism, which promoted the interests of white middle-class women and emphasized equality with men, resulted in the passage of the Nineteenth Amendment to the U.S. Constitution, ratified in 1920. Second wave feminism, associated with the mid- to late 20th century, began in the mid-1960s around the time of widespread social activism in the United States. The civil rights movement and the protest against the Vietnam War were major catalysts for women's renewed activism for their private and public rights. Second wave feminism focused primarily on gender, wage, and sexual equality with men by raising women's consciousness and promoting public protest about the ways in which women's experiences were devalued and their opportunities for full citizenship were denied.
Feminist theories in the second wave were instrumental in pursuing equality in the workplace, calling attention to women's double shift (working for pay outside the home and without pay inside it) and the politics of housework, and bringing national attention to domestic violence and sexual abuse.

A variety of ways of viewing the relationship among gender, power, and social change emerged during the second wave. The most widely used was liberal feminist theory, rooted in gender equality; the phrase "just add women and stir" is associated with this gender-equality approach. Radical feminism was a second important theory. The term radical refers to changing at the root, and this theory provided a framework for fundamentally challenging male control over women's bodies. This perspective made many gender-linked abuses visible, such as rape, domestic abuse, wife burning, lack of contraception and abortion rights, and female infanticide. A third theory, cultural feminism, advocated complete withdrawal from the dominant society: women were advised to live in women-only spaces, refuse to vote, stop paying taxes, and avoid contact with the patriarchy in any way. Other perspectives included Marxist, psychoanalytic, and postmodern feminism.

Standpoint theories, which ground knowledge in the experiences of particular groups of women, also provided key feminist ideas. These theories arose from the critiques of African American feminists and lesbians in particular, who argued that the second wave focus on gender equality addressed the concerns of white middle-class married women with children, thus excluding lesbians, women of color, aging women, and disabled women, among many others. This critique showed that one voice could not speak for all: just as there is no universal human experience, there is no universal woman's experience. Instead, the lives of women, and of men who do not experience the privileges associated with the white male standard, must be understood within a matrix of oppression. The matrix is envisioned as the intersection of gender, race, class, sexual orientation, age, ability/disability status, and nationality within which lives are structured in society. Oppressions are not additive; being female and African American, for example, does not simply equal being doubly oppressed. Instead, multiple oppressions intersect in ways that disproportionately advantage some and disadvantage others. A woman of color working as a physician is likely to experience her intersectionality differently than a white woman working part time in a convenience store.
This process of questioning the universal experience of gender as applicable to all women contributed to the development of third wave feminism. Feminist theorizing in the third wave is difficult to characterize because of the vast variety of ways in which it is done. The feminist maxim that there are a thousand kinds of feminism is especially applicable to theorizing in the third wave: feminists may reject the concept of gender, criticize standpoint theorizing, emphasize intersectionality, or argue for the importance of activism over intellectual theorizing. Yet all of these positions are perspectives associated with feminist theory in the third wave.

Theorizing Gender: From Differences to Intersectionality

In feminist theory, gender was initially conceptualized as a dichotomous variable: male versus female. Roles associated with the male sex and the female sex were identified: men were naturally rational, and women were naturally emotional. Men's roles were instrumental, and they worked outside the home for pay; women's roles were expressive, and they worked inside the home without pay. The concept of sex roles, which implies a grounding in biological sex alone, was linked to these gender differences. Gender difference, however, was soon critiqued as too dichotomous (e.g., only male and female) and too universal a concept to capture the fact that gender is experienced in multiple ways, depending on the relationship of gender to other forms of power and social stratification, such as race and class.

The concept of gender relations became a new way to highlight the social construction of gendered roles and relationships, and it acknowledges the link between private and public spheres. Through this dynamic concept of gender as a social construction not simply assigned at birth, gender came to be seen as a "doing" rather than a "being." "Doing gender" operates at multiple levels. At the individual level, doing gender is accomplished through identity, beliefs, and attitudes. At the interactional level, doing gender is performed in relationships. At the institutional level, doing gender is seen in the organization of women and men into types of work and family responsibilities that carry different resources and rewards. Feminist theorizing has continued to evolve, primarily through feminists' critiques of the limitations of existing theory, to conceptualize the distinctions among different intersectional positions.
For example, an older lesbian woman who came of age during the 1960s, before the emergence of same-sex marriage rights, has experienced a very different partnership history than a young lesbian growing up in a state where same-sex marriage is legal. A woman from a Spanish-speaking culture, with the cultural dynamic of marianismo (the practice of women being nurturing and self-sacrificing), may have a unique perspective on motherhood, relative to women in other racial and ethnic groups, because of her cultural beliefs and practices.

Intersectionality is a complex concept that is difficult to measure in research; for this reason, feminist scholars continue to refine it. Two useful ways of thinking about intersectionality are locational and relational. Locational intersectionality refers to oppressed groups that share a specific standpoint or disadvantaged social position. By giving voice to the experiences of individuals who face a similar kind of marginalization, for example, the immigrant domestic workers who serve as nannies or maids for wealthy families, the concept of locational intersectionality allows others to understand how individual experience intersects with larger institutional constraints. Relational intersectionality goes beyond an individual or standpoint perspective: it refers to the actual social processes that produce inequality for everyone, not just those who are oppressed or marginalized in society. An example of relational intersectionality is how heteronormativity, defined as the assumption that being heterosexual is normal and universal, operates in structuring family life for all families, including those headed by single individuals, heterosexual couples, gay couples, and lesbian couples.

Feminist Theory and Family Studies

Feminist theory in family studies begins with the perspective that families have both private and public spheres. Private family dynamics and the broader, public social systems in which individuals and families live and work must be examined together. Feminist theorists challenge assumptions about the natural order of family life and shed light on the structural inequalities linked to gender, race, and class that shape the opportunities families have for meeting their developmental needs. Feminist theory has broken new ground in family studies by identifying the tension inherent in providing and receiving care.
Caring for a dependent and the relationships of caregiving are often contradictory and ambiguous, typically because of the tacit assumptions that caring is women's work; that it is unpaid or underpaid; that it is invisible and unappreciated; and that it is done by sacrificing one's own needs and desires. Feminist theory has also paved the way for new understandings of men's lives by promoting child rearing, child care, household labor, and emotional availability in intimate relationships as the domain of men as well.


Furthermore, feminist theory has bridged the gap between thought and action. Many of the important social changes for families have been inspired or supported by feminist theory. Feminist theory in family studies involves using knowledge for positive change to understand and benefit family life: not simply the universal family, but the families of diverse women, men, and children.

Katherine R. Allen
Virginia Tech

See Also: Conflict Theory; Constructionist and Poststructuralist Theories; Domestic Ideology; Feminism; Gender Roles; Third Wave Feminism.

Further Readings
Collins, Patricia Hill. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment. Boston: Unwin Hyman, 1990.
Ferree, Myra Marx. "Filling the Glass: Gender Perspectives on Families." Journal of Marriage and Family, v.72 (2010).
Freedman, Estelle B. No Turning Back: The History of Feminism and the Future of Women. New York: Ballantine, 2002.
Lloyd, Sally A., April L. Few, and Katherine R. Allen, eds. Handbook of Feminist Family Studies. Thousand Oaks, CA: Sage, 2009.
Risman, Barbara J. "Gender as a Social Structure: Theory Wrestling With Activism." Gender & Society, v.18 (2004).
Shields, Stephanie A. "Gender: An Intersectionality Perspective." Sex Roles, v.59 (2008).

Fertility

Fertility, or the ability to reproduce, serves the critical function of replacing a society's members lost through death, allowing the society to endure rather than become extinct. For a society to maintain a stable population size, it must sustain a total fertility rate (TFR) high enough to replace its members; this threshold is often called replacement level fertility. TFR is defined as the average number of children that each woman in a society is projected to produce across her lifetime, given current trends. Though in most cases a high TFR leads to increased population and a low TFR leads to decreased population, replacement level fertility differs for each country and depends on mortality rates. In the United States, a low rate of infant mortality and increased overall life expectancy equate to a replacement level TFR of 2.1 (as in most developed Western countries), meaning that the average woman needs to have 2.1 children to maintain a steady population. The additional 0.1 above true replacement (i.e., two people make two people) accounts for those who die before having children and for those who are unable to reproduce or choose not to.

The United States and much of Europe have seen their total fertility rates rise and fall across the 20th century, corresponding with times of economic boom and bust. While the early 1900s saw the U.S. TFR near 3.5, World War I and the Great Depression forced many couples to limit family size due to financial concerns, reflected in the lower TFR of 2.2 during these years. However, the thriving economy that followed World War II brought the TFR to a peak of 3.7 during a period of increased fertility in the United States that became known as the baby boom. The baby boom generation, as these children came to be called, consisted of those born between 1946 and 1964; they are the largest cohort in U.S. history. By the late 1960s, the TFR slowly began to decline toward the near-replacement levels that the United States has in the early 21st century. Though subreplacement fertility rates are a growing concern for the overall population stability of many countries in Europe, near-replacement fertility, coupled with the strong pattern of immigration into the United States, provides consistent population growth that is not anticipated to slow or decline in the near future.

In general, fertility is contingent on several key factors. The most important component is the availability of a mating partner.
Though males and females are typically born in roughly equal numbers, some social dynamics determine whether one obtains a mating partner, such as desirability by potential mates, perceived health for mating, or ability to provide resources for offspring. Timing of mating is also important in fertility, because women can only become pregnant while they are ovulating, which is a limited window in their monthly cycle. Finally, levels of fertility are dependent on both biological and social influences.
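The replacement-level arithmetic described above can be illustrated with a small sketch. Demographers conventionally compute a period TFR by summing age-specific fertility rates (average births per woman per year within each age band) and multiplying by the width of the bands. The rates below are invented round numbers, not actual U.S. data, chosen only so that the total lands at the replacement figure of 2.1 discussed in the text:

```python
# Hypothetical age-specific fertility rates (births per woman per year)
# for the seven conventional five-year reproductive age bands,
# 15-19 through 45-49. These values are illustrative, not real data.
asfr = [0.02, 0.09, 0.11, 0.10, 0.06, 0.03, 0.01]

AGE_BAND_WIDTH = 5  # each band spans five years of a woman's life

# TFR: projected lifetime births per woman if current rates held constant.
tfr = AGE_BAND_WIDTH * sum(asfr)
print(round(tfr, 2))  # -> 2.1, the U.S. replacement level
```

As the text notes, the replacement threshold itself varies by country because it also reflects mortality: where more girls die before reaching reproductive age, the TFR needed to hold the population steady rises above 2.1.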

Influences on High Fertility Levels

Many factors can lead to increased fertility. Throughout history, the best predictors of high fertility have been related to biological factors. For example, better access to health care and birth specialists, such as doctors or midwives, has provided safer opportunities for women to bear children, resulting in a decrease in both infant and maternal mortality rates. In addition, a focus on preventative and overall health care, as well as improved nutrition, has increased the number of women who bear healthy children, while limiting the number who suffer from infertility, miscarriage, or stillbirth.

Beyond biological resources, social influences can encourage increased fertility. Religious institutions and leaders may place a value on large families as a reflection of divine bounty, while discouraging contraception as circumventing divine will. Agriculturally based societies may require large families for subsistence-based living. For these families, having many children increases the chance that the family will have a sufficient labor supply to perform the tasks necessary for survival; it also ensures that someone will be able to take care of the parents in their old age. Women who are highly religious tend to have more children than their nonreligious counterparts, and strongly religious women are also more likely to adhere to traditional gender roles that prioritize motherhood and stress the importance of family. Immigrant women are likewise more likely to have higher fertility than nonimmigrant women.

Influences on Low Fertility Levels

Just as there are biological and social influences that can increase fertility, there are factors that can decrease it. When a group or society experiences reduced access to nutrition or medical resources, as during famine or in extreme poverty, it may experience reduced fertility.
This may be because they are physically unable to sustain a pregnancy or because they choose not to have children. In fact, even the improved biological resources that increase fertility do so only to a point. When life expectancy in a society increases and becomes stable, infant and maternal mortality rates are low, and overall health is high, fertility rates begin to level out. Women no longer need to produce a large number of children in order to ensure that some make it to adulthood; therefore, the number of births needed for replacement drops, and fertility levels experience a corresponding decline.

Yet across history, when fertility rates have decreased through individual choice, as opposed to environmental factors, the main cause has been changing cultural definitions of ideal fertility. Ideal fertility can be thought of as the culturally prescribed expectations of having children, and it can be influenced by gender roles, changing norms in social institutions, and the opportunities that exist before, or instead of, children. Several opportunities in modern American culture have led to delayed parenthood for both men and women. As college and graduate school have become more common, individuals are waiting until they complete their education and become financially stable to marry or start families. Furthermore, higher educational attainment often leads to careers that are time consuming and demand commitment, further postponing family formation. As American society shifts to a more egalitarian balance in gender roles, opportunities for achievement and fulfillment outside the home have empowered more women to significantly delay childbearing or even forgo motherhood altogether. In addition, as a woman's age at first marriage increases, there is a corresponding increase in her age when she gives birth to her first child. Greater access to contraception and family planning options has allowed couples to choose when they will have children. Waiting to have children until a couple is older and more established allows them the resources necessary to provide a fulfilling life of opportunities for their child. However, by beginning motherhood at a later age, women shorten their window of potential fertility (fecundity), raising their chances of experiencing primary infertility (difficulty conceiving or carrying a first pregnancy to term) and secondary infertility (difficulty conceiving or carrying a subsequent pregnancy to term).
In societies like the United States, dominated by the service-based economies that define postindustrialization, families have shifted from being a source of production to a mechanism of consumption; therefore, having several children may be seen as a burden. Pregnancy and parenthood represent a large financial, emotional, and physiological commitment, particularly for women, who often must forgo educational or occupational opportunities to have and


raise children. American parents today often feel that they must devote a great deal of time, attention, and resources to the development of each child in order to ensure their eventual success. Though the rewards of having children are innumerable, many parents believe that the level of dedication needed to ensure a child’s success cannot be sustained for a large number of children.

There are several advantages to lower fertility levels for the individual and society. When parents have fewer children, they are able to provide greater interpersonal and financial resources for each child. In addition, because the risk for complications or death from pregnancy or childbirth increases with every conception, lower fertility levels decrease maternal mortality rates. At the societal level, excessive population growth can lead to challenges in resource allocation, scarcity, and political and economic instability. Furthermore, high fertility expectations within a society often produce gender inequality because women are encouraged to focus all their energy on reproduction and child rearing. Not only are they limited in the opportunities available to them outside of the home, but when their value is exclusively tied to their ability to reproduce, they can be socially vulnerable if they have difficulties with fertility.

Difficulties With Fertility
For most women in the United States, fertility is taken for granted. It is only when a woman faces challenges in conceiving or carrying a child to term that she and her loved ones give it thought. Infertility occurs when a woman is unable to become pregnant after trying to conceive for a period of at least a year. Depending on its causes or outcomes, infertility can also be called sterility, being barren, subfecundity, impotence (for men), or involuntary childlessness. Because fertility is a symbol of health, wellness, and adult status, women who wish to have children and are unable to may feel less than whole.
Infertility can lead to depression, marital or relational stressors, sexual dysfunction, feelings of rejection, shame, loss of friendships, and financial strain. Though advancements in medicine and technology have allowed many women who would have once been considered barren to have children, many couples who turn to Assisted Reproductive Technology (ART) never conceive. Gender roles for women in


the United States have expanded to include much more than motherhood, yet society still places significant status on the label of mother for women, leaving those who are infertile to feel incomplete. Furthermore, in cultures where a woman’s value is tied to her ability to provide children to her husband, infertility can leave her vulnerable to abandonment, divorce, or in extreme cases, death.

Voluntary Childlessness
Though most American women have children at some point in their lives, a growing number of women are electing to remain childless. Voluntary childlessness, also known as being childfree, differs from infertility in that women (or couples) make the choice not to have children, as opposed to being biologically unable to conceive or carry a pregnancy to term. Individuals who make the choice to be childfree are not distressed by forgoing the parenting role in their lives. Childfree women and men feel a sense of freedom in their lives, and often report stronger, more egalitarian relationships with their partners because they have more time, energy, and resources to devote to one another and the relationship.

However, there are often social implications to the decision to be childfree. Parenthood is seen as an important adult transition that is highly desirable; choosing to skip this transition is seen as a failure to fully become an adult. While infertile women are often pitied for their inability to have children, childfree women are usually seen as selfish, cold, and less feminine, regardless of their reasons for choosing not to have children. Though voluntary childlessness currently has a small impact on the greater fertility levels of society, the increasing percentage of the population who choose to be childfree will play a larger role in long-term population growth or decline across the coming decades.

Mari Plikuhn and Sarah E. Malik
University of Evansville

See Also: Adolescent Pregnancy; Assisted Reproduction Technology; Birth Control Pills; Childless Couples; Contraception and the Sexual Revolution; Demographic Changes: Zero Population Growth/Birthrates; Family Planning; Infertility; Multiple Partner Fertility; Prenatal Care and Pregnancy; Primary Documents 1986; Surrogacy.

Further Readings
Aarssen, Lonnie W. “Why Is Fertility Lower in Wealthier Countries? The Role of Relaxed Fertility-Selection.” Population and Development Review, v.31/1 (2005).
Davis, Kingsley, Mikhail S. Bernshtam, and Rita Ricardo-Campbell, eds. Below-Replacement Fertility in Industrial Societies: Causes, Consequences, Policies. New York: Cambridge University Press, 1987.
Easterlin, Richard A. “Twentieth-Century American Population Growth.” The Cambridge Economic History of the United States. Vol. 3. New York: Cambridge University Press, 2000.
McQuillan, Julia, Arthur L. Greil, Lynn White, and Mary Casey Jacob. “Frustrated Fertility: Infertility and Psychological Distress Among Women.” Journal of Marriage and Family, v.65/4 (2003).
Morgan, S. Philip. “Is Low Fertility a Twenty-First-Century Demographic Crisis?” Demography, v.40/4 (2003).

Film, 1930s
The 1930s witnessed the maturation of the Hollywood film industry. “Talkies” had been introduced in 1927, and quickly became the standard, ushering in Hollywood’s golden age, marked by the studio system, in which actors, as well as most directors and writers, were attached to specific film studios under long-term contracts. Furthermore, studio pictures were generally filmed on lots owned by the studios, with crews employed by the studios. This led to a distinct tone and feel to each studio’s pictures, not only because of a stable talent roster but also because of the guiding hands of the studio heads. In many cases, actors and even directors had little say over which films they appeared in (though especially popular ones had unofficial sway, insofar as it was in the studio’s best interest to keep them happy).

The “Big Five” studios, each of which owned production facilities, a film distribution company or division, and a theater chain, and maintained long-term contracts with a large number of big-name stars, were Fox, MGM (owned by Loew’s Incorporated), Paramount, RKO, and Warner Brothers. Universal and Columbia were close runners-up, while United Artists (formed by silent movie stars Charlie Chaplin, Douglas Fairbanks, Mary Pickford, and pioneering director



William Powell and Myrna Loy starred in the 1934 film The Thin Man, based on the detective novel by Dashiell Hammett. The book became the basis for a successful six-part film series.

D. W. Griffith) primarily financed independent films. These last three studios are sometimes called the “Little Three.”

The Studio System and Comedies
MGM dominated Hollywood in the 1930s, leading in box office proceeds every year from 1931 to 1941, until it was overtaken by Paramount. MGM was home to many of the “screwball comedies” of the era. Early complaints about the talkies were that they featured wall-to-wall dialogue, often at the expense of action. If this was true of anything, it was true of the screwball comedies, in which dialogue was rapid-fire, slangy, and full of wordplay—a far cry from the language-neutral universalism of the Keystone Cops or Chaplin’s silent films.

The best of the genre was the Thin Man series, which paired William Powell and Myrna Loy in 1934 in the first of their 11 movies together. Powell had been a leading man for years, but Loy—despite being a Midwestern Scandinavian American—had often been cast as Asian and Indian femmes fatales in melodramas. The Thin Man series allowed her to employ her natural sense of comedic timing as Nora Charles, wife of Powell’s Nick Charles, a retired private detective who keeps


getting drawn into cases, usually at Nora’s urging. Nick and Nora begin as newlyweds at the start of the series, but soon have a young son, Nicky Jr., who grows up in the last of the films. A recurring joke throughout the series is Nick’s womanizing past, which leads many of his old associates to believe that Nora is either just the latest conquest or, in later movies where he is known to be married, a woman with whom he is having an affair (introducing her as Mrs. Charles only makes them assume that he is saving face). While sex is neither depicted nor even implied, the series wears its sexuality on its sleeve, which, along with the heavy drinking of the first few movies, was the subject of criticism.

In the early days of the studio system, the style of filmmaking called classical style or classical Hollywood style developed across all eight of the major studios, and was emulated abroad. Indeed, much of what was notable about the New Wave film movements of later decades was the departure from this style. In the classical style, the mechanisms of filmmaking—the camera, sound recording, editing, and mixing—never draw attention to themselves. This does not mean that camera work is simple or limited, although some film critics have compared the style to that of a filmed play. Three-point lighting is traditionally used, while shots are composed to create depth in the filmed space. In the narrative itself, time is linear, except for flashbacks (which were popular devices in the golden age, even more than today), and genre conventions are closely followed.

The Hays Code
In the early 1930s, movies covered diverse ground as the classical style was developed. The Motion Picture Production Code, or Hays Code (named for Will Hays, president of the Motion Picture Producers and Distributors of America), was not enforced until 1934. Drafted in 1930, the Hays Code limited content in motion pictures on moral grounds.
Hays had been hired to clean up Hollywood’s image in the previous decade, just as Kenesaw Mountain Landis had been hired to rehabilitate Major League Baseball after the 1919 World Series scandal. But the advent of sound in particular led to content concerns, not only because it raised questions about “appropriate” vocabulary in dialogue, but also because of its effects on the increased sophistication and seriousness of film storytelling. Like television later, movies were objects of concern because of the ease of access and the likelihood of families attending


with children in the audience who, it was felt, should not be exposed to certain subject matter. In its 1930s form, the code was known for its “Don’ts” and “Be Carefuls”—lists of material that was outright banned, and material that was to be used with caution and in context. Banned were profanity, nudity (and suggestions of nudity, which generally included not only silhouettes, but also scantily clad actresses), illegal drugs, “sex perversion,” interracial relationships, childbirth, and ridiculing the clergy. To be treated with caution were various topics of crime, violence, and sexual relationships, even between married couples. The code specifically singled out wedding night scenes as an area of caution, in addition to “excessive kissing.”

The code was voluntarily adopted, but the purpose of its adoption was to discourage the continued existence of state and local film censorship boards, which made the filmmaking business difficult when standards varied from one county to the next. Movies made in the early sound era (but especially from 1930 to 1934) are often called “pre-code films,” though the term technically includes silent films. Pre-code films were subject to a different level of scrutiny, and while most are not significantly different in content from those made later in the decade, many have become historically notable for treating serious subjects in a serious manner, the likes of which would not be seen again until the 1970s.

The genre most affected by the code was the social drama, especially movies that the studio censors called “sex pictures”—movies in which sexual relationships played a significant role in the plot and, in so doing, explored a changing family dynamic and social landscape in a country experiencing significant social change. Constituting about a fifth of the output of the major studios, and appealing to a predominantly female audience, these movies would prove too difficult to produce under the restrictions of the code.
Norma Shearer, one of the great actresses of the decade, starred in several notable pre-code films, including Let Us Be Gay (1930), Strangers May Kiss (1931), A Free Soul (1931), Private Lives (1931), and Riptide (1934). Along with Joan Crawford and Greta Garbo, she was one of MGM’s biggest stars, and had more freedom in her roles because of her marriage to “boy genius” producer Irving Thalberg (who died in 1936 at the age of 37). Her role in 1930’s The Divorcee earned her an Academy Award. At a time when divorce was still difficult to obtain and highly stigmatized, and nearly 40 years before the first no-fault divorce law was adopted, Shearer starred as a woman whose marriage falls apart when she discovers her husband’s affair and has an affair with his best friend in retribution. After her divorce, she discovers that a former flame is still in love with her and willing to leave his wife for her. The movie’s frankness in its discussion of sexuality and relationships was widely praised.

Other pre-code films directly addressed adultery, premarital sex, and sexual relationships, and because of their largely female audience, featured strong female characters played by some of the greatest actresses of the golden age. “Bad girl” pictures were common, the best of them starring Shearer, Barbara Stanwyck, or Jean Harlow. Many of these portraits of “fallen women” were commentaries on the widespread entrance of women into the Depression-era workforce, and reflected concerns in many American households. On July 1, 1934, the pre-code era ended as the Production Code Administration was formed and required all released films to obtain a certificate of approval.

Walt Disney and the Rise of the Animated Family Movie
Although The Wizard of Oz has since become regarded as an iconic classic, the most successful family film of the 1930s was Walt Disney’s Snow White and the Seven Dwarfs, which in its year of release (1938) was also the highest-grossing film of the decade, a record toppled the next year when Gone with the Wind was released, eventually grossing enough to make it the most successful film of all time, adjusted for inflation. Snow White was the first full-length cel-animated feature film, and established Disney as the king of animated features.
Not until the late 20th century would any other studio put out animated features on a regular basis for a significant period of time, and the animated feature became virtually synonymous with the Disney movie. This alone is a large part of why animated movies in the United States are considered family fare, whereas other countries use the medium more frequently for adult features. Snow White was in development for almost four years, and it set the tone for animated features to come, focusing on an adaptation of a familiar fairy



tale, a female protagonist (the first of the “Disney princesses,” as they were rebranded in the 21st century), and a contrast between the seriousness of the main storyline and the comic relief provided by the supporting cast—in this case, the seven dwarfs. In future movies, the comic relief was usually provided by the main character’s animal companions.

Cartoons had been strongly associated with music, in part because of the silent period, and in part because even with the advent of talkies, synchronizing speech with animated mouths was more difficult than synchronizing it with live action. Furthermore, songs ate up time, allowed the plot to move at a slower pace, and meant that dialogue did not need to be written. Snow White included a number of songs that became famous, including “Some Day My Prince Will Come” and “Whistle While You Work.” Subsequently, the inclusion of songs without making a film a musical became a staple of the American animated feature. Snow White and the Seven Dwarfs received an honorary award at the 11th Academy Awards, in the form of one full-size Oscar and seven miniature ones, and pioneering Russian director Sergei Eisenstein called it the greatest film ever made. The profits from the movie financed the construction of the Walt Disney Studios in Burbank, where the animated features of the coming decade began production.

Warner Brothers was consistently the underperformer among the Big Five studios, but it dominated in animation. While Walt Disney’s animation studio became less prolific in its production of shorts as it shifted to the increasingly complicated and labor-intensive production of feature-length cartoons, Warner Brothers continued to focus solely on short cartoons, mainly the Merrie Melodies (1931–69) and Looney Tunes (1930–69) series.
Originally designed to showcase music from the music publishers and record companies that Warner Brothers owned, the series were overhauled after 1935, when the character Porky Pig became a breakout star. Porky, as well as Daffy Duck (introduced in 1937), were Warner Brothers’ most popular cartoon characters until Bugs Bunny was introduced in the 1940s, and their cartoons rivaled Disney’s Mickey Mouse and Donald Duck in popularity. The plethora of ducks was no coincidence, just as the name Looney Tunes itself was a play on Disney’s shorts series Silly Symphonies.


However, the Hays Code affected even animated films. Six of the “Censored Eleven” Warner Brothers cartoons dating from the 1930s were withdrawn almost completely from circulation because of offensive content. While many cartoons later withdrawn from circulation featured characters smoking cigarettes, drinking alcohol, or exhibiting a level of violence that later generations of parents found unacceptable, “Hittin’ the Trail for Hallelujah Land” (1931), “Sunday Go to Meetin’ Time” (1936), “Clean Pastures” (1937), “Uncle Tom’s Bungalow” (1937), “Jungle Jitters” (1938), and “The Isle of Pingo Pongo” (1938) all feature outrageous racial caricatures.

In the 1930s, American race relations were quite strained, and a resurgent Ku Klux Klan, inspired in large part by Griffith’s Birth of a Nation (1915), enjoyed mainstream acceptance as a fraternal organization in some parts of the south and midwest. “Sundown towns,” in which local laws, restrictive covenants, and sometimes lynch mobs forbade black people from remaining in town after dark, were still found throughout the country, not just in the segregated south. Movies avoided portraying African American characters too favorably or honorably, or they risked being boycotted by southern movie theaters. Racist caricatures were not just accepted in movies—they were expected. While the civil rights movement did not gain steam until the 1950s, caricatures of African Americans became much less common as World War II began at the end of the 1930s, partly because they were displaced by anti-Japanese caricatures.

Bill Kte’pi
Independent Scholar

See Also: Film, 1940s; Film, Silent; Radio: 1920 to 1930; Radio: 1931 to 1950.

Further Readings
Barrier, Michael. Hollywood Cartoons. New York: Oxford University Press, 1999.
Maltin, Leonard. Of Mice and Magic. New York: Penguin, 1987.
Wayne, Jane Ellen. The Golden Girls of MGM: Greta Garbo, Lana Turner, Judy Garland, Ava Gardner, Grace Kelly, and Others. Cambridge, MA: Da Capo Press, 2003.


Film, 1940s
The 1940s were solidly in the middle of the golden age of Hollywood, when the studio system that grew out of the early sound era still dominated the production of motion pictures and most star talent—actors, writers, and directors—were held under long-term contracts to studios that guided their careers. These studios, the largest of which were MGM, RKO, Fox, Warner Brothers, and Paramount, not only owned the film production facilities, but also the distribution networks and theater chains, which gave them the kind of vertical integration enjoyed by the industrial tycoons of the turn of the century. While the golden age persisted through the 1950s, and Hollywood remained dominated by elements of the studio system until the late 1960s, the 1940s were the last decade in which all the elements were in place. Moving forward, the studios became less powerful as the result of antitrust court decisions that undid some of that vertical integration, and soon the movies would be forced to compete with television, which posed a threat greater than radio and theater ever did. Some film historians date the end of the golden age to either the late 1940s, when the studio system first lost traction, or the mid-1950s, when television became widespread.

In this era, the movies became one of the principal sources of entertainment for American families. Television did not become commonplace until the end of the following decade, and remained in black and white for years, while color films quickly became the default choice for family-oriented movies after the late-1930s successes of The Wizard of Oz and Snow White. Live entertainment was not nearly as common as it had been before the advent of the talkies; vaudeville was on its last legs, and live music had lost some of its popularity to the prevalence of radio and record players.
The introduction in 1941 of improved speakers also contributed to the popularity of the drive-in movie theater, which had been introduced in 1933 in New Jersey, but did not catch on nationally until this decade. Drive-in theaters appealed both to families, who could let small children sleep in the back seat, and to young teenage couples on dates.

Film Noir
In the 1930s, most crime movies had been gangster films, but in the 1940s, a leaner, meaner, and

smaller-scale crime drama developed, characterized by a tone somewhere between cynicism and world-weary resignation. French film critics called it film noir, though the term was not well known in Hollywood until the 1970s, long after the period had ended. Instead, film noir pictures—from about 1940 to about 1958—were usually grouped together with melodramas, which similarly focused on small-scale conflicts involving a handful of people, rather than the mass casualties of a Jimmy Cagney gangster picture. Noir revolves around loners, more often than not—men who do not trust or are not trusted by the world, and who betray or are betrayed by strong female characters with personal agendas. One of the best examples is Jacques Tourneur’s Out of the Past (1947), largely told in flashback, in which Robert Mitchum’s character reveals to his girlfriend that his past has caught up to him: he has been found by a crime lord played by Kirk Douglas, whose girlfriend (Jane Greer) Mitchum had been hired to find, and had fallen in love with instead, before she betrayed both of them. Like the pre-code movies of the early 1930s, noir movies were frank in their sexuality and their approach to morality, but avoided both the titillation of many of those movies (which the Hays Code would not have permitted) and the moralizing. Barbara Stanwyck, who had starred in many pre-code “bad girl” pictures, played Double Indemnity’s femme fatale.

Family Films of the 1940s
Noir was an important trend in 1940s film, reflecting the uncertainty of the country after the domestic struggles of the Great Depression, the horrors of World War II, and questions about human nature and morality raised by Nazi Germany. However, 1940s film was not dominated by such darkness. Indeed, the decade also saw a large chunk of the career of Frank Capra, whose films aimed to capture in narrative the strength of the human spirit.
It’s a Wonderful Life (1946) was not a great success in its time, despite its Academy Award nominations, but it was rediscovered years later thanks to television networks hungry for thematically appropriate Christmas programming.

An even more hopeful movie is Preston Sturges’s Sullivan’s Travels (1941). Sullivan’s Travels is simultaneously screwier than Sturges’s earlier movies—featuring at one point a sped-up chase scene in which Joel McCrea’s titular character is in



a souped-up go-cart—and more heartfelt. Sullivan is a film director who makes light diversions and is determined to make something “real,” a movie that speaks to the human condition. His quest to seek real human experiences to draw from accidentally lands him on a prison chain gang for his own murder. He soon finds that the only respite that these prisoners get from their horrible lives—a respite shared by the black church that hosts them for this occasion—is an occasional motion picture, and that the cartoons and screwball comedies are what they love most. They do not need to be reminded of life’s problems; their laughter assuages their pain. Though prisoners were the audience depicted, they seem to stand in for audiences everywhere, particularly in the aftermath of the Great Depression, in Sturges’s musing on the role of film in American life.

Animated Films
In the late 1930s, Walt Disney had produced the first full-length cel-animated feature, Snow White and the Seven Dwarfs, which was the most profitable movie in history up to that point, and he used the profits to build a new animation studio in Burbank. From this studio came a steady stream of animated features, all of which are today considered classics, and which were vital family entertainment during a decade when young men left for war.

In later decades, most Disney features followed the example set by Snow White, taking fairy tales or familiar children’s stories as their source material and incorporating comic relief, funny animal companions, and music. In the 1940s, however, in part because Walt still had direct involvement in so much of the studio’s work, things were more experimental. Pinocchio (1940) struck a less serious tone than Snow White (which had incorporated imagery inspired by German expressionist films) or the later fairy tale movies, which usually paired a serious main story with comic relief supporting characters. In Pinocchio, the talking animal character, Jiminy Cricket, is more somber.
The great achievement for Disney in 1940 was Fantasia. Consisting of eight unrelated pieces of animation, each varying in style and narrative but all set to classical music, Fantasia was only Disney’s third feature, yet it remains the studio’s most ambitious. Only a few short years after Disney revolutionized film with the first animated feature, it launched Fantasia as the first commercial film in stereophonic sound. It may


have been too ambitious: the special sound reproduction technology required equally special equipment in order to screen the film, and an attempt to tour the country as a roadshow made very little money due to the constraints imposed by World War II.

In addition to anthology films like Fantasia and Saludos Amigos, two major features were released during World War II for which production had begun in the prewar years, when labor was still available. Dumbo (1941) and Bambi (1942) both focused on animal characters, rather than humans, and were adapted from recently published children’s books. Both were successful, though neither was the hit that Snow White had been. In the next decade, Disney returned to the fairy tale well for Cinderella, setting the tone for much of the company’s future product.

While Disney pioneered animated features, the golden age of cartoons mainly continued in the form of theatrical shorts. Having begun in the 1930s, the golden age in the 1940s saw fewer studios overall putting out cartoons on a regular basis, but a greater number of studios competing at the upper echelons of quality with Disney and Warner Brothers. The Fleischer Brothers, who had pioneered sound cartoons and had been known for the characters of Betty Boop and Popeye in the 1930s, sold their studio to Paramount in 1941, but continued to oversee production, and soon began a series of Superman cartoons that developed a following as loyal as that of the comic books. Labor-intensive at a time of diminishing entertainment budgets, the Superman cartoons were discontinued in 1943 when the Fleischers left Paramount.

Animation in the 1930s had been dominated by Disney and Warner Brothers, and in 1940, Warner Brothers introduced Bugs Bunny, who proved an even more popular and enduring character than Porky Pig and Daffy Duck, who had preceded him.
The Merrie Melodies series, which had originated in order to popularize music owned by the company and had been used mainly for one-shot cartoons, was soon dominated by Bugs, and two years after his introduction, Warner Brothers finally overtook Disney in the profits and popularity of animated shorts. While both studios continued to produce shorts during the war years, Disney’s attention was divided between the shorts and the labor needs of features, while Warner Brothers produced some of its best work during the war. Even after the departure of Tex Avery, whose genius as an animator many consider


Cary Grant in 1941. Known for his transatlantic accent, debonair demeanor, and “dashing good looks,” Grant is considered one of classic Hollywood’s definitive leading men.

second to none, Warner Brothers did not slow down, thanks to a deep talent pool that included Friz Freleng, Bob Clampett, and Chuck Jones. Jones reinvented the 1930s character Egghead as Elmer Fudd, who would be the main nemesis of both Bugs Bunny and Daffy Duck throughout the rest of the studio’s history. The 1940s also saw the introduction of many of Warner Brothers’ best-known characters: Foghorn Leghorn (1946, created by Bob McKimson), the Goofy Gophers (1947, created by Bob Clampett), Marvin the Martian (1948, created by Chuck Jones), Pepe Le Pew (1945, created by Jones and Michael Maltese), Wile E. Coyote and the Road Runner (1949, created by Chuck Jones), Sylvester the Cat (1945, created by Friz Freleng and Bob Clampett), and Tweety Bird (1942, created by Freleng and Clampett). Throughout the 1940s, at least two and usually three or four Warner Brothers shorts were released every month. Contrast this with MGM’s output, for instance, which amounted to fewer than a dozen cartoons most years.

MGM was the home of Tex Avery, though, and it made up in quality for what it lacked in quantity. The 1940s saw Avery produce some of his best work, after creating Bugs Bunny and Daffy Duck for Warners. He came to MGM in part because he felt stifled at Warners, and the fate of his creations after he left—Daffy Duck became more of a mook than a maniac—bears that out. His first MGM cartoon, “Blitz Wolf,” parodied Adolf Hitler, and was nominated for an Academy Award in 1942. The following year, he produced the first of his several “Red” cartoons (“Red Hot Riding Hood”), which pushed the boundaries of sexuality in cartoons, and perhaps thanks to his expressionistic style, were able to do so nearly as much as the pre-code Betty Boop cartoons had done. At the end of the decade, his conceptual cartoon “The House of Tomorrow” (1949) was the first of a number of mockumentary “Tomorrow” shorts that combined satire of obsessive consumerism with slapstick comedy. Avery’s MGM years might be best remembered for Droopy, the sad-faced dog introduced in 1943, who became one of MGM’s best-known characters (despite going unnamed for his first four cartoons).

Droopy’s popularity was exceeded only by that of Tom and Jerry, MGM’s major success in this decade outside of Avery’s work. Introduced in 1940 by William Hanna and Joseph Barbera, the feuding cat and mouse starred in 114 theatrical shorts for MGM in the 1940s and 1950s before the MGM cartoon studio was shut down, and gave Hanna and Barbera the clout to move on to create their television animation empire. That television animation, years down the line, would rely on a technique called limited animation, in which many elements of the frame remain static (often with minimally detailed backgrounds), so that fewer hours of labor are necessary to complete a cartoon. In television, this would be necessary in order to keep up with the demanding production schedule and to fill large chunks of time.
In theatrical cartoons, however, it was simply a way to save money. The UPA studio (United Productions of America), founded in 1943 by animators who had left Disney during a strike, used limited animation for many of its cartoons, and in 1948 it became the theatrical cartoon division of Columbia Pictures, one of the “Little Three” studios. UPA is best remembered today for two contributions: the Mr. Magoo series, which began as theatrical cartoons and later




became a popular television series; and the Oscar-winning Gerald McBoing-Boing, about the boy who spoke in sound effects. Bill Kte’pi Independent Scholar See Also: Film, 1930s; Film, 1950s; Radio: 1931 to 1950. Further Readings Barrier, Michael. Hollywood Cartoons. New York: Oxford University Press, 1999. Bordwell, David, Janet Staiger, and Kristin Thompson. The Classical Hollywood Cinema. New York: Columbia University Press, 1985. Maltin, Leonard. Of Mice and Magic. New York: Penguin, 1987. Wayne, Jane Ellen. The Golden Girls of MGM: Greta Garbo, Lana Turner, Judy Garland, Ava Gardner, Grace Kelly, and Others. Cambridge, MA: Da Capo Press, 2003.

Film, 1950s

The 1950s have been heralded as a golden age in American history, a time of prosperity and happiness after a long war. This utopian view of life was central to the films of the 1950s, which tended to include a problem that was permanently resolved by the end of the film. In addition, families in films during this decade were typically portrayed as the ideal nuclear family, defined as a patriarchal father, stay-at-home mother, and multiple children. Families in 1950s films also tended to be depicted as white and middle class, and typically lived in an idyllic suburban setting. The problems that families faced ranged from difficult issues that threatened the structure of the family, to minor problems that were resolved relatively easily. Of the top 10 family films of the 1950s, five are Disney cartoons: Lady and the Tramp, Cinderella, Sleeping Beauty, Peter Pan, and Alice in Wonderland. These films encompass themes that were present in family films of this decade, including strong gender roles for males and females, broken families trying to reach the nuclear ideal, and


main characters in search of their identity within the family. Lady and the Tramp uses traditional gender norms as strict guidelines for the main characters, who are both dogs. Lady is told to behave in a calm, consistent, and polite manner, and she is punished when she behaves in an unladylike manner, such as when she leaves the house and is taken to the pound (or arrested, in human terms). The actions of Tramp are accepted (including his promiscuous manner with other dogs) because he is male and single, which mirrors traditional societal expectations that males can be active and unrestrained until the point of marriage. However, Tramp has to give up his bachelor ways once he chooses to settle down with Lady and start a family. Cinderella is also constrained by strict traditional gender norms because she is allowed only to cook, clean, and sew until her prince finds her and sweeps her off her feet. Disney movies have been notorious for depicting broken families in search of assimilating to the nuclear ideal of a strong father, caring mother, and happy children. Three of the five top Disney films of the 1950s describe families in which one or both of the parents is dead or missing. Cinderella’s mother dies when she is very young and her father quickly remarries so that Cinderella has a mother figure; but he also dies at a young age, leaving Cinderella to be raised by her evil stepmother, who forces her into a life of servitude. The image of an evil stepmother is consistent in many movies because of the belief that the only happy family is the original, nuclear family. Similarly, Peter Pan is the story of a young boy who ends up alone after his neglectful parents do not realize that his stroller has rolled off. However, he realizes that he wants to become a part of a family when he meets Wendy, John, and Michael Darling, who choose to leave the idyllic Never Land because they want to return to their family.
The conflict present within the portrayal of these characters is partially due to their desire to find an identity within the family. Alice in Wonderland is the story of a young girl with an active imagination who is trying to avoid the Queen of Hearts, the epitome of an abusive, controlling motherly figure. The premise is that Alice engages in her imaginative adventures to avoid real life, but as is usually the case, real-life issues present themselves in a fictional manner when she is in a dream-like state. Lady and the Tramp also demonstrates the battles of an individual trying to find an identity within her



family, as Lady was the adopted child (dog) who felt neglected when her parents had a biological child (the new baby) together. Lady’s identity crisis plays out as she rebels against her parents’ rules, but by the end of the movie she realizes that her parents still love her even after having a child. Similarly, Cinderella selflessly tries to get on her stepmother’s good side, even after constant abuse and neglect, because she wants a caring mother. The other top-grossing films of the 1950s also demonstrate family themes. The King and I is a film about a single father. A common theme in movies about households headed by a single father is for the father to search for a replacement mother for his children. The trend in the 1950s was to depict fathers as the dominant, emotionless provider for the family, which was balanced by a mother who was submissive, caring, and nurturing. As a result, any family without a mother or father would be in desperate need of the opposite gender parent to provide balance. This gender dynamic was also apparent in Seven Brides for Seven Brothers. Although none of the brothers had children as they searched for brides, their main goal was to find a feminine companion to balance their masculinity. Old Yeller may be the best example of how families were portrayed in films during the 1950s. This story is about a hard-working white family, slightly down on its luck, consisting of a father, a caring mother, and two sons. The older son, Travis, emulates his father, whereas the younger son, Arliss, is caring toward people and animals. In many films from the 1950s, the father’s role as breadwinner means that he is frequently absent from activities that happen within the house. This is also the case in Old Yeller, as the boys’ father leaves the house for an extended period of time in order to make money to sustain the family.
As a result, Travis is thrust into the position of head of the household because he is the oldest and he needs to take care of things around the house that only a man can do. Another theme within films can be seen here, when Travis has to find the courage to maintain the household in his father’s absence. The defining moment of the movie comes when Travis has to euthanize the family dog, Yeller, because he has been infected by a rabid wolf. The day after Travis puts down Yeller, his father returns and offers some words of wisdom about being a man, having to make tough decisions, and focusing on the good in the world. The theme here

is the emphasis on father–son relationships and how a boy becomes a man. While the mother in Old Yeller has more time onscreen than the father, her relationship with Travis is not emphasized because he is taking over the masculine duties on the farm. However, the relationship between the mother and Arliss is highlighted because she caters to his love of animals, and even lets him help her in the kitchen. Because Arliss is the younger son, he does not need to conform so strictly to traditional male expectations, at least not yet. Alternatively, father-daughter relationships were rarely a focus in 1950s films. Those that did address a father-daughter relationship tended to display a highly sexual undertone in which the daughter was sexually attracted to the father. This was supported by Freudian theory, which was popular at the time. Freud’s psychoanalytic theory also helped explain the father-son relationships that were a common theme in movies, because sons wanted to be like their fathers and were jealous of their fathers at the same time. Freud’s Oedipal complex claims that a boy experiences a stage in which he wants to emulate his father, but he is also jealous of the relationship between his father and his mother, whom the boy also loves. This theory can be seen in most movies of the 1950s, but none more so than in the iconic Rebel Without a Cause. James Dean plays Jim in Rebel Without a Cause, the first movie of its kind to focus on teenage angst and rebellion against parents. Jim’s father is a strong man in the workplace but is unhappy in the home because he is married to a woman who does not acknowledge his role as the patriarch of the household, a theme that became more common in films in the later 1950s. Jim begins acting out and getting into trouble in and out of school as a result of his troubled home life.
Freud’s theory explains that Jim wants to be like his father, but he recognizes his father’s weaknesses, which he does not want to emulate, and so he rebels against becoming his father. As the film progresses, the audience sees Jim alternate between rebellion and seeking comfort from his parents, while finding happiness in neither realm. At the end of the movie, after Jim’s best friend dies from a gunshot wound, Jim’s father promises his son that he will change and become the strong father figure that Jim needs. Thus, the movie ends on this happy and hopeful note. This,




too, is a common theme in family films of the 1950s: problems are resolved by the end so the happy family can continue their lives free from trouble. Despite Hollywood’s depiction of these themes in 1950s films—the nuclear family as the ideal family; a masculine father, feminine mother, and devoted children; identities based on one’s role and acceptance within a family; father-son relationships as the main storyline; and a happy ending—families in reality could seldom live up to such utopian ideals. Andrea L. Roach University of Missouri See Also: Adolescent and Teen Rebellion; Breadwinner-Homemaker Families; Cultural Stereotypes in Media; Disney/Disneyland/Amusement Parks; Freud, Sigmund; Gender Roles in Mass Media; Nuclear Family; Single-Parent Families. Further Readings Bruzzi, S. Bringing Up Daddy: Fatherhood and Masculinity in Post-War Hollywood. London: British Film Institute, 2005. Coontz, S. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books, 1992. Pomerance, M., ed. American Cinema of the 1950s: Themes and Variations. New Brunswick, NJ: Rutgers University Press, 2005.

Film, 1960s

The way families were depicted in films dramatically changed in the 1960s and reflected more variety than in previous decades. One reason for this was that production companies were releasing more films per year than they had in previous decades, which allowed for a greater range and variety in the subjects they tackled. Changes that occurred over the 1960s for families in film included a shift from the happy patriarchal family of the 1950s to the feminist view of the unhappy patriarch and matriarch; a new emphasis on nuclear families as dysfunctional, but which could be fixed by a well-meaning outsider; stories about families of various races and ethnicities; and the idea that oppression could be lifted by the type of


positivity and hopefulness popular in the 1960s culture of free love and liberation.

Themes

The early 1960s saw feminism becoming more popular as more books were published on the subject and more women were obtaining careers outside the home. This is reflected in films from the early 1960s, which depict the patriarchal family as dysfunctional, mainly due to the fathers’ unhappiness at home. One example can be seen in the film The Apartment, which won the Academy Award for Best Picture in 1960. The film centers on C. C. Baxter, a man who is unhappy at work and at home and is engaging in an extramarital affair with a colleague from work. Alternatively, women in these films are usually depicted as unhappy because they are being oppressed by their husbands. The result of this dynamic is usually that the husbands and wives become unfaithful to each other. The problem then resolves itself when the couples realize they care deeply for each other and do not want the family to fall apart. In The Apartment, Baxter is the only unfaithful partner and his wife kicks him out of the house when she finds out about his infidelity. Baxter realizes at the last moment that he does not want his marriage to end but his wife will not take him back. This varied slightly from similar films, which had happy endings, and is a more direct picture of the feminist viewpoint that males were causing the destruction of the American family. However, films such as this were not well received by audiences, and there was a sharp decrease in the number of father-headed family films that were produced in the later 1960s as a result. A theme that appeared in movies as a result of the feminist movement of the 1960s was the unhappy domestic woman. This theme presented itself most forcefully in the movie Rosemary’s Baby, which is the story of a woman who becomes pregnant and later gives birth to a satanic child.
The movie depicts Rosemary as strongly oppressed by her husband and her male gynecologist, both of whom are members of a satanic organization. Rosemary’s life is domestic; she rarely leaves her New York City apartment and because of this she is bored and restless. The movie begins with Rosemary’s goal to have a child with her husband and ends with her unwillingly giving birth to a physical form of the devil, resenting her domestic life and



her role as wife and mother. This reflects feminist thought in that feminism is about freeing women from their domestic roles. Films of the 1960s often focused on the nuclear family and addressed issues that were seen as common but that needed to be fixed. The most prevalent of these issues were emotionally and psychologically absent parents. Mary Poppins was released in the mid-1960s and focused on a nuclear family with two children, a boy and a girl, who have a strait-laced, workaholic father and a mother who is an active feminist, which frequently takes her away from the home. Both parents in this movie are well meaning but they do not spend enough quality time with their children. Mary Poppins becomes the children’s nanny and ends up teaching the whole family the value of spending time with and caring for each other. Once she reunites the family, her job is done and she leaves to attend to another family somewhere else that needs her expertise. This was a theme within dysfunctional family films of the 1960s—an outsider (such as a nanny, teacher, or mentor) comes in and points out to the parents what they are doing wrong. Once the family changes their ways, everyone is happy. In Mary Poppins, the father realizes that his rules are too rigid and he is spending too much time at work, whereas the mother realizes that her activism is preventing her from spending time with her children. The movie ends with the family flying a kite together, which the father has fixed and which contains a tail that was the sash the mother had worn at a rally. Another direction that family films took in the 1960s was toward single-father-headed households. Some of these films depicted the type of single-father families that were seen in films from the 1950s, where the goal of the father, and ultimately of the film, was to find a replacement mother for the one who had left or died.
However, other films such as To Kill a Mockingbird depict single fathers as caring and capable, without an overwhelming desire to find a replacement mother. To Kill a Mockingbird is a film known for more than just its depiction of family, but family is also one of the defining characteristics of the movie because of Atticus Finch, a small-town attorney who is also the embodiment of a caring and wise parent who is also single. Atticus treats his son and daughter, Jem and Scout, with the same respect that he gives to everyone. Atticus has rules and expectations for his children, but he also

shows them compassion and empathy. The portrayal of Atticus in To Kill a Mockingbird was seen as a further sign of the demise of the strict, uncaring patriarchy of the 1950s.

Diversity

In previous decades, films focused mainly on white actors and actresses. In the 1960s, several popular films were produced about other ethnic groups, with varying levels of acceptance. A Raisin in the Sun was the first film of its kind to show a limited view of African American life in the United States. The film received rave reviews for its acting, but it did not do well at the box office as a result of audiences and critics not accepting the storyline. A Raisin in the Sun was about an African American family whose father has recently died, leaving behind a widow, daughter, and son who has a wife and child of his own. The family lives in a tiny city apartment, but they receive an insurance policy payout that gives them enough money to purchase a house in a working-class white neighborhood. The film centers on the family after the loss of the father but also delves into race issues as the mother tries to purchase her dream house and experiences resistance both from the neighborhood and her son. A Raisin in the Sun was one of the first films to move past stereotypes to show the essence of a family. However, the film did poorly because it depicted a family trying to integrate into white America, which was still a somewhat political topic in 1961. Films with ethnic main characters that did well were usually stories of assimilation, where the characters become more white or Americanized to fit in, as was seen in West Side Story. West Side Story is a retelling of the classic Shakespeare play Romeo and Juliet, with a white American gang and a Puerto Rican gang standing in for the feuding Capulets and Montagues. Tony, a member of the American gang the Jets, falls in love with Maria, a relative of a member of the Puerto Rican gang the Sharks.
Neither side wants the two to be together because of the racial and cultural differences, but Tony and Maria try to find a way. The movie ends after Tony is shot and dies in Maria’s arms. Maria does not die, as Juliet does in the original play, but some might say she dies a cultural death because she embraces American culture and rejects the Puerto Rican culture that led to the death of her true love. This is an example of how movies that feature assimilation fared better at the box office and with critics




than movies in which the characters tried to integrate with mainstream white culture. A final theme of note in 1960s films is their overall optimism. A movie that embodies this theme is The Sound of Music, which dramatizes the story of the Von Trapp family in Austria on the eve of World War II. Maria is the governess employed to take care of the Von Trapp children, and she becomes the love interest of the children’s father, a widower. After escaping the Nazis in Austria, the family is seen walking to their freedom in Switzerland. Many family films in the 1960s feature families who triumph over oppression, like the Von Trapps, by the end of the movie. These films reinforce the 1960s culture of free love and the belief that anyone could be liberated from various forms of oppression. Andrea Roach University of Missouri See Also: African American Families; Assimilation; Civil Rights Movement; Cultural Stereotypes in Media; Feminism; Feminist Theory; Nuclear Family; Single-Parent Families. Further Readings Bruzzi, S. Bringing Up Daddy: Fatherhood and Masculinity in Post-War Hollywood. London: British Film Institute, 2005. Grant, B. K., ed. American Cinema of the 1960s: Themes and Variations. New Brunswick, NJ: Rutgers University Press, 2008. Maltin, Leonard. Of Mice and Magic. New York: Penguin, 1987.

Film, 1970s

Film in the 1970s was dominated by New Hollywood, a name for both a period and a movement. The studio system that had governed the golden age of Hollywood since the advent of the talkies had gradually declined after the 1940s, and by the end of the 1950s, the vertical integration that had put film production, distribution, and exhibition (theaters) in the same hands had ended. Even before the advent of VCRs and cable television, the 1970s were a time of more movie options for many Americans.


More independent theaters showed films made outside of the Hollywood machine. More revival theaters showed older movies. More foreign movies were available. Grindhouse theaters thrived on B-movies, horror movies, exploitation films, and the more accessible foreign films, especially kung fu movies and Asian gangster pictures. Pornography even achieved something like mainstream acceptance with movies like Deep Throat entering the national consciousness.

New Hollywood

At the business end, New Hollywood was defined by the loosening control of studios over the film business. Time and again, young directors who had an early success were given something akin to autonomy over their next feature, with the studios hoping to capture lightning in a bottle—a significant change from the classical Hollywood period, when many movie ideas originated at the executive level before they were turned over to a director and team of writers. While the star vehicle model of filmmaking—in which a movie is constructed around its intended star, with a script tailored to his or her strengths in the hopes of maximizing the box office effect of that star’s popularity—has never gone away, the New Hollywood period became known as a director-driven period. At the creative end, the easiest way to characterize the New Hollywood generation of filmmakers is as the first generation of film school graduates. These directors and screenwriters had not only been trained to approach film and genre with a critical eye; they were also unusually film-literate, familiar with foreign films that often escaped the notice of the general public (and most of the film industry) as well as with film criticism. It was the New Hollywood generation that made Hollywood aware of the French term film noir, for instance, referring to a strain of crime movies produced in Hollywood from about 1940 to about 1958.
The Vietnam War was a frequent topic of the New Hollywood movies, with Francis Ford Coppola’s Apocalypse Now the most obvious example. While Apocalypse Now translated Joseph Conrad’s novella Heart of Darkness to a Vietnam War setting, other movies interrogated the impact of the war on American families and the soldiers who returned from combat duty. Michael Cimino’s The Deer Hunter, one of the best and most important films of



the decade, focuses on Pittsburgh-area steelworkers and their lives before, during, and after the war. Hal Ashby’s Coming Home portrays a love triangle between a character played by Jane Fonda, a disabled Vietnam vet played by Jon Voight, and her soldier husband, played by Bruce Dern. The 1970s were a decade in which, even more than in the 1960s, the Leave It to Beaver model of the American family no longer seemed realistic, and many films addressed these changes head on. Though the title character in Martin Scorsese’s Alice Doesn’t Live Here Anymore is not a divorcee, she responds to being widowed with much the same combination of relief and trepidation, and relocates with her son to California to pursue the singing career that she had put on hold for the sake of her marriage. Though Peter Bogdanovich’s The Last Picture Show is set in 1952, with its sexual politics, frank talk of abortion, and depiction of unhappy marriages as the most common marriages, it seems at times as though the point is to assure 1970s audiences that the moral turmoil they find themselves in has been here all along. In The Sugarland Express, the feature film debut of director Steven Spielberg, Goldie Hawn and William Atherton (in one of his few lead roles) play a couple desperate to keep their family together despite Atherton’s imprisonment and the likelihood of their son being placed in foster care (at a time when many real-life mothers were pressured by social workers to put their children up for adoption for just such reasons). Toward the end of the decade, the multi-Academy-Award-winning Kramer vs. Kramer explored the impact of divorce when a character played by Meryl Streep leaves Dustin Hoffman’s character, forcing him to care for their young son on his own. The most famous family of 1970s film is the Corleones, the Italian American crime family of Francis Ford Coppola’s The Godfather.
Considered by many critics and fans alike to be the greatest American film, it took a whole new approach to the gangster picture, in large part by focusing on the family part of “Mafia family.” Here, it is not just the crime and the conflicts with other criminal groups that are important but also the relationships of father to son, brother to brother, and husband to wife—not apart from the crime plot but as a part of that plot, as generational tensions impact the way that the Corleone criminal syndicate is run, and Michael Corleone’s relationships with his blood family and

his new wife tear him in opposite directions as he takes the helm of the family in the wake of his father’s death. Many of the films of the 1970s, even those not directed by New Hollywood directors, reflected the disillusionment of the time, as Vietnam War casualties mounted, the optimism of the 1960s waned, and the culture of narcissism was condemned by references to the 1970s as the “Me Decade.” Some of these films included Sidney Lumet’s Network (a cynical satire of television news and its role in society, angrier, dirtier, and more pessimistic than Frank Capra’s Meet John Doe), Martin Scorsese’s Taxi Driver (about an alienated Vietnam veteran and his struggle with life in New York City), and horror director Wes Craven’s early movies Last House on the Left and The Hills Have Eyes, each of which explores helplessness and revenge with a level of violence and menace greater than his later commercial hits like A Nightmare on Elm Street. Tobe Hooper’s The Texas Chainsaw Massacre was loosely inspired by a real-life cannibal serial killer, but portrayed a whole family of cannibals who were at least partially assimilated into rural American society. The Omen and The Exorcist portrayed more urban, upper-class horror, playing on fears of Satan, even as religious fervor in the country ebbed. Even Steven Spielberg’s Close Encounters of the Third Kind, overall a positive and uplifting movie, played on detente-era anxieties and Me Decade disillusionment in its treatment of the mystery of alien encounters and the effect that a religious-like experience has in disrupting Richard Dreyfuss’s otherwise normal working-class American family.

New Morals

It would be easy to look at the 1970s and conclude that this disenchantment infected the whole of popular culture. The lines between adult and children’s entertainment were blurred and crossed: former Terrytoons animator Ralph Bakshi released his feature film debut in 1972, Fritz the Cat, an X-rated slice-of-life satire.
Producer Bill Osco oversaw X-rated softcore pornographic films, released to mainstream theaters, satirizing Flash Gordon (Flesh Gordon, 1974) and Alice in Wonderland (1976). Another theme emerges upon an examination of the decade: the dissatisfaction and even dangerousness of the nation’s youth. While Saturday Night Fever’s legacy has been reduced to star John Travolta’s strut and a



handful of dance scenes, the movie portrays disco as a distraction from the desperation of a life that includes race-baiting, gang violence, gang rape and date rape, unexpected pregnancy and the possibility of abortion, and suicide. Jonathan Kaplan’s 1979 Over the Edge is one of the bitterest coming-of-age films ever released, depicting teenagers turning to violence, not as a form of rebellion against tyrannical parents or an unjust society, but simply because they live in carefully designed suburban communities where there is nothing better to do. Even The Bad News Bears, a children’s baseball movie, featured an unprecedented level of swearing, references to underage sex, and an alcoholic lead in Walter Matthau’s coach. The anomie of New Hollywood eventually imploded. As the 1980s dawned, directors who had made money for the studios in the past by following their visions were given more and more money to pursue passion projects, which either had little commercial potential, or went so far over budget that even a hit would have lost money. 1980’s Heaven’s Gate (directed by Cimino) is the

Goldie Hawn starred in a string of above-average and successful films in the 1970s. This photograph was taken in 1978 to promote her CBS show, Goldie.



best-known example because its production was so lengthy and laborious and its failure bankrupted the venerable United Artists (a studio founded in the silent era). Even Steven Spielberg—who survived the end of New Hollywood unscathed—had a famous misfire with the World War II homefront comedy 1941. Studios eventually took control back from the directors, and the 1980s and later decades were dominated by carefully tailored blockbusters and tentpole releases. Even that strategy, though, was based on lessons learned in the 1970s, as a result of three of the decade’s most popular movies: Spielberg’s Jaws (1975), George Lucas’s Star Wars (1977), and Ridley Scott’s Alien (1979). Each was a blockbuster success, relied heavily on special effects, and provided a blueprint that guided numerous sequels, which were still a novelty in the 1970s. There had always been huge successes in Hollywood but in constant dollars, these new blockbusters did not quite approach the box office success of past hits like Gone With the Wind or Snow White and the Seven Dwarfs. However, in establishing franchises, they represented a new kind of success, and in their level of merchandising (especially in the case of Star Wars), they created revenue streams never before exploited to such an extent. Movie properties had been licensed to toy makers, lunch-box manufacturers, and comic-book publishers before, but the extent to which Star Wars was licensed, and the success of so many of those licenses, was unprecedented. Going forward, children’s or family movies and merchandising would go hand in hand, and children’s toys (lunch boxes, clothing, and backpacks) would increasingly be branded.

Family Movies

The 1970s saw children’s movies produced on a considerable scale, even before the advent of the direct-to-video market that would be responsible for so many children’s movies in later decades.
While Disney’s animated output in the decade is considered subpar relative to its earlier work, it produced a staggering number of live-action films for children and families, many of which are considered classics by Generation X. From 1970 to 1979, Disney released 45 live-action films, including Don Knotts and Tim Conway in two Apple Dumpling Gang movies, several movies in the Herbie series, the original Escape



to Witch Mountain and its sequels, and up-and-coming star Jodie Foster in Freaky Friday, an intergenerational body-switching comedy that has been duplicated and remade numerous times. The advent of television had a huge effect on the film industry and was responsible in part for the shortening of the filmgoing experience, as movies were less and less likely to be preceded by newsreels and shorts. By the 1970s, for instance, no animation studio was producing theatrical shorts—a staple for most of cinematic history up until the previous decade—on a regular basis. Disney had turned most of its attention to features; other studios had either ceased operations entirely or had been repurposed as television animation studios. Hanna-Barbera, the most prolific animation studio of the decade, was founded by two animators who started out creating Tom and Jerry for MGM in the 1940s. Bill Kte’pi Independent Scholar See Also: Film, 1960s; Film, 1980s; Me Decade; Television, 1970s. Further Readings Berliner, Todd. Hollywood Incoherent: Narration in Seventies Cinema. Austin: University of Texas Press, 2010. Biskind, Peter. Easy Riders, Raging Bulls: How the Sex, Drugs, and Rock ’n’ Roll Generation Saved Hollywood. New York: Simon & Schuster, 1998. Harris, Mark. Pictures at a Revolution: Five Movies and the Birth of the New Hollywood. New York: Penguin, 2008.

Film, 1980s

The 1980s was a time of conservative beliefs and concern for the family as an institution; consequently, these values and apprehensions were often mirrored in the cinema of this time period. The presidency of Ronald Reagan ushered in an era of right-wing ideals, reinforced by the public's worry about divorce, which would reach peak rates in this decade. Movies like Parenthood (1989) celebrated the nuclear family, which consisted of a father, a mother, and their

biological children. This decade also saw an increasing presence of mothers in the workplace and the solidification of the women's civil rights movement. Films would showcase the working mother role, at the same time endorsing women's attempts to achieve the nuclear family ideal (e.g., Look Who's Talking, 1989; Baby Boom, 1987). In terms of identifying the source of family breakdown, at least in cinema, the United States shifted most of the blame to fathers. Movies like Rain Man (1988), Fatal Attraction (1987), and The Shining (1980) implied that fathers, through their absence, failure, or betrayal, could be responsible for the problems of the family unit. In a slightly different vein, some films would present fathers as somewhat incompetent, but loving (Three Men and a Baby, 1987). Either way, if the family was shown as dysfunctional, it was usually in some part the father's fault. Most of the movies aimed at the younger generation involved adventure. Films like The Goonies (1985) and Labyrinth (1986) featured children and teenagers going on great adventures; typically, these characters were also in some sort of trouble. The importance of family was usually implied in these films, as many of the quests involved a resolution of the family. In terms of diversity, 1980s cinema featured very little. Few movies showcased nonwhite families or households headed by lesbian, gay, bisexual, or transgender (LGBT) parents, or by single parents.

Celebrating the Nuclear Family
The nuclear family model of the 1950s is often associated with the relative peace and prosperity of that time. Although ideals would shift in the decades following the 1950s, the 1980s were characterized by a return to conservative beliefs, both politically and in relation to the family. Divorce rates were increasing, as was fear for the sanctity of the family unit. President Reagan emphasized conventional family values and worked to support laws that encouraged marital and familial unity.
American cinema of the 1980s echoed the renewed social investment in the family norm, often featuring nuclear families. Parenthood is the story of Gil, a sales executive, husband, and father of three. Gil finds out that his wife is pregnant with their fourth child, and he must decide whether work or family is more important. Gil's sisters
are also dealing with being parents. One sister, Helen, has a high school–aged daughter, Julie, who marries and becomes pregnant. The other sister, Susan, fights with her husband because she wants more children. All three families have their issues and deal with potential breakups, but at the end of the movie, Gil's son arrives, Julie and her husband are happily raising their child, and Susan is pregnant. Parenthood shows the challenges of being part of a family, emphasizing that not every family is perfect, but that sticking together as a family is a worthy goal. National Lampoon's Vacation (1983), Back to the Future (1985), and Honey, I Shrunk the Kids (1989) are other examples of movies that feature families facing a problem or challenge, and show the importance of family cohesion. It could be argued that mothers' increasing presence and fight for equality in the workplace threatened investment in the traditional (which was equated with functional) family norm. In 1980s cinema, the importance of the mother role was emphasized within the family setting, presenting the preference for traditional families in a positive manner, instead of focusing on how the absence of the mother figure could cause trouble. These movies would often depict families who did not look like the traditional unit, but whose stories revolved around attempting to get as close to it as possible. Look Who's Talking tells the story of Mollie, who becomes pregnant after having an affair with Albert, a married man. Albert eventually leaves his wife, but not for Mollie, and refuses to be a father to their child. Mollie is so upset that she goes into labor and is rushed to the hospital by taxi driver James. James and Mollie develop an arrangement in which James babysits her son Mikey. Over time, Mikey and James bond, and after several horrible dates and a run-in with a seemingly remorseful Albert, Mollie realizes that James is the husband and father whom she has been looking for.
Mollie’s goal throughout the entire movie is to provide a father for Mikey, exemplifying the importance placed on two-parent household configurations. The happy ending does not happen until Mollie finds both love and a father for her baby. Adoptive parenting is also showcased, but it is often implied that two parents are ideal. In Baby Boom, J. C. Wiatt is a businesswoman who has no time for love or leisure. She is named the guardian of a baby girl, whose deceased mother is Wiatt’s

cousin. Motherhood proves to be very stressful, and after losing her job, Wiatt moves into a broken-down house in the country. Despite feeling frustrated by her housing situation and the stress of parenting, Wiatt finds success in developing a baby food recipe. She later meets a man and finds a way to balance work, love, and motherhood. Here again, a mother and father figure are needed for a family to be complete, and the film also hints at the implications of being a "career woman." Even in Annie (1982), it is implied that Daddy Warbucks is developing a relationship with his secretary Grace, which will in turn provide the best home for Little Orphan Annie.

Family Crisis: Fathers at Fault
If families in film were not depicted as functional, traditional, and happy, they were shown in crisis. Cinema often mirrored the anxiety that Americans felt about increasing contemporary pressures and stress, accompanied by the growing economy and focus on work. Again, the women's civil rights movement had not only gained momentum, but had been widely accepted, and few were willing to oppose it. Consequently, women's role in the breakdown of the family was not really addressed. A movie like Kramer vs. Kramer (1979), the story of a wife and mother who leaves her family to "find herself," would not have been popular in the mid- to late 1980s. Instead, during this time period, responsibility for the failure of the family shifted from the mother to the father. The father as source of familial breakdown was typically presented in a few ways. In some films, the complete absence of any father figure was the underlying issue in family problems. In Rain Man, Charlie is estranged from his father and discovers after his father's death that he has a brother, who has been named beneficiary of their father's millions. Charlie travels cross-country with his brother Raymond to settle the matter of the inheritance.
Along the way, Charlie remembers that Raymond was separated from the family when Charlie was a child. Their father, believing that Raymond had accidentally burned Charlie with scalding bathwater, placed him in a mental institution. In this film, the absent father not only cuts Charlie out of his inheritance, but is also entirely responsible for breaking up the family. E.T. the Extra-Terrestrial (1982) and The Karate Kid (1984) are other examples of movies in which

families seem to be in some distress due to an absent father. Other movies would include the father in the narrative, but through his actions, he would somehow fail or betray his family. Fatal Attraction is the story of Dan, who has a weekend fling with female colleague Alex while his wife Beth and daughter are out of town. Dan attempts to break things off with Alex, but she refuses to let go and begins to harass Dan and his family. She repeatedly calls his office and house, kills his daughter's pet rabbit by boiling it on his stove, and even kidnaps the daughter. Eventually, Alex attempts to kill Dan and Beth, but Beth shoots her before she can do so. Here, Dan's extramarital affair invites the obsessed woman into his family members' lives, endangering their safety. In The Shining, Jack, his wife Wendy, and son Danny move into a hotel for the winter so that he may work on his writing. It is soon obvious that the hotel is haunted, and Jack becomes possessed and tries to kill his wife and child. Wendy and Danny escape, but only after Jack becomes lost in a hedge maze and freezes to death. Here, the father both betrays and leaves his family. Other movies featuring father figures who betray or disappoint their family members include Star Wars: The Empire Strikes Back (1980) and On Golden Pond (1981). Although the failed father archetype saturated 1980s cinema, a few movies portrayed fathers as caretakers. If featured, they were often inept, albeit loving, in their roles. A good example of this type of scenario is the movie Three Men and a Baby. Jack, Peter, and Michael are New York City bachelors. The men find out that Jack has fathered a baby girl when the mother drops her off at their apartment. The entire movie revolves around the three men trying to figure out how to care for the baby while getting caught up with drug dealers.
Eventually, it appears that the three bachelors have accepted their role as the baby’s caregivers, but the mother arrives to take the child back. Ultimately, the mother decides that she does not want to parent alone and she moves in with Jack, Peter, and Michael so that they can all raise the baby together. Even here, one can see the exemplification of the nuclear family unit, with a mother, father, and child(ren) because the men can be seen as both adults and children based on their behavior throughout the movie. Mr. Mom (1983) is another movie in which the father takes the main parenting role. Here, Jack

loses his job and becomes a stay-at-home dad while his wife begins a new job. Again, the father is not very competent in his new role, and at the end of the movie, the parents switch back to their traditional roles of the male breadwinner and the housewife. Fathers often got a bad rap in films of the 1980s, and any chance of a happy ending typically meant the resolution of a complete nuclear-type family.

Children in Trouble
Many films of the 1980s were directed toward younger audiences. Commonly, children and teenagers found themselves going on grand adventures, typically attempting to solve a problem. In The Goonies, neighborhood friends are facing the foreclosure of their families' homes. They find a treasure map, and in an attempt to have one last escapade, the children follow the map's directions in search of riches. They find the treasure in a cave, and outsmart fugitives to escape, fleeing with a handful of jewels. It turns out that the value of the jewels is enough to save their homes and community. The importance of family may not be as explicitly demonstrated as it is in some movies previously discussed, but the message is there: the entire community serves as a proxy for the family unit, and its troubles are resolved by the end of the film. Ferris Bueller's Day Off (1986), Flight of the Navigator (1986), and Beetlejuice (1988) are other films featuring children involved in crazy adventures who ultimately learn the importance of family. In the film Labyrinth, Sarah hates her little brother, whom her seemingly aloof parents make her babysit. She wishes that he would be taken by the Goblin King, and when he is, she enters the magical Labyrinth to rescue him and bring him home. Again, it is implied that family is important. Sarah realizes that she loves her brother, and the friends that she makes on her journey, in a way, become her chosen family (a concept that develops into a more common occurrence in the 1990s).
Portrayed Families Lacked Diversity
One characteristic lacking in American cinema of the 1980s, and one that would not start to appear until the next decade, is diversity. It was not uncommon to see ethnically diverse families on television (e.g., The Jeffersons and The Cosby Show), but typically, only white middle-class

families were featured in the movies. Aside from adoptive parenting situations, nonnuclear family configurations were rare, and same-sex parents and single parents were not frequently found in 1980s cinema. In summary, beliefs consistent with 1950s Americana were welcomed during this return to convention and focus on the family, and these values were reflected in films. Families in 1980s cinema either looked like the nuclear standard or consisted of members trying their hardest to achieve family unification, whether that meant single mothers seeking husbands or fathers who were to blame for the dissolution. Children ultimately learned the importance of family after finding adventure and overcoming obstacles. American families might have actually been more diverse than what was shown on screen, but the media often reflects the norms and values of its society, including the conservatism and concern for the sanctity of the family found in this decade.

Sarah Mitchell
University of Missouri

See Also: Gender Roles in Mass Media; Nuclear Family; Social History of American Families: 1981 to 2000; Television, 1980s.

Further Readings
Bruzzi, S. Bringing Up Daddy: Fatherhood and Masculinity in Post-War Hollywood. London: British Film Institute, 2005.
Coontz, S. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books, 1992.
Harwood, S. Family Fictions: Representations of the Family in 1980s Hollywood Cinema. New York: St. Martin's Press, 1997.
Prince, S., ed. American Cinema of the 1980s: Themes and Variations. New Brunswick, NJ: Rutgers University Press, 2007.

Film, 1990s

The 1990s were a decade of change in American family values, much of which was reflected in its cinema. During this time of relative affluence, Americans revisited how they

conceptualized the family. While the cautionary tale of the dysfunctional family was not completely abandoned, people were beginning to consider the possibility that not all families had to look the same. An unprecedented number of women were in the workforce, and fathers were expected to play a bigger role in family life than in previous decades. Divorce and stepparent relationships were commonplace, as were multicultural and lesbian, gay, bisexual, and transgender (LGBT) families. Films of the 1990s reflected these new paradigms. Lighthearted movies featuring young heroes who triumphed despite their incompetent and/or absent parents were widespread. Most involved a happy resolution of a family that did not necessarily have to look like the nuclear family of previous generations (although it often did). Movies of this type, such as Home Alone (1990), Matilda (1996), The Parent Trap (1998), and Angels in the Outfield (1994), made huge profits. There were also serious movies that featured families gone wrong; Sling Blade (1996) and American Beauty (1999) are examples of what can happen when parents do not parent. While nuclear families were still very common in film, the acceptance of nontraditional families in society coincided with movies like Mrs. Doubtfire (1993). Fathers could be just as caring and involved in their children's lives as mothers. Movies in this decade also featured families made up of nonwhite ethnicities (Aladdin, 1992), stepparents (Stepmom, 1998), and even same-sex couples (The Birdcage, 1996).

Happy Families
The concept of the nuclear family as a mother, father, and children was the hallmark of the 1950s. Stephanie Coontz explained that even though many families in this time period did not look like this, the media portrayed the perfect family in this way, and frequently still does. Films made in the 1990s address this issue in different ways.
Some explore what can go wrong in traditional families or present upsets to the conventional family form. Many of these movies were family films, and a majority of them had children or teens as main characters battling some obstacle, usually inept adults who were often their parents. In Home Alone, 8-year-old Kevin is left at home when the rest of his family rushes to make a flight for their Christmas vacation. Eventually, his mother realizes that they have left him behind, but she is not able to

get a return flight right away. This leaves Kevin to deal with two burglars, who also eventually realize that he is home alone, therefore making his house the perfect target. Kevin ultimately outsmarts the burglars in time to wake up Christmas morning to the return of his family. Kevin initially wished for his family to disappear, but by the end of the movie, he realizes that he needs and loves them. Home Alone is a great example of an optimistic child-centric film that gives the message that having a family is ideal, even if its members are sometimes annoying. In some of these family movies, children had bad parents, or only one or no parents, but by the end of the movie, there is resolution of the family, sometimes involving an adoption. Matilda presents a young character who has to overcome the adults in her life. Matilda hates her parents; they are uneducated, mean, and tease her incessantly for being a bookworm. They enroll her in a school under the supervision of the evil headmistress, Miss Trunchbull. Matilda cleverly and telekinetically defeats a series of bumbling adults, including her parents, agents spying on her parents, and finally Miss Trunchbull. She befriends a nurturing teacher at the school, Miss Honey, and at the end of the movie she is adopted by Miss Honey with her parents' permission. Here, a heroic child discovers the love of a family, even when her biological family turns out to be bad for her. Some movies featured children attempting to put their families back together. Twins Annie and Hallie, separated at birth, spend the entire movie trying to do so in The Parent Trap. The girls meet at summer camp, and instead of going home, they switch places in order to spend time with the parent they have never met. Eventually, their parents not only realize that they have the wrong daughter, but also that they cannot live without being the complete family that they once were. The Parent Trap is a remake of the 1961 movie of the same name.
Not much has changed in this rendition besides the wardrobe, and it is interesting to see that the desire to achieve the nuclear family is still being exemplified. Similarly, Angels in the Outfield is the story of a boy who, hoping to leave foster care and be with his father, wishes for the California Angels to win the pennant. The Angels win thanks to help from actual angels, but instead of being reunited with his father, the boy is adopted by the caring manager of the team. In this case, the end result is not the

reconstitution of the original nuclear family, but ultimately everything works out via adoption.

Unhappy Families
When the subject of dysfunctional families was explored in movies made for more mature audiences, happy endings were no longer commonplace. Child abuse and neglect cause trouble for 12-year-old Frank in Sling Blade. Frank befriends the mentally disabled Karl, who was recently released from the psychiatric hospital where he had been committed for killing his mother and her lover when he was 12. Karl sees that Frank is being abused by his mother's boyfriend Doyle. In order to protect his "new family," Karl kills Doyle and turns himself in to the authorities. This movie lacks the happy resolution of family-friendly movies, and in this case shows what can go wrong when individuals come from broken homes. American Beauty tells the story of a family on the brink of dissolution. Lester is a man who hates his job and his wife, Carolyn. Their daughter, Jane, hates them both. All three are dissatisfied with their lives and with each other. Lester reverts to acting like a teenager, smoking marijuana, and working out to impress Jane's 16-year-old friend. Frank, the neighbor, thinks Lester is gay (after seeing him in what appear to be compromising situations with Frank's own son), and kisses him, revealing his own closeted orientation. Lester refuses him, and after deciding not to seduce Jane's friend, he looks at a family photo, longing and hopeful for the reunification of his family. At that moment, he is shot and killed by Frank. These two movies provide examples of how destructive a family that does not function properly can be. Most of the characters did not experience happy endings, but viewers got the sense that something went wrong somewhere for these initially intact families.

During the 1990s, many films portrayed characters in less stereotypical ways than before.
Previously, when individuals in family films were not part of a nuclear unit, usually the story was about a single mother, some other mother figure, or an absent father. This decade saw an increase in stories about single fatherhood and invested fathers in general. Mrs. Doubtfire is the story of playful father Daniel, whose breadwinner wife, Miranda, asks for a divorce. Daniel has just thrown a rowdy birthday party for his son, and Miranda decides that she can no longer take his immaturity. A judge grants Daniel weekly supervised visits, but Daniel needs to see his three children every day. Desperate, he asks for his brother’s help

in turning him into an elderly British nanny, Mrs. Doubtfire, so that he can see his children more often. In the 1990s, divorce was common and increasingly accepted. Mothers were typically given custody of children by default, but courts began to consider fathers' rights and obligations in childcare. Mrs. Doubtfire reflects this changing climate by bringing an invested father's concerns to the forefront. Ultimately, Daniel is found out by his family, and even though he and Miranda do not get back together, she and the legal system realize just how dedicated he is to his children, and he is allowed to spend time with them whenever he wants. Other popular movies like Sleepless in Seattle (1993) and Big Daddy (1999) also feature men in caretaker roles. These films were representative of a time when fathers and father figures were contributing to the family's emotional needs in growing numbers. In addition to recognizing that men were very capable of being loving, devoted parents, Americans were beginning to explore the idea of the nontraditional family. Realizing that not all families are white and middle-class, Disney started to put out movies featuring more diverse ethnicities, like Aladdin, Pocahontas (1995), and Mulan (1998). Several live-action films of the 1990s presented different family configurations. In Stepmom, Jackie and Luke are divorced and sharing custody of their two children. Luke asks his live-in girlfriend Isabel to marry him, which greatly upsets his ex-wife and his daughter Anna. Isabel and Jackie butt heads, and when Jackie learns that she is dying of cancer, the two women reveal what has been bothering them. Isabel is afraid that she will not be able to live up to Jackie's example as a mother, and Jackie is worried that she will be forgotten. They begin to resolve their issues and accept the future together, taking a Christmas photo of the whole family that includes them, along with Luke and the children.
In this time period, families consisting of same-sex parents became more visible. The Birdcage features Val and Barbara, a newly engaged couple who want their parents to meet. Val’s father, Armand, and his partner, Albert, meet Barbara’s conservative parents Kevin and Louise. To Val and Barbara’s surprise, the families ultimately get along and both attend the young couple’s wedding. Additionally, the belief that individuals could choose their kin became more widespread during this time period. When Lucy saves a man’s life in While You Were Sleeping (1995), she grows to

love and become part of his family when the family mistakenly assumes that the two are engaged. By the 1990s, many Americans realized that the Leave It to Beaver nuclear family of the 1950s had never accurately depicted reality. Thus, presenting realistic family dysfunction, which had become more accepted in the films of the 1970s and 1980s, was generally a starting point for a film's conflict.

Sarah Mitchell
University of Missouri

See Also: Gender Roles in Mass Media; Nuclear Family; Responsible Fatherhood; Social History of American Families: 1981 to 2000.

Further Readings
Bruzzi, S. Bringing Up Daddy: Fatherhood and Masculinity in Post-War Hollywood. London: British Film Institute, 2005.
Coontz, S. The Way We Never Were: American Families and the Nostalgia Trap. New York: Basic Books, 1992.
Holmlund, C., ed. American Cinema of the 1990s: Themes and Variations. New Brunswick, NJ: Rutgers University Press, 2008.

Film, 2000s

The decade of the 2000s was marked in the United States by the tragedies of the 9/11 terrorist attacks, two long wars, the destruction of Hurricane Katrina, a series of mass shootings, and an economic meltdown. These disheartening events inspired Americans to look inward, examining themselves, finding their identities, and coming to terms with reality. Movies about the family often had these subjects at heart. Family-friendly films like Lilo & Stitch (2002) feature characters who establish their identities within the family and learn that it is okay to be different. While movies made with children in mind often contained happy endings, films for more mature audiences reflected an authenticity in the characters' interactions with one another and in the way that the movies ended. Families were shown to have conflict, but instead of being labeled dysfunctional, they were presented as typical. The dynamics of parent–child and sibling

relationships were heavily portrayed in movies such as Running with Scissors (2006) and Rachel Getting Married (2008). There was also a focus on divorce and remarriage (The Squid and the Whale, 2005) and unplanned pregnancy (Juno, 2007) as common, albeit emotionally charged, occurrences. Additionally, movies like Brokeback Mountain (2005) and Transamerica (2005) increasingly exposed audiences to lesbian, gay, bisexual, and transgender (LGBT) families and the issues that they faced.

New Idea of Family
In family-friendly films of previous decades, especially the 1990s, it was common for the main characters to search for a resolution to family issues. They either needed to find new parents or get their parents back together. Family films of the 2000s moved away from the idea of an incomplete family seeking wholeness. Families existed in varying configurations, and there was an emphasis on the ability to choose one's kin and accept family members for who they were. In Lilo & Stitch, a highly aggressive alien crash-lands on the Hawaiian island of Kaua'i. He is adopted by an eccentric little girl, Lilo, and her older sister, who believe that he is a dog. After trying to escape and causing trouble, Stitch realizes that family is more important than being destructive and decides to remain with his new family. Lilo and her sister Nani are orphans, but the movie does not focus on this aspect of their family. The family configuration is merely a backdrop to the larger issue of choosing one's family and embracing differences. Stitch, Lilo, and Nani choose to be a family together, and it is okay that Stitch remains slightly mischievous and Lilo continues to be a little strange. The Hawaiian concept of ohana, or the idea that families are made up of more than biological relatives, is essential to the story, providing an example of how different cultures conceptualize the definition of family.
Several other family-friendly movies of the decade, like Shrek (2001), Finding Nemo (2003), The Incredibles (2004), Ratatouille (2007), and Kung Fu Panda (2008), demonstrate that it is okay to be an individual and that family members must accept and appreciate each other for their differences. This decade saw an increase in films that delved deeply into strained family relationships, portraying them as everyday occurrences rather than as detrimental to the characters' development. Most, if not all, family members were complex, highly developed characters (even children), each having a specific and

unique relationship with others. Little Miss Sunshine (2006), for instance, showcases the complicated interpersonal dynamics among family members, from the quirky 7-year-old daughter to the drug-abusing grandfather. However, even though multiple familial interactions were illustrated in films of this decade, some were more central to a particular movie's plot than others. Parent–child relationships were among the most commonly featured, often depicting unstable parents who made their children's lives difficult. In Running with Scissors, Augusten Burroughs is the son of an alcoholic father and a mentally unstable mother, who leaves him with her unconventional psychiatrist. All of the bizarre adult parent figures make it impossible for the young Augusten to experience a secure, nurturing upbringing. Similarly, There Will Be Blood (2007) showcases a strained relationship between Daniel Plainview and his adopted son, H. W. Daniel neglects H. W. throughout the movie, even going so far as to reveal that he is an orphan while making fun of H. W.'s deafness (for which he is partially to blame). Running with Scissors and There Will Be Blood also exemplify a widespread trend in movies of the 2000s: that of the unhappy mother and the stressed single father. Monster's Ball (2001), The Hours (2002), and Precious (2009) all contain mother figures who are unsatisfied, frustrated in their parental roles, and ineffective, whereas Signs (2002), Me and You and Everyone We Know (2005), and Dan in Real Life (2007) feature fathers struggling to parent alone.

Sibling Relationships
Sibling relationships are another common dynamic explored in the decade's cinema. Rachel Getting Married is the story of Kym, who has been temporarily released from rehab to attend her sister Rachel's wedding. Rachel and Kym's sibling rivalry comes to the forefront as each is irritated and jealous of attention paid to the other.
Kym does not like that Rachel has chosen a friend to be her maid of honor, and Rachel resents others’, especially her father’s, preoccupation with Kym’s addiction. Rachel and Kym eventually make up, bonding over their mother’s lack of interest in both of them. Several other films such as The Darjeeling Limited (2007) and Lars and the Real Girl (2007) depict sibling conflict as revolving around parental involvement or lack thereof. For the most part, films that focused on familial discord did not portray those interactions as damaging beyond
repair, but as characteristic of the typical American family, and as something most family members could get past, or even embrace. In a similar vein, divorce and remarriage were topics explored in depth from the vantage point of parents and children. Parents often fought to bring children to "their side" during a divorce, especially when one entered a new relationship with the possibility of remarriage. In The Squid and the Whale, Bernard and Joan have just told their sons, Walt and Frank, that they are separating. Tensions rise as each parent starts seeing other people, in addition to lashing out to hurt the other emotionally. Twelve-year-old Frank sides with his mother, but he is not handling the conflict well and begins to have behavioral problems at school. Walt, who is 16, identifies with his father, but also acts out by plagiarizing a Pink Floyd song for his school's talent show. The Royal Tenenbaums (2001) is another movie that explores the effects of divorce and remarriage on family dynamics. Patriarch Royal has been long separated from his wife Etheline, and has had little contact with his three adult children. It is not until Royal finds out that his wife is engaged to her accountant that he fakes cancer to win them all back. All the relationships are strained in this family, as is characteristic of movies of this time, but it is really the threat of divorce that initiates most of the conflict. In both of these films, the family members' bad behavior is a consequence of adjusting to the marital dissolution, but by the end, most have been able to cope with it. After speaking with a counselor, Walt begins to rationally analyze his feelings, and he realizes that his mother is not the "bad guy." Royal stops interfering so that his ex-wife and children can be happy. Divorce and remarriage are situations that many families had to deal with in the 2000s, and many movies of this decade addressed the confusion and resilience that accompanied that change.
Unique to the decade were several movies featuring unplanned pregnancy. Juno is the story of a 16-year-old who becomes pregnant after a one-time encounter with her high school friend Paulie. Juno considers whether abortion, adoption, or keeping the baby would be the best decision, finally opting to give the baby up for adoption. It is difficult for Juno to manage her relationships with her family, friends, and Mark and Vanessa, the couple she chooses to raise her baby. She gives the impression that she is strong and mature, but she breaks down after witnessing
Mark and Vanessa end their marriage. Other movies tackle the issue of unplanned pregnancy in a similar manner. Neither of the main characters in Waitress (2007) and Knocked Up (2007) plans to get pregnant; in fact, both explicitly talk about not wanting to be pregnant. Although all three women decide not to abort their pregnancies, they undergo honest and difficult discussions concerning what they should do and how their current life situations affect that decision. The themes of introspection and realistic portrayal present in these types of movies reflected the social climate of the 2000s.

Lesbian, Gay, Bisexual, and Transgender (LGBT) Issues
The 2000s were also marked by states weighing in on the legality of same-sex marriage. The debate became heated and brought LGBT issues to the forefront. Consequently, American audiences were increasingly exposed to portrayals of LGBT family members on the silver screen. Movies like Brokeback Mountain, Milk (2008), Monster (2003), and even the comedy I Now Pronounce You Chuck and Larry (2007) focused on the hardships and adversity faced by many LGBT individuals and drew attention to their ever-present existence. Transamerica tells the story of Bree, a transsexual woman who is awaiting her final sexual reassignment surgery. She learns that she fathered a son, Toby, 17 years earlier, and her therapist will not allow her to go through with the surgery until she meets him and faces her past. Bree travels to New York to see Toby, but continuously lies about her paternity on the trip back to Los Angeles, up until Toby reveals that he is in love with her. Eventually, she has her surgery and the two reunite. The movie is mainly about Bree's search for her identity in the context of her family. Similarly, other movies that featured LGBT characters also focused on the self, and not specifically on a character's sexual orientation.
A Single Man (2009) tells the story of George, who plans to commit suicide following the death of his partner, Jim. Again, this movie is more about George's self-reflection and newly single identity than it is about his orientation. A Single Man and most movies depicting LGBT couples and family members reflected the social climate of the time, emphasizing self-reflection and acceptance of differences. While the 1990s saw a change in the definition of family, the 2000s exemplified the theme of
self-discovery in these new definitions. The nation became reflective with each blow to its societal infrastructure. Natural disasters, the terrorist attacks of 9/11, and economic uncertainty inspired Americans to think about themselves in a wider context. Movies of this decade imparted the message that differences were not only normal but desired. Finding one's identity in the family was an important endeavor. The more child-centered films were typically optimistic about outcomes, whereas movies geared toward older audiences focused on presenting truth. Family dysfunction was normal and sometimes healthy, as especially exemplified in parent-child and sibling relationships. Divorce, remarriage, and even unplanned pregnancies were shown as common occurrences that were not necessarily good or bad, but that affected family members in ways that needed to be examined and considered. Differing family configurations, such as multiethnic families or those including LGBT individuals, were also part of the American fabric. There was less need to "adjust" to these families than to simply learn about and accept them. Although much of the 2000s was filled with misfortune, the changing political climate that accompanied the end of the decade signaled a growing optimism. Movies of the 2000s reflected the desire that Americans had to find themselves, look inward, and look forward.

Sarah Mitchell
University of Missouri

See Also: Adolescent Pregnancy; Divorce and Separation; Same-Sex Marriage; Social History of American Families: 2001 to the Present.

Further Readings
Batchelor, B. The 2000s. Westport, CT: Greenwood Press, 2009.
Corrigan, T., ed. American Cinema of the 2000s: Themes and Variations. New Brunswick, NJ: Rutgers University Press, 2012.
Mintz, Steven and Randy W. Roberts. Hollywood's America: Twentieth-Century America Through Film. Malden, MA: Blackwell, 2010.
Prince, Stephen. Firestorm: American Film in the Age of Terrorism. New York: Columbia University Press, 2009.
Ross, Steven J. Movies and American Society. Hoboken, NJ: John Wiley and Sons, 2014.

Film, 2010s

By most definitions, the members of Generation X (or Gen X) hit their 40s and 50s in the 2010s, and one of the prevailing trends of the decade has been the marketing of movies playing on their nostalgia. This had also been a trend in previous decades with the baby boomers, a market of unprecedented size and self-fascination, with movies like American Graffiti, The Big Chill, and The Ice Storm. A handful of movies in the 2010s have directly addressed Gen X nostalgia: Hot Tub Time Machine revolved around time travel to the 1980s; The Runaways cast contemporary teen stars as the famous all-girl punk group that got its start in the 1970s; J. J. Abrams' Super 8 was set in 1979, featuring Gen X kids as the lead characters in a story otherwise reminiscent of a darker, angstier version of one of baby boomer and producer Steven Spielberg's films; and Grown Ups reunited many 1990s male cast members of Saturday Night Live. Even Seth MacFarlane's raunchy 2012 comedy Ted played off the Gen X-friendly concept of "Teddy Ruxpin comes to life."

Though based on a 21st-century Saturday Night Live sketch, 2010's MacGruber is another example of a movie playing to Generation X nostalgia. MacGruber is a parody of the 1980s television action series MacGyver, starring Will Forte in the title role. The movie skewers many conventions of the 1980s action-adventure genre (largely ignored by serious movies drawing on that genre, like The A-Team and The Expendables), and cast 1980s star Val Kilmer (of Top Gun, Real Genius, and Willow) as the villain. The Muppets (2011) brought Jim Henson's beloved creations to the big screen for the first time since 1999's Muppets From Space, and rejuvenated the franchise.
The movie begins with a nostalgic look back at The Muppet Show, and seems to confirm a long-held fan theory about the continuity of the various Muppet productions: that The Muppet Movie (1979), The Muppet Show, and Muppets Tonight all share a continuity, while other Muppet productions are movies made by the Muppets under the terms of the “rich and famous contract” established in The Muppet Movie, and referenced again as a major plot point in The Muppets. This explains why the Muppets keep meeting each other for the first time in various movies. The Muppets was the first Muppet movie to acknowledge the existence of
The Muppet Show, and functions as a sequel to the TV series by revolving around the lead characters' quest to reunite the Muppets for one last episode of The Muppet Show in order to raise the money needed to save the theater from being sold.

A flurry of remakes of and sequels to movies of the late 1970s and 1980s arrived. Ridley Scott returned to the Alien franchise for the first time since 1979 to direct its prequel, Prometheus. Rise of the Planet of the Apes remade the later Planet of the Apes sequels. Tron finally received a sequel, Tron: Legacy, after years of rumors. Predators was a sequel to the 1980s Predator franchise. The television shows 21 Jump Street and The A-Team were turned into big-budget movies. Movies from the 1980s that were remade include Clash of the Titans, A Nightmare on Elm Street, Fright Night, The Thing, and The Karate Kid.

Family Issues
Some of the most acclaimed movies of the 2010s have dealt with family issues. The biographical sports drama The Fighter, directed by David O. Russell, centered on the early career of boxer Micky Ward, who lives in the shadow of his half-brother Dicky Eklund. Russell continued to address family dynamics in Silver Linings Playbook, a screwball romance between two fragile and mentally ill people that examines the effects of their relationship on their families, and American Hustle, in which Christian Bale's con man juggles two relationships while working with the FBI to avoid prison. Lisa Cholodenko's The Kids Are All Right presents a lesbian couple and their family, and the disruption they experience when the teenaged children seek out their biological father. Leaves of Grass, directed by costar Tim Blake Nelson, cast Edward Norton in a dual role as an Oklahoma drug dealer and his twin brother, a Brown philosophy professor who is roped into the drug dealer's scheme to get out of the business for good.
Tom McCarthy’s 2011 film Win Win addresses the theme common to McCarthy’s acclaimed work: the creation of makeshift families among outsiders. In this case, an actual biological family is at the center, and the Flahertys, played by Paul Giamatti and Amy Ryan, take in a teenage wrestler after Giamatti becomes the legal guardian to the boy’s ailing grandfather. Much of the film is driven by the tensions between the things that family
members need from each other and the things they are willing to give. We Need to Talk About Kevin was based on Lionel Shriver's novel about the mother of a school shooter, set some years after he has gone to jail. The novel is written in the form of letters to the mother's dead husband, recounting their arguments over Kevin's growing psychopathy and his possible involvement in their daughter's loss of an eye. The movie addresses these matters less head-on, functioning almost impressionistically.

Terrence Malick's critically acclaimed The Tree of Life (2011) is a long, meditative, thoughtful film even by Malick's standards. It sets a man's childhood memories of small-town Texas in the 1950s against the backdrop of the origins of life, bookended by a telegram in the 1960s bearing news of the death of the man's brother, and a vision of the dead returning to life. The story focuses on family conflicts: the authoritarian Mr. O'Brien, who chose a practical career over a passionate one but has had difficulty making his practical choice pay off; his two sons, of whom he is especially demanding because he wants them to be prepared for a corrupted world; and Mrs. O'Brien, the picture of nurturing and grace. Among other awards, The Tree of Life shared the Gotham Award for Best Feature with Beginners, another movie that delved deeply into childhood, memories, and family. Beginners details two key relationships in the life of Oliver (played by Ewan McGregor): his relationship with his father, who came out as a gay man late in life after the death of Oliver's mother, and his relationship with a French actress whom he meets shortly after his father's death, while still processing his loss. Beginners earned an Academy Award for Christopher Plummer as Oliver's father, Hal. The film addressed the ways in which family and upbringing affect one's romantic relationships, as well as the way that relationships within the family can change over time.
Jeff, Who Lives At Home was written and directed by brothers Jay and Mark Duplass, who had been central to the 2000s film movement that critics called "mumblecore," composed of low-budget, small-scale dramas concerned with emotional realism. While whimsical, Jeff, Who Lives At Home also has a serious message about the meaning of life and family, and even about the meaning of the search for meaning. The family at the heart of the movie are
the Thompkins. Jeff Thompkins is 30 years old, lives with his mother, and spends much of his time smoking pot in the basement and looking for meaning in the world in the wake of his father's death. His older brother Pat is in an unraveling marriage, while his mother Sharon is distracted by messages from a secret admirer. When everything comes together, it is largely in response to Jeff's insistence on finding meaning in chance events.

Bill Kte'pi
Independent Scholar

See Also: Film, 2000s; Generation X; Television, 2000s.

Further Readings
Detweiler, Craig. Into the Dark: Seeing the Sacred in the Top Films of the 21st Century. Grand Rapids, MI: Baker Academic, 2008.
Hoberman, J. Film After Film. New York: Verso, 2012.
Klosterman, Chuck. I Wear the Black Hat: Grappling With Villains (Real and Imagined). New York: Scribner, 2013.
Levina, Marina, ed. Monster Culture in the 21st Century. New York: Bloomsbury Academic, 2013.

Film, Silent

In 1893, Thomas Edison launched the motion picture industry by building the first film studio in the United States in West Orange, New Jersey. The studio was known as the Black Maria. The first films generally presented acts that had long entertained audiences at circuses and on the vaudeville circuit. By 1903, however, the first films appeared that told a story. All films were "silent" until the advent of synchronized sound in 1927, but they were usually accompanied by live music. The onscreen action was sometimes supplemented by written explanations known as intertitles. Silent films varied greatly in length, ranging from a few minutes to full-length features. Because the technology was new and both comedy and tragedy were broadly drawn, families were immediately attracted to the theaters that became known as nickelodeons because of the standard 5-cent admission fee. In 1907, approximately 2 million Americans attended nickelodeons. Within two years, there were 9,000 nickelodeon theaters located throughout the United States.

The oldest silent film that still survives is Vanishing Lady (1896), a film by Frenchman Georges Méliès, who began his entertainment career as a magician. Many silent films have been restored and are available to modern audiences through film festivals, television, streaming services, the Internet, DVD, and Blu-ray. Both film professionals and movie fans have found creative ways to honor the silent film and its impact on American families. In 2009, a silent film won an Academy Award for Best Animated Short Film, and in 2011, a black-and-white silent film won in the Best Picture category.

Mary Pickford was known as "America's sweetheart," "Little Mary," and "The girl with the curls." Pickford was one of the Canadian pioneers in early Hollywood and became the first female co-owner of a film studio in the United States.

Family Audiences
The movies that appealed to families in the early years of the silent film era often starred child actors, particularly those produced by Thanhouser
Company, Biograph, and Broncho. The films of Edwin S. Porter were particularly popular with families because Porter offered an idealized version of American family life. In 1905, Porter released The Little Train Robbery, a remake of his The Great Train Robbery (1903), with a cast made up entirely of children. Impoverished tenement children played a large role in Cohen's Fire Sale (1907), in which Porter had them helping a shopkeeper attempt to rescue a shipment of hats mistakenly picked up by a delivery truck. Other directors also featured child actors. In The Land Beyond the Sunset (1912), for instance, the main character is a young newsboy whose grandmother steals his earnings to buy alcohol. He escapes from his unhappy life with assistance from members of the Fresh Air Fund, a real organization that provided a taste of country life for inner-city children.

As the industry matured, the family, rather than its individual members, became the focus of plots. The most enduring of the child-centered silent films may be Hal Roach's Our Gang series, which began in 1922 as short silent films, made the transition to "talkies," and was regularly shown in television reruns for decades. Familiar tales such as Robin Hood (1921) and Peter Pan (1923) were popular with family audiences. Other story lines focused on family life, history, mythology, and biblical stories. Musicals, vaudeville, and animal acts also continued to have wide appeal with American family audiences.

While some states had child labor laws in the early years of the 20th century, reformers had been unable to pass federal measures, and state laws were not always enforced. According to the 1900 census, 1.75 million Americans between the ages of 10 and 15 were in the workforce. In large cities, employed children had a certain degree of independence, and many regularly visited nickelodeons after school and before reporting for evening jobs.
It was estimated that 20 percent of theater audiences in Detroit, Michigan, and Madison, Wisconsin, were unaccompanied boys between the ages of 11 and 18. Social reformer and settlement house founder Jane Addams described unruly street children who attended nickelodeons in large groups. Middle-class children were more likely to see silent films in the company of older family members. The early silent films invited audience participation, and families liked to sing along with illustrated
songs. Audience members also frequently shouted out, cheered, or booed in response to the action taking place on screen. However, as the silent film industry matured, plot lines became more complex, and audience participation was discouraged. As movie theaters were remodeled, audiences became physically distanced from screens. Children no longer came to nickelodeons unaccompanied, and continuous music gave the audience fewer openings to respond vocally to the action on screen.

Changing Values
Gender roles also underwent a transformation as the silent film industry matured. Early films depicted girls involved in the action, but later, females came to be stereotyped as the moral conscience of the family and community. For example, in In the Border States (1910) and The Little Girl Next Door (1912), young girls played key roles in saving their families from dangerous situations. In the former, a young girl secretly helps a Confederate soldier, who repays her by saving the life of her father, a Union soldier. In the latter, a young girl is killed while accompanying her friend's family on an outing. Her grieving father is dissuaded from ruining the family he blames for her death by the intervention of his daughter's best friend and the ghost of his daughter. At the same time that girls were placed on pedestals, boys continued to be portrayed in more active roles. In The Evidence and the Film (1913), a young messenger boy is falsely accused of stealing money. In Drummer of the 8th (1913), a young boy runs away to join the Union Army but is killed even as his family prepares for his homecoming.

By the 1910s, filmmakers had seen the advantage of using silent films to improve family life by teaching children in the audience to behave properly outside the theater. They began featuring middle-class families whose daily lives focused on their children. Right and wrong were clearly defined for young audience members.
These trends continued into the early days of television.

Popular Family Film Stars
The most beloved star of the silent film era was Charlie Chaplin, the British-born actor whose Little Tramp character is still instantly recognizable for his threadbare dapper appearance, moustache, cane, and sad eyes. Chaplin's films included The Immigrant (1917), The Kid (1921), The Gold Rush
(1925), and The Circus (1928). In 1919, Chaplin joined filmmaker D. W. Griffith and actors Mary Pickford and Douglas Fairbanks in creating the United Artists film studio. Chaplin's image in the United States suffered a blow in the 1950s during the McCarthy era, but he was restored to favor in 1972, when the Academy of Motion Picture Arts and Sciences presented him with an honorary Academy Award; the following year he won an Academy Award for the musical score of Limelight.

Although she was born in Canada, Mary Pickford became known as "America's sweetheart" because of her work in films such as Poor Little Rich Girl (1917), which made over $1 million at the box office, Pollyanna (1920), Little Lord Fauntleroy (1921), and Tess of the Storm Country (1922). At a time when most female stars had little control over their careers, Pickford proved to be an astute businesswoman. She was the first female in U.S. history to become a co-owner of a film studio, and she cofounded the Academy of Motion Picture Arts and Sciences with Douglas Fairbanks, who was known for his masterful swordplay in swashbuckling films.

Having paid his dues in vaudeville, Buster Keaton was well suited to silent films. With his flawless sense of comedic timing and deadpan stare, he was highly popular with family audiences. Keaton's films included Seven Chances (1925), Sherlock, Jr. (1924), The General (1926), and The Cameraman (1928). Of all his films, Steamboat Bill, Jr. (1928) has proved the most enduring. The plot focuses on a young man who returns home to help his father hold onto his ferry business and who predictably falls in love with the daughter of the film's villain.

Examples of Family Films
Many of the films of the silent film era were about poor families living at the height of American industrialization. A large number of the films were about immigrants, and many of the families were Jewish. In East and West (1923), the tale of Morris Brown focuses on a generational conflict between a Jewish immigrant and his highly Americanized daughter.
The full-length silent film Hungry Hearts (1922), based on the short stories of Anzia Yezierska, depicts the Levin family, who live on New York's crowded and impoverished Lower East Side. One of the most respected names in the silent film industry was director D. W. Griffith, who presented a saga of intergenerational conflicts in Romance of a

Jewess (1908), one of his earliest works. Griffith is best known for his sweeping saga Birth of a Nation (1915). The film is simultaneously considered a masterpiece and one of the most racist films ever made. In Old Isaac, the Pawnbroker (1908), the plot features a young girl with an ailing and poverty-stricken mother who is helped by a local shopkeeper. Slapstick comedies were extremely popular with silent film audiences, as evidenced by the domestic comedy series Izzie and Lizzie and the Laurel and Hardy films.

Silent Films in the Twenty-First Century
Americans of the 21st century still have many opportunities to enjoy the art of the silent film. In addition to those films shown on television by Turner Classic Movies, the Independent Film Channel, or Fox Movie Channel, silent films can be streamed from services such as Netflix. Some classic silent films are available on DVD and Blu-ray. Additionally, museums and theaters often sponsor festivals featuring silent films. The Museum of Modern Art (MoMA) in New York, for instance, holds silent film festivals geared toward children 6 and up and their families. Admission is free, and live music is provided by Ben Model, MoMA's resident film accompanist and a silent film historian.

In addition to highlighting works of the silent film era, filmmakers of the 21st century have produced new films that pay homage to the genre. Set in the early 1920s, the black-and-white film The Artist (2011) demonstrates that modern audiences can still be captivated by silent films. Directed and written by Michel Hazanavicius and starring Jean Dujardin as George Valentin and Bérénice Bejo as Peppy Miller, The Artist won five Academy Awards, including those for Best Picture, Best Director, and Best Actor in a Leading Role. The actors depend mainly on facial expressions and gestures to present their story, and the music is integral to the unfolding tale. Intertitle cards are used at significant plot points.
The film Hugo (2011), by award-winning director Martin Scorsese, a dedicated film preservationist, pays tribute to Georges Méliès, the Frenchman who helped to define the genre of silent films and who became known as the "father of special effects." Méliès built the first film studio in France, but World War I played havoc with the industry, and many of his approximately 500 films were intentionally destroyed. In Hugo, Méliès, heartbreakingly
portrayed by Ben Kingsley, has been living in obscurity and operating a toy kiosk at a Parisian train station. He is restored to a position of honor through the efforts of Hugo Cabret (Asa Butterfield), a young orphan. The film shows clips from Méliès' films, including A Trip to the Moon (1902), around which the plot of the movie is built.

The epitome of modern silent family films may arguably be La Maison en Petits Cubes (The House of Small Cubes), the animated film by Japanese director and writer Kunio Kato that won the Academy Award for Best Animated Short Film in 2009. The film depicts an old man whose town is flooding, causing him to continually build his house upward to escape the water. When he drops his favorite pipe, the old man visits the lower levels of his house of cubes, allowing him to relive the life that he had enjoyed with departed family members. The film is a classic example of the beauty involved in telling the story of a family's life and love without words.

Elizabeth Rholetter Purdy
Independent Scholar

See Also: Fair Labor Standards Act; Film, 2000s; Film, 2010s; Middle-Class Families.

Further Readings
Addams, Jane. The Spirit of Youth and the City Streets. New York: Macmillan, 1909.
Altman, Rick. Silent Film Sound. New York: Columbia University Press, 2004.
Bachman, Gregg and Thomas J. Slater. Silent Film: Discovering Marginalized Voices. Carbondale: Southern Illinois University Press, 2002.
Butler, Ivan. Silent Magic: Rediscovering the Silent Film Era. London: Columbus, 1987.
Butsch, Richard. The Making of American Audiences: From Stage to Television, 1750–1990. Cambridge: Cambridge University Press, 2000.
Cohen, Paula Marantz. Silent Film and the Triumph of the American Myth. New York: Oxford University Press, 2001.
Ezra, Elizabeth. Georges Méliès: The Birth of the Auteur. New York: Manchester University Press, 2000.
Musser, Charles. Before the Nickelodeon: Edwin S. Porter and the Edison Manufacturing Company.
Berkeley: University of California Press, 1991.
Sullivan, Sara. "Child Audiences in America's Nickelodeons, 1900–1915: The Keith/Albee
Managers’ Reports.” Historical Journal of Film, Radio, and Television, v.30/2 (June 2010).

First Generation

The term first generation generally refers to a chronology by which immigrants are categorized according to their arrival in or birth within a country. The term has two distinct meanings: a first-generation immigrant can be a foreign-born individual who has immigrated to a new country of residence, or it can be an individual born in the country to which his or her parents immigrated. In the United States, first generation generally applies to foreign-born immigrants. Contemporary political discussions about immigration are directly related to the construction and perpetuation of normative American family structures.

Immigration and Generational Status
The chronological organization of immigrants based on their date of entry to the United States has historically been a means of documentation. Apart from the involuntary transportation of Africans into slavery in the United States, immigration before 1820 was very low. From 1850 to 1930, the foreign-born population increased from 2.2 to 14.2 million, primarily because the reduced cost of transoceanic travel functioned as an incentive for immigrants to move to the United States. After the passage of the Immigration Act of 1924, immigration rates continued to decline between the 1930s and the passage of the Immigration and Nationality Act of 1952, which maintained national origin quotas that restricted certain populations from immigrating to the United States. The Immigration and Nationality Act Amendments of 1965 removed some quotas and once again resulted in a substantial increase of first-generation immigrants to the United States. First-generation immigrants in the 1960s were sometimes met with hostility, depending on their country of origin.
Much of this opposition was due to a commonly held belief that new immigrants posed a substantial threat to the cultural and socioeconomic stability of the United States, despite evidence that first-generation immigrants contributed to (rather than detracted from) its overall
socioeconomic prosperity. First-generation immigrants faced many obstacles while trying to integrate into the fabric of the American national identity. As immigration and ethnic heterogeneity increase, social and political decision makers frequently demand reductions in the government funding of public services upon which many immigrants rely. Furthermore, language barriers and other sociocultural obstacles may jeopardize the foundation upon which first-generation immigrants base their success.

One explanation for this aversion may be "ethnic nepotism," a term originally coined in 1960 by the sociologist Pierre L. van den Berghe to describe the human tendency toward in-group bias, or favoritism for people of the same ethnicity within a racially diverse society. Ethnic nepotism is a type of cultural xenophobia characterized by an unreasonable fear or hatred of the unfamiliar, especially when the objects of the fear or hatred are cultural elements considered alien. This concept is particularly applicable to first-generation immigrants because of their fidelity to the customs, rituals, and social practices of their home country. Long-acculturated individuals may react to these immigrants with ethnocentrist condemnation; ethnocentrism is the practice of judging another culture solely by the values and standards of one's own culture. Thus, unlike second and subsequent generations of family members, who are more likely to be acculturated to U.S. customs, first-generation immigrants are under substantially different and unique pressures.

The barriers to cultural assimilation for first-generation immigrants are many. Cultural assimilation is the process whereby an individual's or group's original cultural identity becomes indistinguishable from and dominated by that of another society or nation. First-generation immigrants face four primary obstacles in this process that subsequent-generation citizens do not.
Socioeconomic status, geographic distribution, second-language acquisition, and intermarriage are all areas in which first-generation immigrants are analyzed by native-born citizens, and to which they often hold themselves accountable.

Intergenerational Relationships, First-Generation Challenges
Intergenerational families composed of grandparents, parents, and children often represent a wide

array of degrees of cultural assimilation. First-generation immigrants tend to show greater variation in educational attainment, types of occupation, and income levels than second and subsequent generations. First-generation immigrants frequently settle in ethnic enclaves, which provide a support network and some of the comforts of home that help make the transition to a new country easier. While first-generation immigrants frequently learn to speak some English, second-generation immigrants are often bilingual, and third-generation immigrants frequently speak only English. High rates of intermarriage, while considered an indication of social integration and operating as an agent of cultural assimilation, are not common among first-generation immigrants, but they tend to be very common in second and subsequent generations.

First-generation immigrants are often the subject of political discussions of illegal immigration. Unlike second and subsequent generational groups, whose citizenship, by virtue of their birth in the United States, is uncontested, first-generation immigrants are subject to legal scrutiny. Common to those discussions are issues of national identity, cultural diversity, immigration and customs enforcement, and economic competitiveness in a global marketplace. An especially important characteristic of first-generation immigrants who are undocumented is that they are young and account for a proportionately higher share of new births in the United States, making their children the fastest-growing segment of American society relative to other social groups. Thus, the intergenerational nature of such families is particularly challenging given contemporary legal doctrines about citizenship, nationality, and immigration.
This complex legal status can alter family dynamics: first-generation breadwinners may be legally unable to fulfill their traditional roles, while second-generation children, by virtue of their greater employment opportunities, may take on these responsibilities, upending the familial power structure. Because the Fourteenth Amendment to the U.S. Constitution guarantees citizenship to anyone born on U.S. soil, and such legal status determines the resources available to a person under current public policies, first-generation immigrants and
their families are faced with a complex picture that is continually evolving.

Michael Johnson, Jr.
Washington State University

See Also: Acculturation; “Anchor Babies”; Arranged Marriage; Assimilation; Central and South American Immigrant Families; Chinese Immigrant Families; German Immigrant Families; Immigrant Children; Immigration Policy; Indian (Asian) Immigrant Families; Intergenerational Transmission; Mexican Immigrant Families; Middle East Immigrant Families; Multigenerational Households; Multilingualism; Polish Immigrant Families; Vietnamese Immigrant Families.

Further Readings
Jimenez, Francisco. The Circuit: Stories From the Life of a Migrant Child. Albuquerque: University of New Mexico Press, 1997.
Klapper, Melissa R. Small Strangers: The Experiences of Immigrant Children in America. Lanham, MD: Ivan R. Dee, 2007.
Sollors, Werner. Multilingual America: Transnationalism, Ethnicity and the Languages of American Literature. New York: New York University Press, 1998.
Suarez-Orozco, Carola and Marcelo M. Suarez-Orozco. Children of Immigration. Cambridge, MA: Harvard University Press, 2002.
Telles, Edward E. and Vilma Ortiz. Generations of Exclusion: Mexican Americans, Assimilation and Race. New York: Russell Sage Foundation, 2009.

Flickr
Flickr is a social networking Web site that encourages users to share and manage their digital photos and videos online. Flickr allows users to set up a free account, whereby they can share personal photographs with a select group of people or with the entire Flickr community. As a social networking site, Flickr has become a popular online outlet for artistic creativity that has brought together people with varied interests from around the world. Caterina Fake was a cofounder of Flickr. From the headquarters of her Vancouver, British
Columbia, online gaming start-up company Ludicorp, Fake recognized the potential for entertainment associated with online photo sharing. She collaborated with her programmer husband to establish the Flickr project in 2004. In 2005, Internet giant Yahoo! purchased the site, causing the number of registered users and uploaded photos on Flickr to increase exponentially. Since then, Flickr's popularity has continued to grow, and by September 2009 its Web site, Flickr.com, ranked 33rd in global Web traffic.

The design and structure of Flickr are shaped by two organizational objectives. The first is to help people make their photos available to other users. The ability to control privacy options allows users to privately and securely share their photos with select family and friends. However, users also have the option to make their photos fully public, allowing anyone with Internet access to view their published content. The second goal is to assist people in organizing their photos and videos. The rise of digital technology has made it simple for people to capture and store a vast number of images on their electronic devices. Recognizing the potential for disorganization to arise from this new form of mass image gathering, Flickr introduced collaborative organization, which allows multiple users to contribute to organizing an individual's shared content. More specifically, users can give their Flickr contacts permission to organize their content, as well as add comments, notes, and tags to individual images.

Photos and videos published on Flickr are visible to anyone; however, registration is required for users to upload content, create a profile, and establish contacts. Catering to diverse contemporary lifestyles and technological preferences, Flickr offers various options for uploading photos from the Web, mobile devices, and home computers.
In addition to using the official Flickr Web site, users can share photos via email, blog posts, RSS feeds, and mobile applications. Flickr allows audiences to explore photograph collections posted by amateur and professional photographers, as well as by groups such as libraries and museums. Finding content is simplified by storing content as metadata, making all data searchable. A World Map link on the site also
allows users to search photos according to the location in which they were taken. Through geotagging, which allows users to add location information to their photos, users can browse the world to see where other people have been and what they saw.

In January 2008, Flickr collaborated with the Library of Congress to launch The Commons, an expansive collection of public photos gathered from an international community of select libraries, museums, and archives. The main objectives of this educational initiative include increasing exposure to cultural heritage photographs and providing a way for the general public to participate in the production of information and knowledge through their engagement with The Commons. Getty Images is a second prominent collection featured on the Flickr Web site. Getty, a recognized leader in the field of stock photography, allows members of the Flickr community to promote their art by facilitating opportunities for them to license it to businesses and other users. Upon formal approval of licensing requests, users can participate in Getty's collection of royalty-free and rights-managed photography.

Flickr also operates a blog that displays the work and experiences of professional and amateur photographers. Flickr's blog readers can also participate in artistic challenges designated by the managing blog editor. For example, a weekly project called “FlickrFriday” encourages readers to creatively capture images that are consistent with a particular theme. Inspired by such themes as “On The Waterfront” or “Polka Dots,” blog followers submit photos for a chance to be featured on the Flickr blog. Made possible by the Internet and prompted by the proliferation of digital photography, Flickr has created a way for individuals and families living in geographically disparate places to connect through photographs.
Through its collaborative community, Flickr supports millions of users' ambitions to organize and share photographically captured life moments.

Stephanie E. Bor
University of Utah

See Also: Blogs; Facebook; Internet; Myspace; YouTube.

Further Readings
“Digital Birth: Welcome to the Online World.” http://www.businesswire.com/news/home/20101006006722/en/Digital-Birth-Online-World (Accessed July 2013).
Graham, J. “Flickr of Idea on a Gaming Project Led to Photo Website.” USA Today (February 27, 2006). http://usatoday30.usatoday.com/tech/products/2006-02-27-flickr_x.htm (Accessed January 2014).
Vaughan, J. “Insights Into the Commons on Flickr.” Libraries and the Academy, v.10/2 (2010).

Focus on the Family
Focus on the Family (FOTF) is a nonprofit Christian ministry founded in 1977 by author and psychologist James Dobson. The organization's stated objective is “to spread the Gospel of Jesus Christ by helping to preserve traditional values and the institution of the family.” Headquartered in Colorado Springs, Colorado, Focus on the Family sponsors events, publishes periodicals and books, and creates radio, video, and Internet resources that promulgate Evangelical Christian doctrine as well as the organization's socially conservative message, with an emphasis on topics related to family life such as marriage, adoption, and parenting.

History
Dobson formed Focus on the Family in 1977, not long after the debut of his weekly radio broadcast of the same name. The Focus on the Family radio broadcast, which in 1980 expanded to a daily format, continues to serve as a flagship venue for the organization's message. In the 1980s, Focus on the Family expanded its activities to include print publishing and video production. In 1984, the organization opened an office in Canada, and has since established offices in Australia, South Africa, Indonesia, Korea, Taiwan, Malaysia, New Zealand, Egypt, Singapore, and Ireland. Dobson, a psychologist and family therapist by training, served as chief executive officer of the organization until his resignation in 2003, at which point he became chairman of the board of directors and Don Hodel became president. Hodel was succeeded by Jim Daly in 2005, and in 2009
Dobson resigned his chairmanship and cut all ties with the organization, citing philosophical differences with Daly.

Positions on Social Issues
Focus on the Family was created as a venue for Dobson's advice and commentary on various social issues. Dobson, a conservative Evangelical Christian, advocated a literal interpretation of many passages from the Bible that address social issues such as parenting, marriage, and sexuality, as well as doctrinal matters such as creationism, the perfect accuracy of the Bible, and the superiority of Christianity over other religions. Throughout Dobson's tenure and after his 2009 departure from the organization, Focus on the Family maintained these positions and published them via its diverse media outlets.

Authoritarian Parenting
In 1970, Dobson authored Dare to Discipline, in which he advocated an authoritarian style of parenting that placed emphasis on limits and discipline tactics. Dobson drew on a literal interpretation of Christian scriptures to recommend that parents respond strongly to signs of defiance from their children. A variety of discipline techniques are recommended, including spanking. Focus on the Family continues to publish material in favor of spanking and similarly harsh parenting strategies, even providing detailed instructions for spanking in order to inflict pain without causing injury.

Opposition to Abortion
Focus on the Family takes the position that embryos and fetuses are human beings and that abortion involves the killing of a person. The organization is opposed to abortion in all cases except when the mother's life is threatened by ongoing pregnancy. The organization supports efforts to make abortion illegal, as well as legislation to restrain legal abortion, such as waiting periods, requirements that medical professionals disclose risks and alternatives to abortion to their patients, and fetal homicide laws.
Heartlink, a subsidiary of Focus on the Family, makes educational resources available to pregnancy resource centers that provide counseling to pregnant women. Heartlink also offers grants to pregnancy resource centers
to fund the purchase of ultrasound machines and training in their use.

Marriage and Gender Roles
Marital quality and stability are among Focus on the Family's core values. The organization has produced dozens of books, videos, articles, and online resources aimed at supporting its concept of traditional marriage, which it defines as a union between one man and one woman in a monogamous, committed relationship. In addition to print and video resources, Focus on the Family maintains a database of marriage and family counselors across the country who meet its standards of Christian belief and who agree with the organization's teachings on marriage. Beyond endorsing marriage as a cultural good, Focus on the Family teaches that all sexual relationships outside of marriage undermine the institution, in addition to being sinful. The organization frames sexuality in spiritual terms, asserting that sexual intimacy creates a spiritual bond that must be reserved for marriage. Additionally, the organization teaches a complementary view of gender roles, asserting that human beings were created by God in the form of two unique genders with distinct characteristics and roles. For example, mothers are considered primarily responsible for nurturing their children, whereas fathers are taught to focus on validating each child's sense of self-worth.

Same-Sex Relationships
Focus on the Family teaches that homosexual relationships are sinful and opposes efforts to legalize same-sex marriages and civil unions. The organization supports counseling for individuals who experience same-sex attraction and teaches that homosexual individuals can develop heterosexual orientation. Celibacy is recommended for individuals who continue to experience same-sex attraction. Additionally, Focus on the Family opposes adoption by homosexual couples as well as by unmarried couples and individuals.
Political Activity
Focus on the Family asserts that Christians have a responsibility to be politically active toward the goal of promoting Christian values in the public sphere. CitizenLink, a political action committee
affiliated with Focus on the Family, engages in lobbying, issue advertising, and advocacy in support of the mission and goals of Focus on the Family. The organization supports conservative political candidates through issue advertising, publications, and online resources.

D. Greg Brooks
University of Missouri

See Also: Abortion; Discipline; Evangelicals; Gay and Lesbian Marriage Laws; Gender Roles.

Further Readings
Buss, Dale. Family Man: The Biography of Dr. James Dobson. Wheaton, IL: Tyndale House, 2005.
Gilgoff, Dan. The Jesus Machine: How James Dobson, Focus on the Family, and Evangelical America Are Winning the Culture War. New York: St. Martin's Press, 2007.
Klemp, Nathaniel J. The Morality of Spin: Virtue and Vice in Political Rhetoric and the Christian Right. Lanham, MD: Rowman & Littlefield, 2012.
Viefhues-Bailey, Ludger H. Between a Man and a Woman: Why Conservatives Oppose Same-Sex Marriage. New York: Columbia University Press, 2010.

Food Shortages and Hunger
Although the United States produces enough food to meet the caloric needs of its citizens, many families suffer food shortages as a result of poverty. Many types of families are affected by food insecurity and hunger, though minority and single-parent households are disproportionately affected. Food shortages and hunger also vary regionally and according to population density. Many public and private programs exist to combat the issue. Private programs include soup kitchens, food banks, and community gardens. Three major public programs addressing food shortage and hunger are the Supplemental Nutrition Assistance Program (SNAP); the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC); and the National School Lunch Program (NSLP). Despite efforts to eradicate
hunger, the problem still plagues many American families today.

Hunger and Food Shortage Defined
A family is food secure when all members have consistent access to food that meets their dietary needs and preferences and allows them to lead active and healthy lives. Food insecurity implies the opposite: individuals do not have consistent access to nutritionally adequate and culturally acceptable foods. The U.S. Department of Agriculture (USDA) measures food security on a four-point scale ranging from “high food security” to “very low food security.” The USDA does not measure hunger per se, because hunger is a subjective, physiological condition, whereas food insecurity is an economic and social condition. The Committee on National Statistics of the National Academies defines hunger as “a potential consequence of food insecurity that, because of prolonged, involuntary lack of food, results in discomfort, illness, weakness, or pain that goes beyond the usual uneasy sensation.” Hunger is therefore a condition included in the USDA measurement of very low food security.

Hunger varies in severity. Most people have experienced hunger to some degree, even if only fleetingly. However, persistent long-term hunger results in serious conditions, including undernourishment or malnutrition. Undernourishment occurs when the body does not receive enough energy through food, especially protein. Undernourishment can be a direct result of food shortage when there are not enough high-energy foods available to the families living in a community. Malnutrition occurs when the quality or variety of foods in the diet is insufficient. Malnutrition can occur even when there is plenty of food if the types of foods available do not supply the body with adequate amounts of required nutrients. There are various indicators of food shortages at the national, regional, or local scale.
Average daily calorie supply per person is the most common indicator of food shortage at the national level. This measure reflects the amount of food produced in the nation. Food shortages due to insufficient food production are not a problem in the United States; the nation produces enough food to feed its citizens roughly 3,500 calories per person per day. Food shortage in America tends to be local, related more to physical and economic access to safe, affordable, and nutritious food than
to issues of agricultural productivity. The presence of “food deserts” can serve as an indicator of local food shortage. Food deserts are areas that lack affordable sources of healthy foods. They tend to be in rural areas, where isolated communities lack access to a sufficiently stocked, reasonably priced grocery store, and in impoverished urban areas without grocery stores within walking distance for people who lack reliable transportation. In these urban areas, the main sources of food are usually overpriced convenience stores that provide unhealthy options or fast food outlets that contribute to obesity and other health problems.

American families have faced food shortages and hunger at many times in the nation's history. They plagued rural families in the mid-1800s, when bad weather and economic pressures drove many small farmers off their land. The loss of small farms ironically increased agricultural productivity by concentrating agricultural land in the hands of those who best knew how to maximize yield. Yet, it also left many rural families with no land, jobs, or pay to purchase food. Later, millions of Americans lost their jobs during the Great Depression; food shortages grew as the Great Plains became the desolate Dust Bowl throughout the 1930s. While foodstuffs were closely rationed during World War II, widespread hunger was avoided by careful central planning of the nation's food supply. After that, the prosperity of the 1950s led most Americans to assume that hunger was no longer a serious problem. That changed with the eye-opening 1968 CBS News documentary Hunger in America, which brought the problem into the public eye once again. Since the 1960s, hunger in the United States has increased, despite public and private efforts to reduce or eradicate it.

Statistics of Hungry Families
Hunger is highly associated with poverty. Not all poor families are hungry, but all hungry people are poor.
Forty-seven million Americans lived at or below the poverty level in 2010, or 15.1 percent of the population. The number of people in households experiencing low or very low food security was similar: 48.8 million people in 17.2 million households, or 16.1 percent of the population. Only 7.4 percent of households with an income-to-poverty ratio of more than 185 percent were food insecure, whereas 33.8 percent of households with a ratio of less than 185 percent were food insecure. The income-to-poverty
ratio is total gross household income divided by the total income needed to support all individuals living in the household. Although people commonly refer to “the poverty line,” the measure of poverty varies depending on household composition.

In 2010, 6.7 million U.S. households (5.4 percent) experienced very low food security. At least one member of these households reported reduced food intake because of lack of food. Adults report indicators of hunger in most very low food secure households, but approximately 976,000 children experienced very low food security; at least one child in each of these households reportedly had reduced food intake and disrupted eating patterns at some time during the year due to food insecurity.

Household food security varies significantly depending on the composition of the household. For example, households with children are much more likely to be food insecure than households without children, and the age of the children is also a factor. In 2009 and 2010, the prevalence of food insecurity was higher for households with children under the age of 6 than for households with children under 18 years old. Single female-headed households with children experienced a food insecurity rate of 35.1 percent, compared to single male-headed households (25.4 percent), while married couples with children experienced the lowest rate of food insecurity (13.8 percent). Children are most at risk of food insecurity in female-headed households; black, non-Hispanic households; Hispanic households; households with incomes below an income-to-poverty ratio of 185 percent; and households in principal cities of metropolitan areas. Food insecurity is also higher than the national average for black households (25.1 percent) and Hispanic households (26.2 percent), whereas it is lower than the national average for white, non-Hispanic households and households headed by non-Hispanics of other or multiple races.
The trends are similar with regard to race and household structure for very low food secure households. Food security also varies regionally. Food insecurity is more prevalent in the South and West and slightly less prevalent in the Midwest and Northeast. Families in the Southeast are at greatest risk. Food insecurity was statistically higher than the national average of 14.7 percent from 2009 to 2011 in Mississippi, Texas, Arkansas, Alabama, Georgia, California, and North Carolina. Families living in principal cities of
a metropolitan area were also more likely to be food insecure, compared to those in nonmetropolitan areas and suburbs. In cities, food insecurity is often due to lack of physical access to food, combined with inadequate economic access. Food insecurity also exists in rural communities, though it may affect a smaller number of people because rural populations are smaller than urban or suburban populations.

Addressing Food Shortage and Hunger
Throughout much of the 20th century, the private and public sectors made many efforts to address food shortage and hunger among American families. Because poverty is the main cause of hunger, many programs to reduce or eliminate hunger also focused on reducing poverty or providing services specifically to low-income households. The most common form of hunger relief in the early 20th century was soup kitchens. These were largely private sector efforts; soup kitchens and bread lines became a primary mode for feeding millions of unemployed and homeless families during the Great Depression. It was not until the 1960s that food banks emerged and people realized that hunger was still a problem. Food banks, or food pantries, often run by religious organizations or nonprofits, provide low-income families with food that they can prepare at home, and they have remained essential in the fight against food insecurity for decades. Over 6 million households received food from a food bank at least once a year from 2006 to 2011.

A more recent development, mostly in the private sector, is the community garden. Community gardens are meant to provide access to affordable, fresh, healthy food in low-income areas, which are often also food deserts. They encourage families to plant their food in a common garden or a family plot within a garden area in order to overcome physical and economic barriers to accessing food. Research is inconclusive regarding the effectiveness of this approach.
The first government effort to address hunger and food shortage came through the Federal Surplus Relief Corporation, which distributed food to charitable organizations to provide poor people with food during the Great Depression. Other New Deal programs focused on alleviating poverty, a primary cause of hunger, by reducing unemployment and raising
wages. Most Americans believed these government efforts in the 1930s had led to the end of hunger in the United States, but public outrage over the prevalence of hunger reported at the time put pressure on the government to address the issue, and states began issuing what are known today as food stamps.

In 2014, there were three federal programs providing food assistance to families. Food stamps, or the Supplemental Nutrition Assistance Program (SNAP), was the most prominent anti-hunger program, serving over 40 million Americans annually. The program is intended to assist households with the cost of purchasing nutritionally adequate food for their families. Over 75 percent of households using SNAP include children, and nearly one-third include elderly or disabled individuals. SNAP is only available to households in which the income-to-poverty ratio is equal to or less than 130 percent (with 130 percent of the poverty line at about $23,800 annually for a three-person family in 2010), assets are limited, and net income is less than or equal to the poverty line.

The Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) is similar to SNAP but targets low-income pregnant, postpartum, and breastfeeding women, and infants and children up to age 5 who are at nutrition risk. Over 8 million people benefit from WIC each month: roughly half of them children, another 2 million infants, and approximately 2 million women. This program cost the federal government $7.2 billion in fiscal year 2010.

Another government program instrumental in addressing hunger and food shortage is the National School Lunch Program (NSLP). The NSLP was established in 1946 to address undernourishment of children. The program expanded in the late 20th century to include breakfast, and offers free or reduced-price meals to children from low-income families. Similar to SNAP, children from households with an income-to-poverty ratio equal to or less than 130 percent are eligible to receive free meals.
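The income-to-poverty ratio and the 130 percent cutoff described above amount to simple arithmetic. The sketch below is a hypothetical illustration only, not any agency's actual formula: the function names are invented, the $18,310 figure is the approximate 2010 federal poverty guideline for a three-person family, and real eligibility determinations also involve asset and net-income tests that are omitted here.

```python
# Hypothetical sketch of the income-to-poverty arithmetic described in the
# text; not an official eligibility calculator.

def income_to_poverty_ratio(gross_income, poverty_line):
    """Total gross household income divided by the poverty measure for that household."""
    return gross_income / poverty_line

def meets_130_percent_test(gross_income, poverty_line):
    """True when the household's income is at or below 130 percent of poverty."""
    return income_to_poverty_ratio(gross_income, poverty_line) <= 1.30

# Approximate 2010 guideline for a three-person family; 130 percent of it
# is about $23,800, the figure cited in the text for SNAP eligibility.
poverty_line_3 = 18310
print(round(poverty_line_3 * 1.30))                   # about 23803
print(meets_130_percent_test(20000, poverty_line_3))  # True
print(meets_130_percent_test(30000, poverty_line_3))  # False
```

Because the poverty measure varies with household size and composition, the same gross income can pass the test for a large household and fail it for a small one.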
Families with an income-to-poverty ratio of 130 percent to 185 percent are eligible for meals at a reduced cost of $0.40. This program reaches over 30 million children every school year and costs the federal government over $9 billion. Over half of very low food secure households reported using at least one of the three major federal food and nutrition assistance programs,
the majority using SNAP (41.5 percent). These programs, along with private sector efforts, have become the primary means for addressing hunger in the United States.

Kelly Monaghan
M. E. Swisher
University of Florida

See Also: Dual Income Couples/Dual Earner Families; Ethnic Food; Food Stamps; Frozen Food; Head Start; McDonalds; National Center for Children in Poverty; New Deal; Obesity; Poverty and Poor Families; Pure Food and Drug Act 1906; Single-Parent Families; Supermarkets; TANF; Welfare.

Further Readings
Brown, J. Larry and Deborah Allen. “Hunger in America.” Annual Review of Public Health, v.9 (1988).
Coleman-Jensen, Alisha, Mark Nord, Margaret Andrews, and Steven Carlson. “Household Food Security in the United States in 2010.” U.S. Department of Agriculture, Economic Research Service. http://www.ers.usda.gov/media/121076/err125_2_.pdf (Accessed January 2014).
U.S. Department of Agriculture, Economic Research Service. “Food Security in the U.S.” http://www.ers.usda.gov/topics/food-nutrition-assistance/food-security-in-the-us.aspx#.Ufb2YhZ560s (Accessed July 2013).
U.S. Department of Agriculture, Economic Research Service. “Food Access Research Atlas.” http://www.ers.usda.gov/data-products/food-access-research-atlas.aspx#.Ufb8kxZ560s (Accessed July 2013).
U.S. Department of Agriculture, Food and Nutrition Service. “Supplemental Nutrition Assistance Program: A Short History of SNAP.” http://www.fns.usda.gov/snap/rules/Legislation/about.htm (Accessed July 2013).

Food Stamps
The food stamp program is one of the most important elements of the American social welfare system. It is the only public assistance program that is available to all family types; other programs are
targeted at specific groups, such as children. The current food stamp program provides low-income families with monthly grants on a debit card that can only be used to purchase food items. Families are automatically eligible if they are receiving Temporary Assistance for Needy Families (TANF). Other families are eligible if their monthly income is less than 130 percent of the poverty line and the total value of their assets is less than $2,000. The program is mainly administered and financed by the Department of Agriculture, but it is operated in each state through local welfare offices. The main purpose of the program is to prevent hunger and food insecurity in low-income families.

Development of the Food Stamp Program
The food stamp program has its roots in the federal food assistance program that was created in 1939 during the Great Depression. The program was designed to encourage people to purchase surplus food stocks that were depressing farm prices, and at its peak, 4 million people received this assistance. Nevertheless, it was terminated because of World War II. After the end of the war, a long campaign began to make food stamps a permanent part of the welfare system. In 1961, President John F. Kennedy ordered the U.S. Secretary of Agriculture to create a pilot food stamp program in the Appalachian region, acting on a promise made in the Democratic Party platform. Within two years, several dozen pilot programs had been initiated throughout the country. Subsequently, President Lyndon B. Johnson expanded these programs as part of his War on Poverty, and finally signed the Food Stamp Act in August 1964, making the program permanent. According to sociologist Theda Skocpol, American politicians naively hoped that the program would help overcome racial divisions by eliminating poverty once and for all. In subsequent years, the program was dramatically extended and funding was expanded.
Additionally, national eligibility standards were defined, and methods were established to allow recipients to receive coupons without having to make any cash payments. These reforms increased the size of the program: in 1970, 4 million people received food stamps, and by 1980, the number had increased to 19.4 million. As a consequence, poverty among American families was significantly reduced. However, this expansion came to an end
when President Ronald Reagan instituted major cutbacks in the 1980s. The first law that initiated deep cuts in means-tested welfare programs was the Omnibus Budget Reconciliation Act (OBRA) of 1981, which cut funding for the food stamp program by 14.3 percent. As a consequence, 1 million persons lost their eligibility. In 1996, President Bill Clinton signed the Personal Responsibility and Work Opportunity Reconciliation Act. With this reform, the Aid to Families with Dependent Children (AFDC) program was replaced by the TANF program. Benefit term limits, tighter sanctions, and work requirements were the hallmarks of the legislation. The act had major implications for the food stamp program in that benefits were reduced and the eligibility criteria changed. Most notably, families were not allowed to deduct
more than 50 percent of their rent or housing costs from their income to determine the amount of the food stamps they could receive. The overall food stamp benefit level was reduced by 3 percent and many legal immigrants were removed from the rolls. Even though the 1996 reforms instituted many changes, the food stamp program retained much of its pre-welfare reform structure. As a consequence, although food stamp enrollment had increased sharply in the early 1990s, it declined 35 percent between 1994 and 1999. In the 2000s, food stamp caseload rose by a third. In 2008, the food stamp program was renamed the Supplemental Nutrition Assistance Program (SNAP). In response to the economic crisis, the American Recovery and Reinvestment Act of 2009 increased monthly SNAP benefits by 13.6 percent

A neighborhood grocery store in New Orleans has a facade handpainted with the words “We Cash Checks, We Accept Food Stamp.” The store is fenced off, presumably abandoned since the devastation caused by Hurricane Katrina in 2005. In 2008, the farm bill renamed the existing Food Stamp program the Supplemental Nutrition Assistance Program.

(beginning in April 2009). In 2010, Congress passed a law stating that the benefit increase would end in April 2014; however, this temporary boost to SNAP ultimately expired even earlier. Since November 1, 2013, American families have faced significant benefit cuts (ranging from $11 per month for a single-person household to $36 for a four-person household). Researchers argue that these cuts will increase hardship and food insecurity. The program is intended to ensure that no family goes hungry, but participation rates have fluctuated over time. Many researchers believe that there is a growing gap between the need for food stamps and their use. Many families are not aware of their eligibility for SNAP, and misinformation and confusion are serious problems. Some estimate that nearly three-quarters of food-insecure households are not enrolled in the program. In 2013, roughly 33 million Americans, including 13 million children, suffered from hunger or food insecurity. SNAP remains popular and enjoys greater public support than other social welfare programs. Gary Tschoepe and John Hindera have identified three reasons for this. First, Republicans and Democrats agree on the program’s importance. Second, taxpayers prefer in-kind programs with restrictions that prevent recipients from spending public money on items not considered essential. Third, field research has shown that the program is successful in reducing malnutrition and associated health problems.

Relevance for American Families

SNAP is a central element of the American welfare system. Over recent decades, major changes to benefit levels and eligibility criteria have been instituted; however, the basic structure and function of the program remain unchanged.
Bruce Jansson argued that “The Food Stamps Program was a landmark achievement because it gave millions of impoverished families the resources to purchase food in quantities not possible with meager welfare checks.”

Michaela Schulze
University of Siegen

See Also: ADC/AFDC; Family Consumption; Poverty and Poor Families; Poverty Line; TANF; War on Poverty; Welfare; Welfare Reform.

Further Readings
Burnham, Linda. “Welfare Reform, Family Hardship, and Women of Color.” ANNALS of the American Academy of Political and Social Science, v.577/1 (2001).
Eisinger, Peter. “Food Assistance Policy (United States).” In Encyclopedia of Social Welfare History in North America, John M. Herrick and Paul H. Stuart, eds. Thousand Oaks, CA: Sage, 2005.
Hoynes, Hilary W. and Diane W. Schanzenbach. “Work Incentives and the Food Stamp Program.” Journal of Public Economics, v.96/1–2 (2012).
Jansson, Bruce S. The Reluctant Welfare State: Engaging History to Advance Social Work Practice in Contemporary Society, 7th ed. Belmont, CA: Brooks/Cole, 2012.
Jolliffe, Dean, et al. “Food Stamp Benefits and Child Poverty.” American Journal of Agricultural Economics, v.87/3 (2005).
Purtell, Kelly M., Elizabeth T. Gershoff, and Lawrence J. Aber. “Low Income Families’ Utilization of the Federal ‘Safety Net’: Individual and State-Level Predictors of TANF and Food Stamp Receipt.” Children and Youth Services Review, v.34/4 (2012).
Skocpol, Theda. Social Policy in the United States: Future Possibilities in Historical Perspective. Princeton, NJ: Princeton University Press, 1995.
Trenkamp, Brad and Michael Wiseman. “The Food Stamp Program and Supplemental Security Income.” Social Security Bulletin, v.67/4 (2007).
Tschoepe, Gary J. and John J. Hindera. “Explaining State AFDC and Food Stamp Caseloads: Has Welfare Reform Discouraged Food Stamp Participation?” Social Science Journal, v.38/3 (2001).

Foster Care

There is a strong connection between the development of the current U.S. foster care system and the numerous social and political changes that have occurred throughout the 20th century. In 2014, nearly 400,000 children were involved in the foster care system in the United States, a substantial decrease from 500,000 at the beginning of 2000. Despite this decrease and the overall improvements in services offered to families, the system’s price tag is huge for a relatively small population. The federal
government spends nearly $5 billion per year reimbursing states for their foster care costs. The child welfare system includes a range of programs and services designed to improve the well-being of children and the adults who care for them. Foster care is one part of the overall child welfare system in the United States and pertains to out-of-home services for children who have been removed from the care of their parents or primary guardian because of abuse, neglect, or maltreatment. Specific reasons for removal include homelessness, parental substance abuse, domestic violence, and neglect. As a result of research and policy initiatives, there have been many improvements to the foster care system; however, some issues have proven difficult to address. Children are entering the foster care system with more severe emotional and behavioral disorders than in the past. In addition, the outcomes for youth who remain in foster care decline the longer they are in it. Research has shown that as many as half of the youth in foster care may experience problems finishing high school, become involved in the juvenile or criminal justice system, or face homelessness as they enter adulthood. The meaning of “family” in the foster care system is inclusive, extending to key players such as the children, their parents or primary guardians, foster caregivers, and caseworkers. Foster families are often embedded in a much larger network that includes various legal representatives, guardian ad litem volunteers, mental health or social service providers, medical personnel, education staff, and others, depending on the needs of the children and other family members. Also included may be policymakers and government workers who make and administer decisions affecting children and families in the foster care system.
Types of Foster Care Placements

Foster care placements vary but typically include traditional foster family care, kinship or relative care, treatment care, medical or behavioral care, residential or group home care, respite care, emergency care, and other long-term placements (e.g., foster-to-adopt, guardianship care, or another planned permanent living arrangement [APPLA]). Within foster care, the child is viewed as a ward of the state government, and foster caregivers are often viewed as substitute parents who must be willing to work within the roles and responsibilities
granted them by federal and state policies. Other informal placements are possible. For example, parents or guardians may arrange for help from a relative or friend without formal involvement with the child welfare system. However, little is known about the prevalence and influence of these types of informal foster care placements because they exist outside of governmental tracking.

Historical Context: 1300 to 1800

Foster care has a long history, beginning as early as the 14th century in England with the doctrine of parens patriae, meaning “parent of the state.” Originally, kings were considered the legal guardians of their people, and in exchange for this responsibility, the king received the people’s labor and profits off the land on which they lived. Although the king profited most often from this arrangement, social changes and increasing numbers of those in need led to legal challenges of the king’s guardianship. This led to the king bearing more responsibility for his most vulnerable subjects (i.e., those unable to care for themselves). The English Poor Law Act of 1601 was created in response to the increase in family poverty resulting from shifting social circumstances. According to this law, children could be separated from their poor families and apprenticed to wealthier families (often as indentured servants) until they reached the age of adulthood, as determined by their master. As English colonization of the New World began, many children were indentured and brought to America. As the Poor Laws evolved in the colonies, intervention in cases of extreme maltreatment or neglect occurred, and masters were held responsible. When parents or masters failed to supply the basic needs of children or apprentices, teach them a trade, or provide moral or religious education, they were punished or had their ward removed.
Changes Between 1800 and 1935

With the influx of immigration to the United States in the 19th century, the apprenticeship system broke down further as the children’s rights movement gained momentum and labor laws prohibited children from working in factories. Although it became more acceptable for abused or neglected children to be removed from their homes, this seldom happened due to a lack of suitable alternatives. The first
orphanages and almshouses were created to offer alternative placements. State laws creating a legal precedent for the state’s responsibility to care for dependent children accompanied the rise of these institutions. However, many of the orphanages and almshouses became overcrowded, and children ended up living on the street. It was only through the support of a small number of programs developed at the state level and by private organizations that some children living in out-of-home placements could be returned to their families. Children who were not reunited either gained employment or were sent west on orphan trains to rural areas to work on farms. Discriminatory practices were common at this time, with placements and services limited or completely withheld from minority children. Despite the attempts of some early African American reformers and advocates, the majority of minority children were cared for by neighbors and friends. This informal caregiving was also common practice among immigrant families. The continued overcrowding of orphanages and almshouses led to two developments. The first was a social shift toward favoring foster homes over orphanages as placements for children who had been removed from their homes; orphanages and almshouses began to disappear around this time. The second was the implementation of child labor laws to protect children from being taken advantage of in the labor force. Simultaneously, the prevailing attitude was that women should be predominantly responsible for the care of children. Despite the intended benefits to child well-being, implementation of this legislation often trumped parents’ rights because of its restrictions on mothers who wanted or needed to work to provide for their families.

1935 to 1974

Throughout the 20th century, there was increasing acceptance of the government’s role in creating policies that protected family life.
The first federal foster care programs were created under Title IV of the Social Security Act of 1935. The initial focus was small: reaching abused and neglected children in underserved and less accessible rural areas. This evolved during the 1950s to include a greater emphasis on industrialized areas. By the end of the 1950s, many efforts to develop and reform foster care policy focused on expanding services to
overpopulated urban areas, where many children were still living on the streets. Advances in medical research and X-ray technology during the 1950s and 1960s, coupled with support from the American Medical Association, resulted in the adoption of state laws requiring mandatory reporting of child abuse in all 50 states. However, early efforts at following up on child abuse reports were modest at best because many communities could not afford to set up a reporting system and had inadequate resources to follow up on reports. Another development around this time was the concept of “drift” in foster care, first coined by researchers Henry Maas and Richard Engler and still used today. Drift described children who were unnecessarily removed from their families and placed into foster care because no other support services were available to help these families, some of whom had very few needs. The 1974 Child Abuse Prevention and Treatment Act (CAPTA) produced a new administrative organization that promoted new research studies and compiled research findings on child abuse and neglect, and allocated funding to states interested in improving their reporting laws and training for child welfare workers. Despite these efforts, there was still a lack of actual services available to children and families identified as at risk or in need. Although there was a strong and growing attitude favoring family preservation, there were no policies in place to promote it. Since that time, CAPTA has been reauthorized and amended multiple times to clarify the definition of child abuse or expand the funds available to states.

Since 1974

Increasingly, protests from citizens, child welfare workers, and politicians, accompanied by research documenting the needs of children and families involved with the foster care system, resulted in upheaval of the child welfare system.
In 1974, approximately 60,000 child abuse reports per year were made under the new reporting laws, and by 1980 this number had increased to nearly a million. The groundwork for permanency planning and support services was laid. Abuse reports continued to grow, reaching 2 million by 1990 and nearly 3 million by the year 2000. Since the turn of the 21st century, these reports have declined, likely because of changes in policies, but numbers still remain high.

Federal policy provided no change or incentive to states to create permanency and family preservation programs until the passage of the 1980 Adoption Assistance and Child Welfare Act, which pioneered services to at-risk children and families. The services were intended to ensure that children lived in a safe environment that promoted their physical, social, and emotional development into adulthood. Although it was envisioned that these services would help children remain with or be reunified with their parents, the policy also ensured that a portion of funding was used by agencies to maintain stable out-of-home placement options and adoption assistance for special needs children (e.g., sibling groups, older children, ethnic minorities, and children with medical or emotional problems). During this time, there were also many developments in foster care policies for minority children and families, who were and are still disproportionately involved in the child welfare system. In 1978, the Indian Child Welfare Act (ICWA) provided federal protection for American Indian families whose children were being removed and placed into non-Indian homes. The ICWA remains in place today and gives native tribes the power to oversee American Indian child welfare cases. The Howard M. Metzenbaum Multiethnic Placement Act of 1994 (MEPA) and the Interethnic Adoption Provisions of 1996 were also targeted toward issues of race and ethnicity. Prior to this time, there was much debate about whether minority children, primarily African Americans, should be placed in interracial foster or adoptive homes. The MEPA was intended to make interracial foster care placement and adoption easier by prohibiting agencies from delaying or denying placements on the basis of race or ethnicity alone and by recruiting more foster and adoptive families of color.
The Interethnic Adoption Provisions amended the MEPA to ensure that race and ethnicity are considered only in rare situations when it is in the best interest of the child. The 1997 Adoption and Safe Families Act (ASFA) was intended to decrease the amount of time that children spend in out-of-home care (i.e., drift) by allowing for concurrent planning, or pursuing multiple permanency goals simultaneously. This legislation was also a response to growing criticism of family preservation efforts, which some researchers believed caused unnecessary harm to child well-being. The 2008 Fostering Connections to
Success and Increasing Adoptions Act was enacted with multiple goals. For example, it created new structures and updated or revised the earlier legislation. Provisions of the act included supporting youth ties to their relatives; providing assistance to older youth through expanded transition services, educational assistance, and health care; increasing adoption incentives; supporting American Indian tribes in efforts to maintain their culture and protect children; and providing better training for child welfare workers. While the act was authorized through fiscal year 2013, little research was done to determine what direct effects it had on families involved in foster care.

Morgan E. Cooley
Florida State University

See Also: Almshouses; “Best Interests of the Child Doctrine”; Child Abuse; Child Advocate; Children’s Aid Society; Foster Families; Orphan Trains; Poverty and Poor Families.

Further Readings
Lindsey, Duncan. The Welfare of Children. New York: Oxford University Press, 1994.
Maas, Henry and Richard Engler. Children in Need of Parents. New York: Columbia University Press, 1959.
Nelson, Barbara. Making an Issue of Child Abuse: Political Agenda Setting for Social Problems. Chicago: University of Chicago Press, 1984.

Foster Families

Foster families provide care for children who have experienced abuse and neglect, and whose living situation has been deemed by the court as dangerous. Foster families arise out of complementary needs and desires of the foster parents and the children living in their care. Foster families are essential to the child welfare system, and provide a temporary or long-term family environment that is safe and nurturing. Over the past century, the foster care system has undergone several changes. Prior to the 1930s, children whose parents could not care for them for a myriad of reasons were placed in orphanages or
unsubsidized foster families, either permanently or until the parents recovered and returned for them. The reluctance of people to care for children who were not available for adoption and not likely to become a permanent member of the family led to subsidized foster care, which attracted more families than unsubsidized care. In the 1960s, the foster care system began focusing on detecting and preventing child abuse, a major reason why children were removed from their families. This new focus resulted in a burgeoning number of children in foster care and led to policies that sought permanent and stable family settings for them. In 2011, the U.S. Department of Health and Human Services estimated that there were over 400,000 children living in foster care and that the system served over 646,000 children that year. The most common type of out-of-home placement is nonrelative foster care (47 percent), followed by relative foster care (27 percent). To become a nonrelative foster family, an adult undergoes the necessary procedures outlined by the local foster care agency to obtain a license to care for children who have been removed from the care of their legal guardian.

Variability in Types of Foster Families

There are different classifications of foster families within the foster care system. Nonrelative foster families involve an adult who is not related to the child but who has met the necessary requirements to provide care for a child. Within nonrelative foster families, there are three different types of placements: traditional foster care, specialized foster care, and treatment foster care. Traditional foster families are often the first placement that a child experiences and offer care for those without documented special needs. In specialized foster care, foster parents undergo specific training to be able to handle the special needs of the child, such as those with certain medical conditions, developmental delays, and/or disabilities.
Treatment foster families provide care for children with emotional or behavioral needs that require specialized services. Relative foster care families are related to the child in some manner. This is also called kinship care, and consists of an aunt, cousin, grandparent, or sibling who agrees to temporarily assume responsibility for the child. In some instances, individuals in relative foster families seek licensure; for others, this may not be necessary. Demographic
characteristics differ between nonrelative and relative foster families. Relative foster parents are on average older, more likely to be over age 65, African American, and in single-adult households with lower annual incomes, and they have been fostering for less time than nonrelative foster families.

Characteristics of Foster Parents and Families

There is little empirical data about the characteristics of foster parents and foster families. Most existing information addresses limited demographic characteristics. The income of foster parents varies greatly, and those with higher incomes are more likely to be approved as foster parents than those with lower incomes. Despite this, there is a supply and demand relationship between foster families and children living in foster care. The demand for foster families in some agencies is high, so a low income does not necessarily disqualify someone from becoming a foster parent. Occupations and levels of education range widely among foster parents, from professionals to unskilled workers, and from those having a college degree to those lacking a high school diploma. Some research suggests that having more education and being employed full time are common traits. The majority of foster parents are married, followed by single-mother households, and the majority of foster families live in single-family homes. In the past, there has been a high percentage of Caucasian foster parents, although recent decades have seen an increase in African American, Hispanic, and Asian foster parents. Despite such growth, there remains a shortage of diverse foster families to meet the needs of the disproportionate number of minority children in care, of whom 27 percent are African American and 21 percent are Hispanic, compared to 47 percent Caucasian. More research is needed on the influence of the race of the foster parent and child.
Characteristics of Children in Foster Families

On average, children who enter foster care are almost 8 years of age, with the largest percentage of children entering foster care at less than 1 year old (16 percent). The average length of time spent in foster care is almost 24 months, ranging from less than a month to more than five years. While in foster care, many children experience
about three different placements, but those with emotional or behavioral problems may experience more. The majority of children who enter foster care also have siblings who enter foster care at the same time or within a year, and estimates suggest that somewhere between 23 and 82 percent of siblings are separated from their sibling(s) at least once while in care. Interestingly, children who live in a home with other children in foster care or with the biological children of the foster parent often report feeling closer to these children than to the sibling from whom they were separated. Children who are placed in relative foster families are more likely to be placed with a sibling than those placed in nonrelative foster families. Children are placed in foster care following confirmed reports of abuse and neglect. Because of this, it is not surprising that rates of mental health diagnoses within foster care samples are three to ten times greater than for children who live in similar socioeconomic contexts and also receive Medicaid. Also, about 50 percent of children in foster care are estimated to have chronic health problems. Children who experience greater maltreatment prior to foster placement are more likely to have a mental health diagnosis while in care. Although prior maltreatment may indicate mental health concerns, conditions inherent to the foster care experience may also contribute to the likelihood of such diagnoses. Several studies report that multiple transitions related to changes in placement and/or schools are linked with more mental health concerns or negative outcomes. Those with fewer placements and changes in schools had lower rates of depression. The combined effects of maltreatment, emotional and behavioral problems, and placement instability are not fully known.
Outcomes for Children in Foster Families

Outcomes for children who enter foster care reflect either reunification or permanency, meaning the child is either reunited with a biological parent or adopted by others. Characteristics of the foster parent can influence children’s permanency outcomes. For example, the likelihood of adoption increases when foster parents are of childbearing age (less than 45 years old), and the likelihood of reunification increases among older foster parents. One potential explanation for why children may be
more likely to be reunified with their parents after being placed with older nonrelative foster parents is that older foster parents may work harder to facilitate visitations between the children and biological family members, welcoming the goal of reunification. Among relative foster families, the age of the foster parent does not affect child reunification or permanency outcomes. Higher family income is associated with higher adoption rates among nonrelative foster families, but not among relative foster families. In nonrelative foster families, greater income may allow for more long-term care, whereas relative foster families may be able to meet the nonmaterial needs of the child for a longer period, regardless of income. Also, findings show that the race of the foster parent is not associated with reunification, nor is foster family status as nonrelative or relative. Comparisons between young adults who lived in foster care and those who did not show differences in long-term outcomes. Those with foster care experience were educationally and economically disadvantaged. They were also more likely to have experienced troubled marriages characterized by conflict. Some suggest that eventual marital distress may be linked to stunted emotional development in foster care and to community under-involvement that translates into less social support. Studies do not show differences in the parenting ability or personal well-being of those who have experienced foster care, although they often report having more children than planned and being less satisfied with parenting. Despite these somewhat negative outcomes, some children improve while living in a stable foster family setting. For instance, rates of internalizing and externalizing behaviors decrease over time among children in foster care. These decreases may be attributed to the services that children have access to and participate in because of their placement in foster care.
In fact, scholars agree that living in a foster family can provide children with the necessary time to recuperate from their experiences prior to their placement and increase their resiliency. Furthermore, youth in foster care report fewer internalizing and externalizing behaviors and greater feelings of closeness to their caregiver when in relative care. The continuity of familial relationships that is maintained in relative foster families and greater familiarity
with the people, rules, and culture the child experienced in their original family may account for these changes.

Building Relationships in Nonrelative Foster Families

Nonrelative foster families have a unique experience in that the foster parents and children must learn how to navigate a shared living experience that resembles family life. Within this context, each party brings their individual characteristics, whether they are strengths or challenges, to bear upon the success of this relationship. Each party also brings their individual family history with its rules, roles, and cultural meanings. Children living in foster care report feeling better about themselves if they also perceive their foster parent as supportive. However, they report having difficulty remembering the rules from one placement to another, as well as observing better treatment of the foster parents’ biological children than of themselves. Foster parents report successful relationships when they integrate the child into their family, work to provide a smooth transition to the new home, and respond in ways similar to how they respond to their biological children. Success is also associated with working to maintain meaningful relationships between the child and his or her biological family, while providing an emotional buffer against possible disappointments and implementing a respectful parenting approach.

Armeda Stevenson Wojciak
Florida State University

See Also: Annie E. Casey Foundation; Child Abuse; Children’s Aid Society; Foster Care; Orphan Trains.

Further Readings
Orme, J. G. and C. Buehler. “Foster Family Characteristics and Behavioral and Emotional Problems of Foster Children: A Narrative Review.” Family Relations, v.50 (2001).
U.S. Department of Health and Human Services: Children’s Bureau. “Statistics and Research.” http://www.acf.hhs.gov/programs/cb/research-data-technology/statistics-research (Accessed January 2014).
Zinn, A.
“Foster Family Characteristics, Kinship, and Permanence.” Social Services Review, v.83 (2009).


Fragile Families

The Fragile Families and Child Wellbeing Study, an ongoing longitudinal data collection effort involving about 5,000 families that began in 1998, was initially developed to assist researchers and policymakers in better understanding the characteristics and capabilities of unmarried mothers and fathers. Unmarried parents have rapidly increased as a demographic group in the United States since the 1970s. Since the study’s inception by a research team at Princeton University and Columbia University (led, most notably, by Sara McLanahan and Irv Garfinkel), the Fragile Families data have yielded an immense body of important and nuanced findings about unmarried parents and their children.

Inception and Purpose

The Fragile Families data were designed in response to the dramatic decoupling of marriage and childbearing in the United States. Though nonmarital childbearing (i.e., having a child outside of marriage) used to be a rare occurrence, with only 4 percent of births in 1940 and 5 percent of births in 1960 to unmarried mothers, rates of nonmarital childbearing have dramatically increased since then. In 2011, the latest year for which data are available, more than 41 percent of all births in the United States were to unmarried mothers (and more than 53 percent of births to women under the age of 30 were to unmarried mothers). Though increases in nonmarital childbearing have occurred across all racial, ethnic, and socioeconomic groups, it is not randomly distributed across the population. Instead, it is more common among minority mothers and mothers with low levels of educational attainment. This unequal distribution means that it is especially important to understand the causes and consequences of nonmarital families.
Prior to the inception of the Fragile Families study, researchers and policymakers knew very little about the causes and consequences of nonmarital childbearing, and this gap in knowledge was a strong motivating factor behind the survey design. The lack of research and understanding was due in part to the data sources commonly used to study family behavior: both the National Longitudinal Survey of Youth (NLSY; begun in 1979) and the National Survey of Families and Households (NSFH; begun in 1987) collected data
before rates of nonmarital childbearing became high, and as such, include relatively few unmarried parents. Additionally, though these existing data sources had success in interviewing unmarried mothers, they contain very little information about unmarried fathers. It is especially challenging to survey unmarried fathers because many of them are disconnected from households. Many do not live with their children, and they have high rates of residential mobility and incarceration. The pilot studies that preceded the Fragile Families survey design, which took place in 1995 and 1996, found that many unmarried fathers were present at the hospital when their children were born, and as such, concluded that interviewing fathers during the “magic moment” of childbirth would minimize nonresponse among fathers. The following four specific research questions guided the development of the Fragile Families study:

1. What are the characteristics and capabilities of unmarried parents?
2. What is the nature of relationships between unmarried parents?
3. What characteristics are associated with union formation and dissolution among unmarried parents who share children together?
4. How do local welfare regimes, child support enforcement, and rules for paternity establishment affect unmarried parents and their children?

These four questions were at the forefront of early research using the Fragile Families data, but researchers have since expanded their analytic frames to answer a variety of additional questions.

Sample and Design
The baseline wave of the Fragile Families study, which includes an oversample of nonmarital births, was collected between February 1998 and September 2000. First, researchers employed stratified random sampling to choose 20 cities in the United States with populations greater than 200,000.
The cities were stratified across welfare generosity, child support enforcement, and the strength of the local labor market in order to maximize variation across explanatory variables and to account for how
local contexts may affect relationships and family behavior. The final 20 cities were Austin, Baltimore, Boston, Chicago, Corpus Christi, Detroit, Indianapolis, Jacksonville, Milwaukee, Nashville, Newark, New York, Norfolk, Oakland, Philadelphia, Pittsburgh, Richmond, San Antonio, San Jose, and Toledo. Hospitals were then sampled within cities, and births were sampled within hospitals. The oversample of unmarried parents yielded a sample that included about 24 percent married parents and 76 percent unmarried parents. Because unmarried parents are not randomly distributed across the population, this sample over-represents minorities, low-income parents, parents without high school diplomas, and nonresidential fathers. When survey weights are applied, the data are representative of all births in U.S. cities with populations of greater than 200,000. During the baseline wave, mothers completed a 30- to 40-minute in-person interview at the hospital after the birth of their child. Fathers were interviewed as soon as possible after the child’s birth. About 77 percent of fathers interviewed at baseline were interviewed in the hospital. The other fathers were interviewed by telephone, usually less than two weeks after the child’s birth. Baseline response rates varied by marital status and gender, but were still relatively high. At baseline, 82 percent of married and 87 percent of unmarried mothers completed the survey, as well as 89 percent of married and 75 percent of unmarried fathers. Mothers and fathers were also interviewed by telephone when the focal child was approximately 1, 3, 5, and 9 years old. These telephone interviews with parents ask questions about, among other things, demographics, romantic relationships (with the focal child’s parent and/or a new partner), attitudes, physical and mental health, economic and employment status, program participation, and neighborhood characteristics. 
Of the 4,898 mothers who participated in the baseline survey, 89 percent, 86 percent, 85 percent, and 72 percent participated in the 1-, 3-, 5-, and 9-year surveys, respectively. Response rates among fathers were, respectively, 69 percent, 67 percent, 65 percent, and 54 percent. In 2013, data collection was underway for the 15-year follow-up survey. In addition, at the 3-, 5-, and 9-year surveys, a subsample of families participated in in-home
surveys that included a parent survey questionnaire and an activity booklet. In the parent survey, the child’s caregiver (the child’s mother in 96 percent of observations) answered questions about family functioning and child well-being. The activity booklet includes anthropometric measures of the mother and child, scores on the Peabody Picture Vocabulary Test (and, in later waves, other cognitive measures), childcare information, and observations about the child’s home environment. Additional information has been collected from children’s child care providers (when children were 5 years old) and teachers (when children were 9 years old). This research has been funded by numerous government agencies and foundations, including the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Science Foundation, the U.S. Department of Health and Human Services, the William T. Grant Foundation, the Robert Wood Johnson Foundation, and the John D. and Catherine T. MacArthur Foundation.

Key Findings
The Fragile Families data have yielded an immense amount of information about diverse topics such as family structure and stability, fatherhood and father involvement, multipartnered fertility (when parents have children by more than one partner), parenting, incarceration, and child well-being. Indeed, as of June 2013, nearly 400 peer-reviewed journal articles had been published using the Fragile Families data. First, the Fragile Families data suggest that unmarried parents are diverse and that such parents are often in romantic relationships when their child is conceived and born. Of the approximately 3,700 unmarried parents in the sample, more than half (51 percent) were cohabiting when the focal child was born. Another 32 percent were in dating relationships, 8 percent reported being friends, and 9 percent reported no contact with one another.
Therefore, contrary to popular belief, nonmarital births do not commonly result from one-night stands or casual sexual encounters. Relatedly, at birth, the vast majority of unmarried parents—including 92 percent of cohabiting mothers and 95 percent of cohabiting fathers—report that there is at least a 50 percent chance they will eventually marry the focal child’s other parent.
But few of these parents end up transitioning into marriage; instead, nearly two-thirds end their relationship within five years after the birth. Many go on to form new relationships with different partners. Researchers have found that the disconnect between expectations and realities about union formation can generally be explained by the high, often unattainable economic and relationship standards that couples hold for marriage. Second, the Fragile Families data suggest that most unmarried fathers are involved in pregnancy and childbirth. For example, 97 percent of cohabiting mothers reported that the father helped out financially during the pregnancy, as did 84 percent of fathers in nonresidential romantic relationships with the mother and 28 percent of fathers in no relationship with the mother. Similar percentages of fathers visited mothers at the hospital during or immediately after the birth. Mothers also reported, with variation by relationship status, that most fathers’ names are on the birth certificate (95 percent of cohabiting, 80 percent of nonresidential romantic, and 52 percent of separated) and that most children will take the father’s last name (93 percent of cohabiting, 74 percent of nonresidential romantic, and 37 percent of separated). Finally, nearly all mothers (99 percent of both cohabiting and nonresidential romantic and 71 percent of separated) report wanting the fathers involved in their children’s lives. Third, the Fragile Families data show that unmarried parents are more disadvantaged than married parents. These differences exist across a variety of demographic characteristics. For example, although only 4 percent of married mothers in the sample had their first child as a teenager, this was true of 18 percent of cohabiting mothers and 34 percent of mothers who were neither married nor cohabiting. Unmarried mothers and fathers are also more likely to have multipartnered fertility.
These parents are also disadvantaged across a variety of socioeconomic characteristics: they have less education, are less likely to be employed, and are more likely to experience material hardship. Unmarried parents are also disadvantaged in their health and well-being. They are more likely to be depressed, more likely to report fair or poor health, and more likely to report drug or alcohol abuse. Importantly, incarceration is much more common among unmarried fathers than married fathers. For
example, although only 7 percent of married fathers had ever experienced incarceration, this was true of 34 percent of cohabiting fathers and 37 percent of fathers in neither marital nor cohabiting relationships with their children’s mothers. Finally, the vast differences between unmarried and married parents mean that adults and children in unmarried families have very different experiences. The Fragile Families data show that unmarried parenthood—and the instability associated with it—is linked to a host of deleterious outcomes for adults and children. Among mothers, family instability is associated with worse mental health, lower social support, less favorable parenting, and more economic hardship. Both unmarried parenthood and family instability are independently and negatively associated with children’s cognitive, behavioral, and health outcomes, and some of these associations are especially strong for boys. Given that unmarried parents are disproportionately disadvantaged, and that unmarried parenthood is associated with a host of deleterious outcomes for children, some researchers have suggested that families can reproduce social inequalities.

Kristin Turney
University of California, Irvine

See Also: Child Support; Coparenting; Multiple Partner Fertility; Parenting; Poverty and Poor Families; Single-Parent Families.

Further Readings
Carlson, Marcia, Sara McLanahan, and Paula England. “Union Formation in Fragile Families.” Demography, v.41/2 (2004).
Edin, Kathryn and Paula England, eds. Unmarried Couples With Children. New York: Russell Sage, 2007.
Gibson-Davis, Christina, Kathryn Edin, and Sara McLanahan. “High Hopes but Even Higher Expectations: The Retreat From Marriage Among Low-Income Couples.” Journal of Marriage and Family, v.67/5 (2005).
Reichman, Nancy E., Julien O. Teitler, Irwin Garfinkel, and Sara S. McLanahan. “Fragile Families: Sample and Design.” Children and Youth Services Review, v.23/4–5 (2001).
Tach, Laura and Kathryn Edin.
“The Compositional and Institutional Sources of Union Dissolution for Married and Unmarried Parents.” Demography, forthcoming.

Freud, Sigmund
Commonly known as the father of psychoanalytic theory, Sigmund Freud pioneered the study of the unconscious mind in regard to human behavior. Born May 6, 1856, in Freiberg, Moravia, Freud moved with his family to Vienna, Austria, in 1860. He began studying medicine in 1873 at the University of Vienna, completing his medical degree in 1881. After studying in Paris with the neurologist Jean-Martin Charcot on a fellowship, Freud began to develop the foundation for what would become psychoanalysis by experimenting with hypnosis techniques, an approach he adopted from his mentor, the Viennese physician Josef Breuer. Beginning in 1885, Freud researched, wrote, and lectured on his theory of psychoanalysis until he fled Vienna for London, England, in 1938 in order to escape the Nazis. He died in London of mouth cancer in 1939. One of Freud’s best-known theories of family relations is that of the Oedipus complex. Freud named this condition for the Greek myth of King Oedipus, who fulfills a prophecy stating that he will kill his father and marry his mother. Freud argued that the Oedipus complex is rooted not only in myths of ancient history, literature, and societal conventions, but is also deeply embedded within family relations. For Freud, the myth of King Oedipus demonstrates the innately human competition between fathers and sons, and between mothers and daughters. The myth also illustrates societal taboos against incest and the guilt that such incestuous feelings may cause. In its most basic configuration, the Oedipus complex describes the unconscious desires that a child feels toward the parent of the opposite sex. Thus, the child sees the parent of the same sex as a rival for the other parent’s affection. The child may have violent thoughts against this rival that he or she either represses or expresses. Freud believed that such a situation was entirely normal and positive.
However, in a negative formulation, the child harbors desires toward the parent of the same sex and develops a rivalry with the parent of the opposite sex. These rivalries may also occur with other mother or father figures, as
well as with other family members. Freud continued to refine this theory throughout his career. Freud’s initial and best-known formulation of the Oedipus complex posits the male child’s sexual desire for his mother and aggression toward his father as a phenomenon that every boy experiences in some form during childhood. The complex significantly informs the way that a child’s relationships with others take shape into adulthood, particularly in terms of sexuality. For Freud, the theory also explains the phenomenon of sexual difference, where an assumption of maleness exists as the primal state. The young girl may understand her lack of male genitalia to be the result of castration by a jealous mother who resents the girl’s incestuous feelings toward the father, while the young boy might fear the act of castration by his jealous father. As the boy comes to negotiate this complex, he represses incestuous feelings toward the mother and begins to identify with the father; the Oedipus complex eventually dissolves so that the boy may detach from parental figures and transfer his libido to another person outside of the family. For the young girl to negotiate the complex, however, she must embrace what Freud calls a “feminine attitude” toward her father and redirect her desire
for a penis toward a desire to have children. Freud believed that the child’s negotiation of the Oedipus complex was the basis for all neuroses in adult life. The controversial psychoanalyst Jacques Lacan later modified Freud’s theory in many ways, introducing the concept of a symbolic phallus and transposing this onto the child’s transition from nature to culture, the entry into the symbolic order. Early in his career, Freud also developed a theory of “family romance.” The concept might be understood as a wish or fantasy that emerges during childhood, whereby the child fantasizes about becoming part of an ideal family, such as a wealthy, upper-class, or royal family. The fantasy of becoming the long-lost child of a wealthy or famous person, for example, offers a young boy or girl a temporary escape from the difficulties experienced with his or her real parents. The family romance is often triggered by the child’s envy and idealization of another child’s mother and father. Freud suggested that the fantasy might enable children to begin the process of separation from their parents, repress incestuous feelings, and facilitate the growth of imagination. According to Freud, a child may experience one of many variations of this family romance throughout youth, usually forgetting these fantasies in adulthood; they may later be recovered through psychoanalysis. Freud’s work has been influential in shifting professional thinking from a concentration on symptoms to the importance of immediate family relationships. His work on family relationships is significant for the later development of family therapy, providing therapists with a foundation for an understanding of family history as fundamental to analysis. Freud’s direct influence on contemporary family therapists can be observed in particular schools of therapy that continue to employ psychoanalytic concepts in sessions.
His indirect influence, however, resonates through virtually every form of family therapy, which generally aims to produce a family member’s greater awareness of thoughts, feelings, and relations through analysis—a process that attempts to make the unconscious available to consciousness.

Chris Vanderwees
Carleton University

See Also: Adolescence; Adolescent and Teen Rebellion; Gender Roles; Incest; Mental Disorders; Nuclear Family; Psychoanalytic Theories.


Further Readings Davies, Hilary A. The Use of Psychoanalytic Concepts in Therapy With Families. London: Karnac Books, 2010. Freud, Sigmund. The Freud Reader. Peter Gay, ed. New York: W. W. Norton, 1989. Freud, Sigmund. Totem and Taboo. A. A. Brill, trans. New York: Vintage Books, 1946. Gay, Peter. Freud: A Life for Our Time. New York: W. W. Norton, 2006. Lacan, Jacques. Écrits: A Selection. Bruce Fink, trans. New York: W. W. Norton, 2002. Slipp, Samuel. The Freudian Mystique: Freud, Women, and Feminism. New York: New York University Press, 1993. Thurschwell, Pamela. Sigmund Freud. London: Routledge, 2000. Thwaites, Tony. Reading Freud: Psychoanalysis as Cultural Theory. London: Sage, 2007. Young, Robert. The Oedipus Complex. London: Icon Books, 2001.

Frontier Families
Scholars use the term frontier to describe a wide variety of situations. It frequently refers to a person, place, or scenario that exists beyond established boundaries. As such, frontier families lived in the western portions of North America, outside the lines of settled communities. The frontier line was fluid, moving as more people pushed against it. For that reason, many states and territories (e.g., Oregon, Colorado, Kansas, Illinois) earned the label of “frontier” during the 19th century. Families who chose to inhabit frontier spaces attempted to make economic progress while civilizing the wilderness. The heyday of this style of family living ended with the closing of the American frontier in the 1890s. Although some men and women were born into this lifestyle, the vast majority of frontier settlers were immigrants, arriving from other locales. Two major waves of migration to the west occurred: between 1840 and 1860, and between 1870 and 1890. Historical records show entire family groups and communities traveling hundreds of miles via wagon or railroad in order to reach their final destinations. These movements proved challenging

and dangerous. Indians, environmental conditions, and illnesses all posed potential threats to a family’s well-being. At the conclusion of an often arduous journey, everyone would go about establishing temporary living quarters. These encampments would then transform into crude homes over the course of weeks. Comfort was generally of minimal concern. Instead, frontier families tended to funnel their available resources toward economic improvement via farming or other tasks. Growing cash crops such as corn or wheat could provide valuable revenue. These crops could also sustain a growing family unit. For many frontier families, life revolved around agricultural cycles. Work began at sunup and lasted until sundown. In critical periods, such as planting or harvesting, all members of a family would work in the fields. The men would split rails or drive livestock. The women would break sod. The children would pull weeds or guard against groundhogs and other pests. Survival was often dependent upon environmental conditions. A drought, a flood, or a plague of locusts could lead to starvation. Government programs, including the Civil War–era Homestead Act, granted plots of land to family groups, but it was up to the settlers to make these tracts productive. Frontier women, in particular, attempted to create comfortable homes in the wilderness. They wanted to replicate patterns of domesticity found in more settled areas. This required a great deal of work. Completing the laundry might take all day. Canning vegetables and fruits was necessary, but time consuming. Cooking presented particular difficulties, as the vast majority of frontier dwellings contained only an open fireplace. Some tasks were simply impossible. For example, sweeping a sod house only kicked up additional dust. However, it is important to note that these domestic tasks could also reap financial rewards.
Frontier women frequently sold the products of their labors (e.g., eggs, canned goods, needlework) on the open market, thereby supplementing the family income.

Strong Women
If farming proved unsuccessful or was not an option, frontier men might set out in search of additional opportunities. This often meant leaving their families behind to work as wage laborers for part of the year. Scores of men in the Midwest, for example,
found part-time employment as loggers and miners. Whereas women tended to stay on the homesteads, their male counterparts possessed more freedom of movement. While their husbands were away, the wives took on additional duties to keep the household properly functioning. This could mean paying local creditors or fighting off hostile natives. These subtle shifts in gender roles partially explain why many western states were among the first to pass community property laws. Such legislation made it possible for married women to engage in limited economic transactions. The increased mobility and minimal community supervision also contributed to the breakup of frontier families. Desertions were quite common; men would simply leave the household and never return. Census records from frontier locations frequently show single women, widows, and abandoned wives all independently running homesteads. Because of labor needs, frontier families tended to be quite large. It was not uncommon for healthy parents to have six or more children live to adulthood. All of these children would contribute to the household in one way or another, and they would typically live with their parents until they grew to maturity. Once a child became more of a resource drain than an asset, it was time to depart and create a separate living. This was especially the case for boys, many of whom were drawn even further west.

Frontier Towns
Frontier towns held a certain allure for family groups. Often starting out as simple trading posts, these gathering spots offered the possibility of community. Instead of camping out, a family might choose to rent one or two rooms from a boarding house. These establishments provided an opportunity for socialization and visiting, thus lessening the feelings of isolation common in frontier living. The establishment of churches and schools further cemented the bonds between family groups.
Throughout the 19th century, frontier towns frequently resembled ethnic enclaves, with persons of a certain cultural origin choosing to live in close proximity to one another. In the northern plains, approximately a third of all settlers came directly from a foreign country. However, frontier families could also suffer because of the presence of certain town vices. In particular, late 19th-century reformers argued that
drinking and prostitution posed unique threats to the family. Wives were told to make their homes even more inviting so that men would not feel the need to spend time in saloons and brothels. The strength of temperance reform, in particular, led to the passage of alcohol laws in many western areas. The goal, once again, was to civilize the frontier. Men and women moved to the frontier for a wide variety of reasons, but once there, they tried to survive and thrive in the midst of a challenging environment. This emphasis on survival bonded some families together, while driving others apart.

Robin C. Sager
University of Evansville

See Also: Primary Documents 1862; Social History of American Families: 1790 to 1850; Social History of American Families: 1851 to 1900; Westward Expansion.

Further Readings
Faragher, John Mack. Women and Men on the Overland Trail. New Haven, CT: Yale University Press, 2001.
Moynihan, Ruth, et al., eds. So Much to Be Done: Women Settlers on the Mining and Ranching Frontier. Lincoln: University of Nebraska Press, 1990.
Myres, Sandra L. Westering Women and the Frontier Experience, 1800–1915. Albuquerque: University of New Mexico Press, 1982.

Frozen Food
During the second half of the 20th century, frozen food became an easy option for busy mothers trying to provide a nutritious meal for their families. As women’s roles changed within American society, so did their approach to providing family dinners. Companies catered to the family’s changing needs by producing frozen foods that could be quickly prepared by a homemaker, a working woman, or other family members who were busy or had limited culinary skills. Consumers loved the convenience and variety of frozen foods, and as their tastes changed toward the end of the century, the industry responded by providing healthier options for the millions of individuals and families who rely on such meals.


History of Frozen Food
In the United States, the entrepreneur and inventor Clarence Birdseye first succeeded in bringing frozen food selections to the American public. His first attempts at a quick-freezing technique around 1925 resulted in products such as seafood, vegetables, and fruits that could be transported long distances and stored under proper conditions for a long time without spoiling. However, frozen foods were slow to catch on in the 1930s during the Great Depression, possibly because few people had reliable freezers. Even with strong companies such as Birdseye, Swanson, and Stokely-Van Camp’s Honor Brand providing more consumer options during the later 1930s, distribution was limited to large urban markets, and items were sold at high prices and considered luxury items. After World War II, frozen foods began to catch on because of the increased number of households with freezers and the rise of the suburban supermarket. During this era, Maxson Food Systems designed the modern frozen dinner for airline meals for companies like Pan Am. These meals were originally called Sky Plates; however, they were complicated to produce, expensive, and not viable for the mass market. Other companies, such as Quaker State Food and Stouffer’s, entered the industry, producing dinners for airlines, train travel, and the mass market. During these years, the market for frozen dinners was concentrated within the travel industry. Finally, Gerry Thomas, a Swanson executive, designed a meal consisting of sliced turkey and two side dishes, served in an aluminum tray. Swanson coined the term TV dinner, trying to link the product to the latest technology sweeping the nation. This new format and marketing angle proved successful; consumers loved the idea of a full meal that they could eat while watching their favorite shows. Homemakers loved the convenience. Prepared meals bolstered the frozen food industry as a whole.
By the mid-1950s, over 20 companies had entered the industry, including the Campbell Soup Company, which developed a line of frozen soups. By the end of the 1950s, frozen foods were outpacing the sales of traditional fresh fruits and vegetables. Consumers were turning to frozen options for many of their meal choices. Major supermarkets like Kroger, Safeway, and A&P were now carrying a much larger selection of frozen foods than in previous decades. These new stores could afford
to provide large freezer sections for their frozen food selections. Frozen foods were no longer seen as a luxury, but as a necessity for a new suburban consumer class.

Changes in Family Dynamics: 1950s and Beyond
The prosperous postwar economy boosted consumer confidence and enabled new suburbanites to fill their new homes with appliances. Refrigerator/freezer units were larger and more reliable than ever before, and homemakers wanted the latest models with the newest features. However, despite the economic boom fostered by increased consumer spending, American families remained entrenched in traditional values. The baby boom was in full swing, and homemakers needed a way to easily prepare many meals for many mouths using the new technology at their disposal. As the 1950s progressed, the volume of frozen foods sold escalated and prices dropped. This allowed more families to purchase frozen options and to save time and money in the household budget. Frozen dinners were easy to make and clean up, and they allowed everyone to spend more time in front of the television. Fathers who worked late could still have a hot meal when they got home, and children who were picky eaters did not have to eat the same meals as mom and dad. By the late 1960s, as more women entered the workforce, frozen foods allowed family members to feed themselves, and family mealtimes became less and less the norm. In essence, the rise of frozen foods coincided with the change in family dynamics. By the 1960s, frozen food companies like Libby were advertising to young children and designing meals especially to suit children’s taste buds, creating a divide between what many children ate and what their parents ate. During the remainder of the 20th century, frozen food continued to be a part of many families’ meals. The final frontier of frozen food for families included a range of organic options.
Companies like Amy’s created organic and vegetarian options for health-conscious consumers. Companies such as Happy Baby created organic frozen baby foods to be sold at the new health-conscious grocery stores such as Whole Foods, Trader Joe’s, and Fresh Market. Today, working mothers and fathers depend on the convenience, healthy
ingredients, and budget-minded options that frozen foods provide.

Michele H. Riley
Saint Joseph’s College of Maine

See Also: Family Values; Gender Roles; Television, 1950s.

Further Readings
Hamilton, Shane. “The Economies and Convenience of Modern Day-Living: Frozen Foods and Mass Marketing.” Business History Review, v.77/1 (2003).
Shapiro, Laura. Something From the Oven: Reinventing Dinner in 1950s America. New York: Penguin, 2004.
Smith, Andrew. Eating History: 30 Turning Points in the Making of American Cuisine. New York: Cambridge University Press, 2009.
Sun, Da-Wen, ed. Handbook of Frozen Food Packaging and Processing. Boca Raton, FL: Taylor & Francis, 2006.
Toussaint-Samat, Maguelonne. A History of Food, 2nd ed. London: Blackwell Publishing, 2009.

Functionalist Theory
Functionalist theory, also known as structural functionalism, commonly focuses on the roles and central tasks that family members should engage in. According to this theory, families with the proper structure were more likely to (1) be healthy, and (2) raise children who would become productive members of society in adulthood. Since the 1970s, criticism that the theory is elitist or myopic has gained ground, and the balance has shifted toward a focus on family functions rather than family membership. Structural functionalist theory has recently been championed by policymakers who advocate a return to “traditional” family lifestyles, and by service providers who see the value of the theoretical principles in environments such as group homes.

Early 20th Century
From the 1920s to the 1960s, structural functionalism was a major theoretical approach in family studies. It was based on the premise that family stability


provided a healthful environment for couples and children. A stable environment allowed parents to model appropriate behavior between adults and teach children about proper values and behaviors. As children grew into adulthood, it was presumed that this stable environment would result in their becoming productive members of society by marrying and having children. Thus, there would be an intergenerational replication of healthy families. Collectively, society would benefit from the inclusion of healthy individuals from these families in other settings, such as schools and workplaces.

During this period, a strong emphasis was placed on family structure. The ideal structure was a heterosexual married couple with biological children. Marriage was considered important because it gave adults a supportive relationship and encouraged fidelity. Women, as wives and mothers, fulfilled their functions by maintaining the family's emotional well-being and taking care of children. Men, as husbands and fathers, fulfilled their functions by being employed, earning a paycheck, and disciplining the children. Thus, spousal roles were seen as complementary, but not overlapping. Married couples were expected to have children; children were a necessary component of family formation, and parents raised them to abide by the rules of society.

According to the theory, the appropriate family structure facilitated the fulfillment of instrumental and relational functions. A well-formed family was similar to a well-formed machine—it worked smoothly and efficiently because it had the right components.

Functionalist theory lost some credibility during the Great Depression, because the breadth and scope of poverty across the nation made it evident that many family problems were not simply a result of family dysfunction, and that proper family membership alone would not be sufficient to solve them.
However, the theory regained some importance following World War II and the rise of the baby boomer generation, those born between 1946 and 1964. Because of the millions of families created during this time, structural functionalism became a way to conceptualize family wellness as traditional values once again took center stage in society.

Late 20th Century to Early 21st Century
From the beginning, functionalist theory was focused on the traditional family, with a husband, wife, and



children. Other family structures, such as single parents, couples without children, or unmarried couples, were considered variations that reflected deficiency, or even deviance. Functionalist theorists believed that these nontraditional families lacked a proper structure and, because of this, were not able to fulfill their proper functions.

Since the 1960s, functionalist theory has been strongly criticized for its inability to accept variations of the traditional family. Thousands of studies have shown that children raised in nontraditional families can become healthy and productive adults. The theory has also been criticized for ignoring social barriers such as racism, or working-class families in which the mother worked outside the home. Given the validity of these criticisms, some functionalist theorists began to place less emphasis on family structure and more emphasis on family functions. This neofunctionalist approach suggested that family membership might change over generations as new structures start to evolve. Thus, the definition of family can be rather fluid. However, the essential functions of families, namely affection, guidance, and fulfillment of daily needs, will likely remain unchanged. Thus, according to neofunctionalists, any family structure that can adequately fulfill these functions can be considered a healthy family.

Conclusion
Among family theorists, structural functionalism never regained the status that it held during the early 20th century. Other theories, such as systems and chaos theory, have gained more academic attention. However, structural functionalism is highly consistent with political rhetoric that focuses on the "return to traditional family values." Politicians' and policymakers' preferences for traditional families can have significant consequences when it comes to passing laws regarding issues such as daycare funding, equal pay, or welfare benefits. In addition, some clinicians and caseworkers have argued that functionalist theory is relevant to collective environments such as group or foster homes. These environments may benefit from the clarity of role divisions and function fulfillment. Such group environments often strive to create a family atmosphere among people who do not have a shared history. Thus, the structural-functionalist theory can serve as one model of how to manage interactions in such settings.

Jacki Fitzpatrick
Texas Tech University

See Also: Baby Boom Generation; Family Values; Marital Division of Labor.

Further Readings
Lansford, J., R. Ceballo, A. Abbey, and A. Stewart. "Does Family Structure Matter? A Comparison of Adoptive, Two-Parent Biological, Single-Mother, Stepfather, and Stepmother Households." Journal of Marriage and Family, v.63 (2001).
Onaga, E., K. McKinney, and J. Pfaff. "Lodge Programs Serving Family Functions for People With Psychiatric Disabilities." Family Relations, v.49 (2000).
Pitts, J. "The Structural-Functional Approach." In Handbook of Marriage and the Family, H. Christensen, ed. Chicago: Rand McNally, 1964.
Scanzoni, J. "From the Normal Family to Alternate Families to the Quest for Diversity With Interdependence." Journal of Family Issues, v.22 (2001).

Funerals

A funeral is a ceremony for remembering and celebrating the life of someone who has recently died. Families and communities enact a range of ceremonial acts and customs to mark the passing of a loved one. These actions are generally referred to as funeral or death rites. Funerals serve as a rite of passage and denote a change, not only in the status of the deceased, but also his or her survivors. As an organized and purposeful group-centered response to death, funerals are an impetus to cope with loss. Often considered the "centerpiece" of death rituals, funerals address the religious, spiritual, and cultural needs of those involved.

Social Functions of Funerals
Historically, funerals have served several social functions. They provide a means of notifying others



of a death, spreading the news from the immediate family to relatives, friends, and eventually acquaintances and the broader community. Beginning with obituaries and memorial cards and proceeding to ceremonies and burials, funerals acknowledge and memorialize a person's death. They also provide a setting for disposing of the body, most commonly via burial or cremation. The funeral ceremony often aids the bereaved by helping them cope with grief. Having a place and time to honor the dead and be with family and friends benefits those dealing with loss. Funerals also serve as a demonstration of economic and social obligations, such that roles taken by participants reflect their social and family positions. How one participates in funeral rituals and conducts oneself during this time of grief and loss can define, or redefine, familial ties and relationships.

Often of special concern is the participation of children. A child's understanding of death and participation in mourning practices such as funerals can impact his or her experience of grief and loss. While it is not uncommon for families in the United States to "protect" children from funerals because of concerns that they may be upsetting or traumatic, research indicates that most children benefit from participation in this important ritual, provided that they are prepared and given support both during and after the event.

With variations based on culture and religious beliefs, funerals allow individuals of all cultures to maintain relations with ancestors, unite family, foster community and group cohesion, reinforce status, and restore social structure.

Common Funeral Practices
Throughout most of American history, deaths have largely occurred in the home in the company of loved ones. However, the American funeral, which once consisted of handmade coffins and wakes in the family home, has undergone significant transformations over time.
The undertaker, initially introduced as a merchant and supplier of funeral items, has evolved into the professional funeral director of today. These funeral directors (also known as undertakers or morticians) have primary responsibility in caring for the deceased. This involves several elements of last rites that surround the funeral ceremony, including body preparation and viewing/visitation prior to the


funeral, and procession and committal following the funeral.

Upon death, the body is prepared for disposition. Whether at home or in a funeral home, this generally involves cleaning and disinfecting the corpse. While different religious and cultural beliefs influence body preparation practices, funerals in the United States often include the process of embalming. Originating in ancient Egypt to preserve the dead for the afterlife, embalming was adopted in the United States during the Civil War to facilitate the transportation of the dead long distances to their homes. Today, embalming is generally done if the body is to be viewed during a wake or funeral, although it is not required. A process of temporarily preserving the body after death, embalming is required by law only in certain cases, such as transportation of the body across state lines, and these legal requirements vary by state. The Federal Trade Commission (FTC) requires mortuaries to obtain permission to embalm in order to charge a fee for this procedure.

Frequently, prior to the funeral ceremony, the family may host a wake or visitation at a funeral home, which may be an open casket viewing in which visitors can see the deceased, or a closed casket reception in which the body is not visible. If the body is to be cremated, it may still be viewed prior to cremation. While embalming is generally not done in the case of cremation, families may opt for the process if there is to be a viewing. The family may choose to rent, rather than purchase, a casket for the viewing. Visitation is often in the afternoon or early evening and generally takes place one to three days before the funeral. Whether or not there is a wake or visitation may depend on the religious and cultural customs of the deceased and his or her family.

The funeral ceremony may be held at the funeral home or in a religious setting such as a church.
Ranging from very simple to quite elaborate, funeral ceremonies offer individuals the opportunity to express grief and share memories of the deceased. It is a time to honor and pay tribute to the deceased and affirm the importance of his or her life with those who shared it. Funerals also serve as a reminder of mortality and as a way to communicate beliefs about life and death. After the funeral, there is often a procession from the site of the funeral to the place of burial or


interment of the ashes. The procession symbolizes the living accompanying the deceased to the land of the dead, and then returning to the land of the living to reestablish their lives without their loved one.

Committal is the act of committing the body of the deceased to its final resting place. The most common methods of disposal are burial, cremation, and entombment. If the body is buried, it is placed in the earth, often within a coffin or casket. Cremation involves burning the body to ashes at a crematorium; the ashes may be stored in an urn, buried, or scattered on land or water. In entombment, the body or ashes are permanently stored in an above-ground tomb or mausoleum. Less common methods of body disposition include donation of the body for scientific study, burial at sea, and disposal by exposure, such as the cultural practice of sky burial. As with all elements of funeral rites, religious, cultural, and legal considerations impact how and where the body is disposed of. Families may choose a direct cremation or immediate burial instead of a standard funeral.

Generally, following the funeral, burial, and committal, family and friends will gather together to provide emotional support and pay tribute to the deceased.

Critiques of the Modern Funeral
Modern funerals are criticized because of their increasingly high costs. Jessica Mitford's bestselling critique of the American funeral industry, The American Way of Death (1963), noted that the cost of dying was rising faster than the cost of living. Based on data from the National Funeral Directors Association (NFDA), the average cost of a funeral rose from $708 in 1960 to $6,560 in 2009. The FTC reports that many funerals run over $10,000. After a home and a car, a funeral is often one of the most expensive purchases a consumer will ever make.
Funeral costs include any and all services and goods provided by the funeral home, such as body preparation (i.e., embalming or refrigeration), facilities (i.e., use of a visitation or viewing room), a casket or urn, and costs for disposition of the body. Additional costs include memorialization costs, such as a headstone or grave marker, and miscellaneous expenses such as death notices, memorial cards, and flowers. A casket is often the single most expensive item in a traditional, full-service funeral.

According to the NFDA, the average cost of a metal casket in 2009 was $2,295; by 2014, however, many sold for over $4,000, and prices can reach as high as $10,000. Cost varies depending on casket material (e.g., mahogany, copper, pine, or metal), interior fabric (e.g., velvet or crepe), and design (e.g., handles or ornate hardware). Costs may also vary between conventional mortuaries and casket discounters. The FTC Funeral Rule requires that customers have access to a list of casket prices and descriptions. Consumers of funeral services are thus able to compare prices and evaluate services available in their community to make informed decisions, while funeral homes can help customers by negotiating reduced prices for services.

Funeral Trends
In the United States, burial is the most common method of body disposal, although cremation is becoming more popular. Research indicates that interest in cremation increases in accordance with the deceased's age, education, and income. Additionally, the number of people selecting cremation for others is increasing faster than the number choosing it for themselves. Furthermore, recent years have seen an increase in direct cremations and immediate burials following death, and more families are opting to hold memorial services in lieu of traditional funerals. Memorial services perform the same overall function as a funeral; however, the body is not present. These services are more likely to take place outside of traditional funeral settings and may take place immediately following a death or many months later.

With the rise of the Internet, grief and mourning have a new outlet via "cybermourning" and "virtual cemeteries." It is not uncommon to receive a link to a death notice and to sign a virtual funeral guest book. The funeral function of death notification now frequently takes place via social media sites such as Facebook and Twitter.
Families also have online access to novel options for memorializing the dead, such as cremation jewelry that contains the deceased’s fingerprints. What used to be referred to simply as burial, laying the body in the earth without embalming or a casket, is now known as a “green burial.” Some traditional cemeteries are now allowing this practice, and other entirely green cemeteries are developing. There is also a resurgence of home funerals




and home burials as part of this green movement. With regard to home burials, in which bodies are interred on private property instead of a cemetery, state regulations vary, and limited data exists on this do-it-yourself trend.

Kelly Melekis
University of Vermont

See Also: Death and Dying; Estate Planning; Rituals; Trusts; Widowhood; Wills.


Further Readings
Hoy, William G. Do Funerals Matter? The Purposes and Practices of Death Rituals in Global Perspective. New York: Routledge, 2013.
Mitford, Jessica. The American Way of Death Revisited. New York: Knopf, 1998.
Roach, Mary. Stiff: The Curious Lives of Human Cadavers. New York: W. W. Norton, 2003.
Slocum, Joshua and Lisa Carlson. Final Rights: Reclaiming the American Way of Death. Hinesburg, VT: Upper Access Publishers, 2011.

G

Games and Play

All mammals play. This suggests that play serves some significant purpose borne out through years of evolutionary process. Play also underlies the human potential for innovation, happiness, and career achievement—the brain and body benefit from play. When one sees children at play or adults at leisure, one often dismisses it as a break from getting "real work" done, but there are myriad benefits to play, and it contributes significantly to cognitive, physical, social, and emotional well-being. At a time when recess in schools is increasingly under attack, it is important to examine the role that play has in daily life. It is no accident that a baby's inborn need to learn manifests itself through exploration and play. Although freeform or imaginative play and more structured games have different benefits, they both help in many areas of life. Because families are the primary site for nurturing play, it is important to understand the role of play more fully.

Child Development
Sigmund Freud was one of the first to signal the critical importance of play in identity and overall child development. Ample research suggests that early experiences—especially in the first three years—are crucial to long-term development in many areas. As babies play, they learn to use symbols, and this

is necessary for both cognitive development and the communication process. Play contributes to the development of both fine and gross motor skills. Playing with small toys or engaging in art activities helps children develop the fine motor skills necessary for later activities ranging from writing to performing surgery. Play can nurture resilience and increase self-confidence. When a tower of blocks falls down and the young child successfully rebuilds it, the child learns the importance of perseverance and gains a feeling of self-worth that will help him or her tackle more complex endeavors.

Creativity has often been linked to problem solving, and one of the biggest benefits of play is that it stimulates critical thinking. Play offers a risk-free environment in which to take risks, challenge oneself, and experiment with alternative solutions to problems; this enhances the ability to successfully adapt to new or novel dilemmas. For example, video games—often discounted as unilaterally violent and of little value—can fuel the development of logic and critical thinking. Gamers are often presented with novel problems to which they must derive a successful solution, sometimes within a split second and requiring the processing of a great amount of contextual detail, and they learn the benefit of trying various solutions if their first choices fail. They learn to set goals, think strategically about obtaining them, and evaluate their strengths and weaknesses. As they


gain immediate feedback on their decisions, they learn to correct their performance.

Play can be a gateway to learning in a variety of areas. The benefits to creativity and problem solving produced by play help children master concepts in a more complex way than rote memorization does. Both imaginative play and participation in extracurricular school activities such as sports have been consistently linked to success at school. Overcoming obstacles and meeting one's goals in an organized sport can support self-esteem and encourage a child to take on additional challenges such as those found in academics. Pretend play encourages vocabulary building, verbal dialogue, and cognitive organization about both real and imagined environments. These skills are important building blocks for literacy. Puzzles, organized sports, and many video games offer benefits to the spatial-reasoning skills that are critical to many aspects of learning. Increasingly, there is evidence to suggest that the familiarity with technology conferred by video-game play can have a direct benefit for future career skills. Presenting material in new and stimulating ways enhances learning in general. As many educators and parents acknowledge, children learn best when there is some element of fun involved.

Communication
From the first smile at a caregiver to a game of patty cake, the infant practices communication with others through play. Play helps children express their emotions, develop and improve social skills, and cultivate a sense of self. It is fundamental for developing early relationships with others. As they learn to interact with their social environment through play, children gain important verbal and nonverbal skills, and they gain understanding of social rules and roles. For instance, a game of peek-a-boo teaches children that they can rely on their caregiver to return, and thus that caregivers can be trusted.
They also learn symbol use through the realization that the pleasant experience has a name and gestures connected to it; and they learn that there are roles for participants. Even early games and exploration teach children important lessons about freedom and boundaries. Hide-and-seek teaches children boundaries about where it is unsafe to hide, and the social boundary created by caregivers when they signal that the play is done. Overall, play

encourages adult caregivers to interact with children, which in turn enhances their relationship and supports the child's developing sense of self within relationships.

Play teaches children how to cooperate with others and to tolerate those who are different from them. Because children are so inherently motivated to play, they are driven to solve complex social dilemmas and to do so in a fashion that adapts to ongoing changes in the environment. If a child wants others to play a game, the child must effectively communicate information about the game, understand the emotions of others enough to incite agreement to play, and then adapt his or her desires for the play to motivate continued participation. Children must work together and follow agreed-on social rules—those that are implied, and those made explicit, such as in organized sports. They have to learn how to lose and win in ways that maintain harmonious relationships to preserve the chance to play again. As they progress into the school years, children have the opportunity to interact with others who have different backgrounds and experiences, and they must acknowledge and adjust to these differences for the play to continue.

Turn-taking and sharing are the most basic and yet most important lessons taught by play. The ability to do both undergirds most aspects of social interaction. In fact, the capacity to delay gratification derived from mastery of turn-taking has been linked to later academic success and is a necessary component of participating in sports or work groups. Video games are increasingly used therapeutically with people with autism to teach group communication skills. Organized sports and many multiplayer video games teach how to communicate and solve problems as a team to achieve particular goals. These pursuits often entail secondary social interaction (e.g., celebrating after a win in sports, or participating in a video-game community forum) that also builds communication skills. A healthy and functional approach to play in children is often a sign of a supportive, nurturing, and structured family environment.

Physical and Mental Health
One of a family's biggest priorities is the physical and mental health of its members. Gaming and play have benefits for both. The physical benefits of
A healthy and functional approach to play in children is often a sign of a supportive, nurturing, and structured family environment. Physical and Mental Health One of a family’s biggest priorities is the physical and mental health of its members. Gaming and play have benefits for both. The physical benefits of



The San José Library Seven Trees Branch offers a weekly program for children called Game Zone. Play can help develop fine motor skills, nurture resilience, and increase self-confidence, while vigorous play and organized sports can reduce stress in both children and adults.

exercise have been well touted and physical health is important for overall well-being. Vigorous play and organized sports can decrease stress and boost endorphins that in turn lift one’s mood. Although sports may first come to mind when considering physical health, new advances in video-game research also suggest that certain interactive games support physical activity. Video games have increasingly been used to help physical rehabilitation patients to regain balance and coordination. One of the most well-documented benefits of such activities is the development of hand-eye coordination, which entails more than the development of fine motor skills—it is the ability to think quickly and react appropriately to environmental stimuli. The U.S. military has pioneered much of this research by using video games to train people in potentially dangerous arenas, such as flying a fighter plane or using a remotely operated weapon. As society becomes more technologically advanced,


it is likely that hand-eye coordination will become an increasingly important skill.

Play also has an array of mental health advantages. Studies show that playing games, exercising, or engaging in organized sports can alleviate anxiety and depression. For example, video games have been successfully used to treat post-traumatic stress disorder and help people overcome phobias. For others, leisure activities provide stress relief and other outcomes that help buffer against mental illness. In many sports or games, one gains self-confidence by working toward and achieving short- and long-term goals. The feeling of camaraderie often present in group game play or sports provides a network of social support that delivers an important protective function for all aspects of well-being.

For children, play therapy can provide catharsis, a sense of control over their world, and a safeguard against mental illness. Adults can talk about their feelings and problems, but children often do not possess the tools or understanding to do this. Play is a child's language. Evidence shows that even brief therapeutic free play may help children be more focused and calm in the classroom. Without therapy, children who have experienced significant trauma engage in less imaginative play and may become "stuck" on play themes without achieving the benefits of the play. This underscores the crucial role that play has in supporting overall mental health and well-being.

Adults at Play
The importance of play for adults is under-recognized. Adults engage in a number of leisure activities for relaxation, including computer games, exercise or sports, crafts, or social clubs. When adults engage with others in a playful way, they receive the same benefits as in childhood. The healthy social support network provided by group play is an important protective factor for warding off loneliness and mental or physical illness.
Increasingly, play therapy techniques typically used with children are adapted for use with adults. Play can enhance family relationships and create a sense of community. Research suggests that seniors who remain physically active retain a higher quality of life, and doing mental activities such as Sudoku puzzles can help retain brain function for Alzheimer’s patients. Hence, it is clear that play and games perform pivotal roles in keeping brains happy and healthy.


The family is also an important support for adult careers, and a training ground for the future career success of children. In the late 1980s, many employers began offering more company leisure activities (e.g., a ping-pong table in the break room, or a company softball league) to enhance company performance. There was increasing recognition that encouraging some "down time" for employees could decrease stress and burnout, enhance productivity, and inspire a sense of community—all with tangible and positive consequences for work performance. Allowing employees to positively interact with each other, take a physical break, and rest their brains increases creativity at work. Many employers encourage "brainstorming," a type of imaginative thinking exercise, to encourage novel solutions to work dilemmas. Some employers have even used video games to train specific job skills, and there is evidence that playing certain simulation games may enhance business skills. In all these ways, retaining a sense of play into adulthood can promote general well-being and improve one's relationships and career.

Conclusion
Play is often the time when one feels most joyful, yet it is taken for granted, and its many important benefits go unrecognized. Play is one of the most important building blocks for communication skills and healthy child development. Adults may spend a great deal of time at work, or may think about work even while resting. Viewing play as a necessary component of health and success can augment the ability to retain the more joyful and restful aspects of play. Families that regularly play together or have family game nights strengthen relationship bonds and create a supportive net that protects against stress. In supporting general health and happiness, play can have lifelong advantages. It is intensely meaningful that one of the first things babies are driven to do is play.
Play is at the core of human identity—it shapes people's lives and affects all that people do. In his book The Disappearance of Childhood (1982), Neil Postman expressed concerns that children's games and imaginative play were becoming an endangered species. Children were becoming less likely to play for the sake of play and instead played games with rules, where winning became the

purpose of play. Frequent participation in structured, adult-supervised activities left little room for children to engage in unstructured, imaginative play, which behavioral scientists emphasize is one of the most important activities for children's cognitive and emotional growth. As George Bernard Shaw said, "We don't stop playing because we grow old; we grow old because we stop playing."

Laura L. Winn
Florida Atlantic University

See Also: Child-Rearing Practices; Leisure Electronics; Parental Controls; Sports; Video Games; Wii; Work and Family.

Further Readings
Association for Play Therapy. http://www.a4pt.org/ps.index.cfm (Accessed November 2013).
Brown, S. and C. Vaughan. Play: How It Shapes the Brain, Opens the Imagination and Invigorates the Soul. New York: Avery, 2009.
National Association for the Education of Young Children. "Excellence in Early Childhood Education." http://www.naeyc.org (Accessed November 2013).
National Institute for Play. http://www.nifplay.org (Accessed November 2013).
Steinberg, S. The Modern Parent's Guide to Kids and Video Games (2013). http://www.ParentsGuideBooks.com (Accessed November 2013).

Gated Communities

Gated communities are residential areas that use barriers and gates to control access to a particular neighborhood. Although the term typically evokes images of protected luxury estates that are the enclaves of the elite in the 21st century, gated or walled communities are neither recent nor limited to the property of the rich and powerful. The Romans built gated communities in England around 300 c.e., and more than 1,000 years ago, residents of Chang'an, the imperial Chinese city of the Tang dynasty (618–906), lived in neighborhoods



that were walled in, with gates secured by guards. Whereas some gated communities have historically served to protect the privileged, others, such as the Jewish ghettoes in Europe, the Japanese internment camps in the United States, and even some subsidized housing projects, were created to segregate and control certain groups. In the United States, gated communities have been growing since the turn of the 21st century. These enclaves range from Tuxedo Park in Orange County, New York, located about 45 minutes from midtown Manhattan, where bluebloods and business titans have found sanctuary since the 1880s, to far more modest communities in some California counties, where as many as 20 percent of gated communities are in average- and lower-income Asian or Hispanic neighborhoods.

Increase in Gated Communities
In the 1970s, there were around 2,000 gated communities in the United States. Most of these were retirement villages or established compounds for the wealthiest Americans. Llewellyn Park in Eagle Ridge, New Jersey, founded in 1853, was the first gated community in the United States. Thomas Edison and members of the Merck and Colgate families had homes there. Pomander Walk, hidden on Manhattan's Upper West Side—where in 2012 a 700-square-foot, two-bedroom apartment sold for about $750,000—was built in 1921, and the Royal Palm Yacht and Country Club in Boca Raton, Florida—where estates range from $800,000 to $20 million—dates from 1959. It was not until the 1980s that the numbers began to climb and residents included significant numbers of the middle and upper-middle classes. By 1998, 16 million Americans lived in gated communities. In the wake of increased security concerns and high crime rates, the number of gated communities grew to 50,000 by the early years of the 21st century. According to the American Housing Survey, conducted by the U.S. Census Bureau, the number of people living in gated communities rose to almost 11 million households in 2009.
Between 2001 and 2009, the United States saw a 53 percent growth in occupied housing units in gated communities. Experts suggest that the actual number is probably considerably higher because the statistic does not include second homes. Some estimates state that one-third of new homes are built in gated communities, and this


number does not include older communities retrofitted with fences and gates. The highest concentrations of gated communities are found in California, Texas, and Florida, but gated communities can be found across the nation. Safety and Nostalgia Proponents argue that gated communities increase security, reduce crime, and enhance tax revenues by raising property values. Communities that provide such features as guards and security cameras offer greater privacy and protection to entertainment and sports celebrities and high-profile business executives who are able and willing to pay for services that isolate them and their families from gawkers and stalkers. Studies indicate that residents of middle-class gated communities find the illusion of security comforting, even if the barriers do little to deter crime. For others, the gated community offers the feeling of a return to a saner, simpler, more homogeneous time in an era in which the threat of violence and chaos seems pervasive. Many property owners see the gated community as a means of protecting the value of their homes, which are generally their largest investment. Evidence suggests that homes in gated communities appreciate at a higher rate than those outside the gates, but the increasing number of renters who reside in gated communities indicates that economic concerns are not the only motivating factor in choosing to live in these communities. Segregation and Social Fragmentation Critics of gated communities insist that they are exclusionary and polarizing. Some see them as disturbingly reminiscent of the neighborhood improvement associations and real estate agents' redlining practices that prevented African Americans and other minorities from entering affluent white neighborhoods in the 1940s and 1950s. These critics insist that the preservation of property, security from crime, and sense of community that proponents claim as benefits of gated communities are mythic.
These critics see fear of racial/ethnic diversity as the primary motivation. They point out that the first wave of increased gated communities in the 1980s occurred in California, Texas, Florida, and Arizona, the same areas that first experienced large groups of Hispanic immigrants.


The demographics of gated communities support the idea that residents are seeking to surround themselves with those like them, to protect themselves from "otherness." Typically, middle- and lower-class Latinos/Latinas (homeowners and renters), middle-class Asian homeowners, and lower-class Asian renters are more likely to live in gated communities than affluent whites. Lower-class residents often feel driven to protect themselves from those who are even poorer. Members of the black middle and lower classes (homeowners and renters) are the least likely to live in gated communities. Wealthy African American homeowners, even in cities with large black middle-class populations such as Atlanta and Washington, D.C., rarely live in gated communities. Theorists suggest that blacks, conscious of their history of exclusion, may be reluctant to adopt the exclusionary symbol of walled communities. In the aftermath of the 2012 shooting of Florida teen Trayvon Martin in a gated community, editorial writers and columnists were quick to note that gates and walls can exacerbate fears and foster an us-versus-them mentality that threatens the health of a democratic society. Wylene Rholetter Auburn University See Also: Retirement; Segregation; Sun City and Retirement Communities; Wealthy Families. Further Readings Blakely, Edward J., and Mary Gail Snyder. Fortress America: Gated Communities in the United States. Washington, DC: Brookings Institution Press, 1997. Blandy, Sarah, and Diane Lister. "Gated Communities: (Ne)Gating Community Development?" Housing Studies, v.20/2 (2005). Morgan, L. Joe. "Gated Communities: Institutionalizing Social Stratification." Geographical Bulletin, v.54/1 (2013).

Gatekeeping Because parenting is the primary means through which adults influence children, the ways in which caregivers/parents interact in the parenting

process affect children. Gatekeeping reflects one way that parents interact, and much of the literature addresses maternal gatekeeping as an element in understanding the nature of coparental interactions. In this entry, the history of gatekeeping is discussed, the current conceptualizations are described, and the ways in which gatekeeping influences families are examined. History of Maternal Gatekeeping The increased attention to fathering in family research, coupled with a growing interest in promoting paternal involvement, gave rise to the exploration of factors that inhibited or enhanced such involvement. Traditionally, mothers were expected to be the primary caregivers in families, and fathers were expected to be the primary breadwinners. With shifts in gender role expectations and fathers increasingly expected to adopt more nurturing/caregiving behaviors, their failure to do so warranted study. The study of maternal gatekeeping emerged out of this changing historical context. The term maternal gatekeeping first appeared in 1999, when Sarah Allen and Alan Hawkins defined it as a restrictive process consisting of beliefs and behaviors preventing collaboration in childrearing. This initial definition framed maternal gatekeeping as a negative process wherein mothers try to restrict fathers' access to and involvement with children. Although some scholars challenged this notion, most of the research on gatekeeping used this definition. Increasingly, scholars see maternal gatekeeping as a two-dimensional phenomenon that includes both facilitation (encouragement) and restriction (discouragement) in parenting. Most recently, scholars identified a third dimension, control, which addresses the balance of power in parenting. Despite the body of literature supporting the restrictive nature of maternal gatekeeping, several scholars today agree that it is a more complex process with more varied execution in families.
Daniel Puhlman and Kay Pasley drew from qualitative studies of maternal gatekeeping and identity theory to suggest a three-dimensional model. Using the dimensions of encouragement, discouragement, and control, they argued that the intersection of these dimensions results in a typology of different forms of maternal gatekeeping that affect father involvement differentially. Their model opens the path for



greater understanding of the complexity and subtlety of this dynamic process. Scholars agree that gatekeeping is not a one-way process in which mothers gatekeep and fathers are gatekept. Feminist scholars suggested that blaming mothers for father absence or lack of involvement was unfair and inaccurate, explaining that mothers often open the "gates," and fathers fail to walk through them. Thus, the belief that gatekeeping is a bidirectional process is identified in much of the literature; however, researchers focus on maternal gatekeeping because mothers maintain a larger caregiving role in many households. Maternal Gatekeeping and Father Involvement Studies show that maternal gatekeeping influences father involvement, but the strength and direction of this influence remains unclear. Early research exploring the relationship between maternal gatekeeping beliefs and father involvement found a modest association. Following the work by Laurie Van Egeren and Dyane Hawkins in 2004, other scholars began exploring the facilitative and restrictive dimensions of maternal gatekeeping and found an association between facilitative mothers and more highly involved fathers, confirming early feminist scholars' suggestion that mothers both open and close "gates." Although this work broadened the conceptualization of maternal gatekeeping, many scholars continue to explicate only the restrictive nature of the phenomenon. Recent studies on maternal gatekeeping have found that (1) maternal encouragement toward fathers is linked with higher levels of paternal involvement; (2) maternal restriction of fathers is linked to less paternal involvement; and (3) traditional beliefs held by mothers about the maternal role and low satisfaction with the parental relationship are associated with decreases in father involvement. At this point, studies have only tested early one- or two-dimensional models. No study to date has explored the three-dimensional model.
These important findings illustrate that maternal gatekeeping (especially restrictive gatekeeping) is linked with paternal involvement. However, findings show only modest associations, and most studies use samples of families with young children. Few studies have included parents of older children or adolescents. Scholars' suggestion that maternal


gatekeeping likely varies by the age of the child warrants additional research. Also, no research is available on the universality of maternal gatekeeping in other cultures because most studies focus on Westernized cultures that adhere to American ideals. Finally, no research has examined the directionality of links between maternal gatekeeping and coparenting processes; it is impossible to say whether gatekeeping affects coparenting or vice versa. Studies of maternal gatekeeping are new, and there is much yet to learn about this phenomenon. There is current discussion among scholars about the defining elements of maternal gatekeeping, and confusion regarding the best way to measure the construct in research studies. Once scholarly consensus is achieved, researchers can pursue meaningful studies of the multidimensional nature of maternal gatekeeping, and more carefully delineate the degree to which this process influences family life in general and parenting and coparenting processes specifically. Daniel J. Puhlman Florida State University See Also: Coparenting; Gender Roles; Parenting. Further Readings Adamsons, Kari. "Using Identity Theory to Develop a Midrange Model of Parental Gatekeeping and Parenting Behavior." Journal of Family Theory and Review, v.2 (2010). Allen, Sarah and Alan Hawkins. "Maternal Gatekeeping: Mother's Beliefs and Behaviors That Inhibit Greater Father Involvement in Family Work." Journal of Marriage and the Family, v.61 (1999). Fagan, Jay and Marina Barnett. "The Relationship Between Maternal Gatekeeping, Paternal Competence, Mothers' Attitudes About the Father Role, and Father Involvement." Journal of Family Issues, v.24 (2003). Ganong, Lawrence, Marilyn Coleman, and G. McCaulley. "Gatekeeping After Separation and Divorce." In Parenting Plan Evaluations: Applied Research for the Family Court, L. Drozd and K. Kuehnle, eds. Oxford: Oxford University Press, 2012. Puhlman, Daniel and Kay Pasley. 
“Conceptualizing Maternal Gatekeeping.” Journal of Family Theory and Review, v.5 (2013). Schoppe-Sullivan, Sarah, Geoffrey Brown, Elizabeth Cannon, Sarah Mangelsdorf, and Margaret


Sokolowski. “Maternal Gatekeeping, Coparental Quality, and Fathering Behavior in Families With Infants.” Journal of Family Psychology, v.22 (2008). Van Egeren, Laurie and Dyane Hawkins. “Coming to Terms With Coparenting: Implications of Definition and Measurement.” Journal of Adult Development, v.11 (2004).

Gay and Lesbian Marriage Laws Marriage is a sacred union joining two people in the eyes of their family, friends, and the law. Marriages are legally binding relationships that entitle the couple to certain protections and benefits on state and federal levels. In addition to the legal benefits associated with marriage, there may also be religious components. As public awareness of same-sex relationships has increased, debates surrounding the legal, traditional, and religious foundations of marriage have intensified. The ability of researchers and government agencies to more accurately assess the portion of the population that is lesbian and gay, as well as the implications for these couples and the larger population, should have an impact on these legal debates. A Brief Review of Restrictions on Marriage Current debates surround who should have access to marriage, and the benefits associated with it, but there have been other instances in which restrictions placed on marriage have been called into question. By definition, marriage is considered the legal binding of two individuals. Some definitions of marriage emphasize that it may occur only between one man and one woman; however, beyond the sex of the couple seeking to marry, there are certain criteria that must be met in each circumstance for the marriage to be considered legal. The legal requirements to obtain a marriage license are that both parties must be at least 18 years old, the union must be consensual (soundness of mind and of one's free will), and neither party may have a current spouse. While there are some variations within states' requirements to obtain a marriage license, the differences are not

significant in terms of age and familial relationships. The more stringent regulations placed on marriage can be seen in the antimiscegenation laws that were once in effect in the United States. While these laws were meant to prevent couples of different racial backgrounds from marrying one another, the emphasis was primarily on preventing whites and blacks from marrying. Following the abolition of slavery, states slowly began to dispense with their restrictions on interracial marriage; however, some states were not quick to change. This was exhibited in the case of Loving v. Virginia (1967), in which the Supreme Court ruled that it was unconstitutional to restrict couples from marrying solely based on race. Loving v. Virginia marked a turning point in recognizing the diversity that occurs within relationships. While the law cannot rule based on emotions or love, it can set guidelines for what composition of relationships will be legally recognized and entitled to benefits. The Origins of Same-Sex Marriage Laws Although same-sex marriage may not directly compare to the experiences of interracial couples, it marks another chapter in discussions surrounding restrictions on who can marry. Interest in the rights of gay men and lesbians can be traced back to the 1980s, but it was not until the 1990s that there was a steady stream of discussions on same-sex marriage. Hawaii sparked nationwide discussions concerning the marriage of gay men and lesbians in 1993. Under Hawaiian law, it would have been possible for marriages between gay men and lesbians to occur. Because there was no established precedent regarding the recognition of same-sex unions, there was growing concern among the states that they would be required to recognize same-sex unions that were performed in other states, regardless of their policies toward these unions. In response to this growing concern, Congress passed the Defense of Marriage Act (DOMA), which President Bill Clinton signed into law in 1996. 
DOMA stated that the federal government would recognize as legal only marriages occurring between one man and one woman, and thus only those marriages would be entitled to federal benefits. Additionally, states were able to determine what their stance would be on same-sex marriage. They would also be able to determine



whether they would recognize same-sex unions performed in other states. DOMA was meant to allow states to determine what was best for their constituents. However, regardless of its original intent, DOMA became a means for a separate but unequal version of marriage to form for gay men and lesbians. Under DOMA, even if states permitted same-sex marriages to occur, the federal government would not recognize them. This meant that same-sex couples would be denied more than 1,000 federal benefits associated with marriage. The legal recognition of the relationships of lesbians and gay men has been tenuous, at best, and legal recognitions of relationships are often the stepping-stone to additional benefits (i.e., sharing of insurance, medical decisions, visitation in hospitals, retirement benefits, and parenting rights). So, when discussions are raised regarding same-sex unions, there is often more to the debate than marriage.


The State of Same-Sex Marriage Policies Since the implementation of DOMA, there has been much back and forth regarding the status of same-sex unions. In 2011, President Obama stated that his administration would no longer defend DOMA in court, and in 2013 the Supreme Court struck down the law's central provision, so it is no longer enforced by the federal government. States determine what, if any, forms of relationship recognition they are willing to offer (i.e., domestic partner benefits, civil unions, or marriages). Some states have placed restrictions on who may obtain a marriage license (i.e., one man and one woman). In 2004, Massachusetts became the first state to issue marriage licenses to same-sex couples. What slowly followed was the passage of favorable same-sex marriage policies in a number of states. As of August 2013, 13 states (California, Connecticut, Delaware, Iowa, Maine, Maryland, Massachusetts, Minnesota, New Hampshire, New York, Rhode

A same-sex couple gets married in San Francisco’s City Hall in 2008, shortly after the state of California granted marriage licenses to same-sex couples. The passage of Proposition 8 put a halt to same-sex marriages in that state shortly thereafter, but in 2010 Proposition 8 was ruled unconstitutional and the state again began to issue marriage licenses to same-sex couples.


Island, Vermont, and Washington) and the District of Columbia issued marriage licenses to same-sex couples. Most of the momentum in states allowing same-sex marriages to occur in their jurisdictions came from 2008 onward. However, even as states began issuing marriage licenses to lesbian and gay couples, there were additional states that enforced policies that denied gay men and lesbians access to marriage. The laws of the states that restrict marriage to one man and one woman fall into two categories: those that restrict access based on law, and those that restrict based on constitutional amendment. There are four states (Indiana, Pennsylvania, West Virginia, and Wyoming) that restrict unions to heterosexual couples based on law. These states differ from the 29 states (Alabama, Alaska, Arizona, Arkansas, Colorado, Florida, Georgia, Idaho, Kansas, Kentucky, Louisiana, Michigan, Mississippi, Missouri, Montana, Nebraska, Nevada, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, and Wisconsin) that have constitutional amendments restricting marriage to one man and one woman. While both law and constitutional amendment effectively block lesbians and gay men from accessing marriage licenses in these states, the ramifications of the means of restricting access are different. Typically, the process to change the constitution of a state is more rigorous than changing a law, which means that overturning a constitutional amendment will take more effort than it would to overturn a law. Regardless of what each state's laws say about same-sex marriage, there is often much debate regarding the constitutionality of the issue. This can be more readily seen in whether states have passed statutes or constitutional amendments regarding the issue. 
California has presented one of the more unusual cases regarding its same-sex marriage laws in that the state has made efforts to pass both laws and constitutional amendments regarding same-sex marriage. The most recent and notable example of this can be seen in the passage of Proposition 8 in 2008. Prop 8 was a proposed amendment to the California constitution that would restrict marriage to one man and one woman. The proposition

passed, causing many to question the status of the marriages of the lesbians and gay men who had been married earlier in the year. Ultimately, it was decided that those marriages would still be considered legal and, in August 2010, U.S. District Court Chief Judge Vaughn Walker ruled that Prop 8 violated the U.S. Constitution and therefore could not be enforced, although his ruling was stayed pending appeal. The ruling eventually reopened the doors for lesbians and gay men to marry in California. In March 2013, the Supreme Court of the United States heard two cases pertaining to the legality of certain restrictions that have been placed on lesbians' and gay men's access to marriage. The cases of Hollingsworth v. Perry and United States v. Windsor, argued before the Supreme Court, highlighted the changing meaning associated with marriage. These cases brought forth claims that California's Proposition 8 (Hollingsworth v. Perry) as well as the federal Defense of Marriage Act (United States v. Windsor) were unconstitutional in their restriction of marriage to one man and one woman, thus denying same-sex couples equal access to the institution of marriage. The decision by the Supreme Court to hear these cases followed on the heels of five years of debate in California regarding Prop 8, and President Barack Obama's announcement in 2011 that he would not support the continued enforcement of DOMA. On June 26, 2013, the Supreme Court announced its decisions on the cases. In United States v. Windsor, the case pertaining to the constitutionality of the federal Defense of Marriage Act (DOMA), the court ruled 5–4 to overturn DOMA. In his closing remarks, Justice Kennedy noted the following: DOMA singles out a class of persons deemed by a State entitled to recognition and protection to enhance their own liberty. It imposes a disability on the class by refusing to acknowledge a status the State finds to be dignified and proper. 
DOMA instructs all federal officials, and indeed all persons with whom same-sex couples interact, including their own children, that their marriage is less worthy than the marriages of others. The federal statute is invalid, for no legitimate purpose overcomes the purpose and effect to disparage and to injure those whom the State, by its marriage laws, sought to protect in personhood and dignity. By seeking to displace this protection and




treating those persons as living in marriages less respected than others, the federal statute is in violation of the Fifth Amendment. This ruling still permits states to make decisions regarding whether they will allow same-sex marriages to occur; however, any state that legally permits same-sex couples to marry will also see that those relationships receive the federal benefits that accompany marriage. On the same day, the Supreme Court also announced its ruling on the case pertaining to California's Proposition 8 (Hollingsworth v. Perry). The court ruled that the private parties who brought the case lacked standing to do so. This meant that marriages could continue to be performed in California (they had been on a hiatus pending the Supreme Court decision). Immediately following this ruling, marriage licenses were once more issued to lesbian and gay couples. While there is still heated controversy over whether same-sex marriages should be legal in California, for the time being, they will continue to be performed for all couples who meet the legal requirements. The laws that govern who can or cannot marry have seen noticeable shifts over the last 25 years. States still have the ability to determine who can or cannot marry within their jurisdiction, within reason, which means that there will continue to be a range of laws pertaining to same-sex marriage. Melanie L. Duncan University of Florida See Also: Adoption, Lesbian, Gay, Bisexual, and Transgender People and; Civil Unions; Defense of Marriage Act; Domestic Partner Benefits; Same-Sex Marriage. Further Readings Polikoff, Nancy D. Beyond (Straight and Gay) Marriage: Valuing All Families Under the Law. Boston: Beacon Press, 2008. Supreme Court of the United States. Hollingsworth v. Perry. http://www.supremecourt.gov/opinions/12pdf/12-144_8ok0.pdf (Accessed November 2013). Supreme Court of the United States. United States v. Windsor. 
http://www.supremecourt.gov/opinions/12pdf/12-307_6j37.pdf (Accessed November 2013).


Gender Roles The term gender roles refers to gender-specific social expectations toward men's and women's behavior. Gender roles are often understood as internalized expectations that also shape female and male identities. They are defined as institutions in the sociological sense of forming stable patterns of social order. With varying definitions and meanings, role concepts can be related to different theoretical traditions. Over the years, the concept of gender roles has been substantially criticized and repeatedly rejected. Alternative, more dynamic approaches to understanding the processes of gender differentiation in society have been developed. It was, however, a highly influential concept that is still widely used, both in academic discourses and in a wider public. As such, the term gender roles is often not clearly defined when used, and the theoretical assumptions that it refers to in a given context vary. With regard to family, gender roles shape family structures and relations, socialization, parenting, familial distributions of labor, and, as a consequence, broader societal gender relations and inequalities. Theoretical Foundations Role theories refer to different social positions that society members hold, and they deal with the expectations, behaviors, and identities that are linked to these positions. Approaches using the role concept have their roots in different theoretical traditions. Two of the most important are the structural-functionalist tradition and the tradition of symbolic interactionism. The first strand of role theories is often linked to anthropologist Ralph Linton, sociologist Ralf Dahrendorf, and structural-functionalist theorist Talcott Parsons. In this tradition, a social role is defined as a set of behavioral expectations that a society member in a specific social position is confronted with by reference groups in society. 
The person holding the social position is expected to behave in accordance with the social norms applying to this position. Roles are internalized by learning and during socialization, and they are enforced by negative sanctions. The structural-functionalist understanding of roles as represented by Talcott Parsons was


influential in the sociology of the family in the mid-20th century. Parsons was interested in social systems and how they stabilize. He conceived of social systems as the organization of status, roles, and norms, and he assumed that different roles have their function within social systems. This included gender roles, which Parsons perceived as biological in origin and learned in the process of socialization. Parsons distinguished between an expressive female role, suitable for caring for children and husband, and an instrumental male role, corresponding to the necessities of working in a profession. Parsons's theories received strong criticism. Critical arguments include that roles are perceived as static, that the concepts of gender and nuclear family are normative and conservative, that women's work and power relations in couples are neglected, and that the analysis focuses on the family and thus does not take the broader social effects of gender roles into account. Meanwhile, symbolic interactionism, interested in how society members attribute meaning to things (e.g., to words, gestures, and actions), interact, and define situations, has also used the concept of roles, albeit in a different way. It criticized the prevailing understanding of roles for being too mechanical, and for ignoring subjects' agency. Hence, this approach emphasizes individuals' active and interactive contributions, pointing out that they attribute meaning to roles and negotiate them through interaction. In the work of social psychologist George Herbert Mead, a prominent predecessor of symbolic interactionism, role taking is also a major process in the course of socialization and the formation of identity. Sociologist Erving Goffman uses roles as metaphors borrowed from the field of theater to explain how people interactively display and manage their roles. 
From a social-interactionist point of view, then, roles in general and gender roles in particular are less prescribed and much more subject to negotiations and agency than in the structural-functionalist understanding. Gender Studies and Feminist Critique With the influence of interactionist and social-constructionist approaches in the field of gender studies, understandings of "sex" (understood as the biological aspects) and "gender" (understood as the social aspects) were profoundly challenged. These

approaches focus on the microlevel of everyday interactions as the level for analysis (rather than, for example, the political system, the stratification of society, or patriarchy). Drawing on pioneering work by Harold Garfinkel and Erving Goffman, interactionist gender theories became a key approach in gender studies, particularly when Candace West and Don H. Zimmerman published their article "Doing Gender" in 1987. In the doing gender approach, the concept of gender roles was rejected. The authors argued that gender is neither a set of traits, a role, nor a variable, but is produced in interactions. The term role, thus, could not capture the interactive character of the processes of social construction of gender that the doing gender concept suggested. In summary, gender role concepts have been criticized as essentialist, deterministic, and oversimplifying with regard to historical and contextual variations. Furthermore, feminist theory criticized them for shifting the attention from power relations and structural inequalities to individual problems of managing and combining different roles (e.g., working mothers). Methodological Approaches To make statements about perceptions of gender roles, scholars often rely on attitude surveys. In attitude surveys, women and men are usually asked about their attitudes toward male and female roles and tasks in life and society, for instance with regard to childcare, employment, or division of labor. The information provided by these quantitative data can be biased by social desirability, that is, by the methodological problem that respondents tend to say what they think is socially desirable, rather than what they really think. Moreover, there are well-documented differences between attitudes and actual behavior, for instance, with regard to active fathering, where men's actual social practices often lag behind their stated intentions and attitudes. 
Nevertheless, data from attitude surveys are useful for researching gender roles in that they allow for international comparisons and comparisons over time. Qualitative approaches to investigating gender roles may take many forms, ranging from interviews to participant observation or the analysis of artifacts and mass media products. However, in terms of concepts and theory, they will often



work with other approaches rather than the gender role approach, for example, with doing gender and the social construction of gender. Gender Roles, Gender Stereotypes, and Discrimination The boundaries between the concept of gender roles and the concept of gender stereotypes can appear blurred. However, stereotypes focus on the cognitive aspects of assumptions about the characteristics of men and women, whereas gender roles focus on behavior and expectations. Gender stereotypes have been shown to be stable over time. They can be investigated by means of questionnaires in which respondents are asked to ascribe lists of characteristics to women and men. Alice Eagly's social role theory of sex differences and similarities suggests that people tend to think that men and women have characteristics that are typical for their social roles (e.g., housewives or feminized low-status jobs for women; high-status jobs and breadwinner roles for men). That is, from observed role behavior, people draw conclusions about the characteristics of the person having the role. These patterns can also be the basis for gender-related discrimination. The historian Karin Hausen has shown that ideas of polarized gender roles developed in the 18th and 19th centuries under the influence of the emerging human sciences such as psychology, anthropology, and medicine. These sciences created new images of the "natural," or biological, as well as the psychological, differences between men and women. Hausen calls this the "polarization of sexual stereotypes." This polarization formed an important background for the social division of labor into a male public sphere and a female private sphere. Stability and Change Since the second half of the 20th century, drastic changes have been taking place with regard to demographic structures, female education, and female labor market participation in industrialized countries. 
Within the sociology of the family, a crucial, ongoing topic of discussion is to what extent previously common family patterns are currently dissolving. It has been argued that institutions such as marriage or parenthood are losing their binding character in the United States, Europe, and


other industrialized parts of the world. Theoretical approaches and empirical evidence have provided support for both points of view: the continuing prevalence of strong norms with regard to family forms and family life on the one hand, and new freedoms and choices on the other. A similar and related argument can be brought forward with regard to gender roles: Since the mid-20th century, behavior (e.g., female employment) as well as attitudes (e.g., with regard to gender equality) have profoundly changed. At the same time, gender roles prove to be remarkably persistent. Around the world, including the United States, women still do the main part of care work (for children and the elderly) and the largest part of unpaid work in general (domestic work and care work), and they are still less integrated into the labor market than men. Women still have less power, money, and time free from paid or unpaid work than men. Particularly in couples with children, aspects and variations of the male-breadwinner and female-housekeeper model continue to be a social reality for many, and an ideal for some.

Reproduction and Parenting
In family contexts, an important aspect of gender roles is their representation in ideas about mother roles and father roles. Because "fathers" and "mothers" are concepts linked to binary concepts of gender in the same way as the terms men and women, gender roles and parenting roles overlap. In other words, gender roles inform parenting roles, and parenting roles are a crucial aspect of gender roles because differences in genitals and reproductive functions are commonly regarded as the most constitutive difference between genders. Social expectations toward fathers and mothers have changed in the course of history. For instance, as the French philosopher Elisabeth Badinter has shown in her work on "mother love," perceptions and norms regarding appropriate mothering behavior and "good mothers" have taken many shapes.
Scholars in contemporary family and gender studies also investigate changes in mother roles and father roles. With regard to father roles, concepts such as “new fatherhood” are discussed, suggesting a change toward men’s more active and direct involvement in childcare. However, the degree of the empirical manifestation and the


precise contents of "new fatherhood" are contested. Moreover, as is the case for motherhood ideals, ideals and practices of new fatherhood differ between regions and social classes.

Gender Roles and Inequality
Gender roles, especially when they are traditional, binary, and polarized, are not merely "value-neutral" differences but are also interlinked with hierarchies and inequality. Many of these hierarchies are closely linked to the family. In the course of socialization in families, gender differences are reproduced and internalized, as many different strands of theory would agree, even if they might draw different conclusions. Because gender inequalities are momentous and extremely persistent, families can thus be said to contribute to maintaining inequalities by reproducing gender differences. The intersections between families and gender roles, meanwhile, also play a part with regard to inequalities in many other respects. A case in point is violence against women, which primarily takes place in family contexts. Another example concerns inequalities in the labor market, which are connected to actual or expected discontinuous female careers due to family obligations. Unequal distributions of unpaid work and income differences linked to women's main responsibility for care and domestic work are further illustrations of the links between gender roles, families, and structural gender inequality.

Karin Sardadvar
FORBA–Working Life Research Centre, Vienna

See Also: Breadwinner-Homemaker Families; Constructivist and Poststructuralist Theories; Feminist Theory; Functionalist Theory; Marital Division of Labor; New Fatherhood; Symbolic Interaction Theory.

Further Readings
Badinter, Elisabeth. Mother Love: Myth and Reality. Motherhood in Modern History. New York: Macmillan, 1981.
Bahrdt, Hans Paul. Schlüsselbegriffe der Soziologie. Eine Einführung mit Lehrbeispielen. (Key Terms of Sociology: An Introduction With Examples.) München, Germany: C. H. Beck, 1984.
Cheal, David.
Sociology of Family Life. Basingstoke, UK: Palgrave Macmillan, 2002.

Eagly, Alice H. Sex Differences in Social Behavior: A Social-Role Interpretation. Hillsdale, NJ: Erlbaum, 1987.
Goffman, Erving. The Presentation of Self in Everyday Life. New York: Doubleday, 1959.
Mead, George Herbert and Charles W. Morris, ed. Mind, Self and Society From the Standpoint of a Social Behaviorist. Chicago: University of Chicago Press, 1934.
Parsons, Talcott and R. F. Bales. Family: Socialization and Interaction Process. New York: Free Press, 1955.
Ribbens McCarthy, Jane and Rosalind Edwards. Key Concepts in Family Studies. London: Sage, 2011.
West, Candace and Don H. Zimmerman. "Doing Gender." Gender and Society, v.1/2 (1987).

Gender Roles in Mass Media
Mass media is a broad category of communication designed to be accessible to a large audience. Although the specific forms of communication have changed with technological advances, messages have been sent with the intent of influencing the masses since the Gutenberg press was invented. Now, mass media is virtually inescapable. The ways in which companies and entertainers seek consumers' attention and loyalty are constantly evolving, and their ultimate influence is immeasurable. Individuals and families willingly consume the media and often underestimate both the ubiquity and the cumulative power of its messages about what it means to be male or female. Females are represented far less often in many forms of media, and they tend to be sexualized or minimized when they are. Men are often expected to be aggressive and decisive, with little support or input from others. Mass media includes all print media (e.g., books, magazines, and newspapers), radio and recordings (albums and CDs), movies, television (including broadcast news), computers and the Internet (including games, blogs, podcasts, and social media), and advertising (billboards, commercials, and print ads). Additionally, there are several subcategories of mass media that include sports




entertainment, amusement parks, and cell phones. In 2013, five companies owned 95 percent of all mass media, allowing a relatively limited group of business leaders to decide how to deliver virtually all of the news, entertainment, and advertising around the globe.

History
Getting the same message to most, if not all, of the people in a particular society is an important element of civilization. The messages have varied in scope to cover everything from politics to sermons to plans for battle and protection. In fact, the earliest signs of propaganda, or material designed to influence group attitudes about a divisive subject, date to 515 B.C.E. More modern examples of the early use of mass media to influence gender roles are seen from the early 1910s, when the National Association Opposed to Woman Suffrage distributed a pamphlet urging women to remember the benefits that their current roles had and to reject the unknown detriments of voting rights. Although radio had been used as a means of communication since the late 1800s (and print media had been the only option before that), it was not until the sinking of the Titanic became a shared experience that the medium became popular. Gender-specific messages rose in popularity throughout World War I, and by World War II, the image of Uncle Sam pointing and declaring "I Want You" was the message to men as they entered the armed forces. Rosie the Riveter, with her "We Can Do It" motto, simultaneously became a symbol to encourage women to feel confident that they could take on men's professional roles while the men were at war. The expectation that women would willingly return to home-based roles once they were no longer needed in the workplace was evident in other ads of the time. When the men returned, however, women had experienced a significant shift, and they were not as enthusiastic about returning to their prior roles.
In what many suggest was an effort to lure women back into the home, ads became focused on the appeal of household appliances that would allow a housewife to care for her family with less effort. Cinema and television had also joined print and radio as new forms of mass media. Shows such as Father Knows Best and Leave It to Beaver depicted women as fully satisfied with their roles in the home. These

J. Howard Miller’s depiction of Rosie the Riveter became an iconic image during World War II, encouraging women to feel confident about taking on the traditional roles of men in their absence.

shows also depicted men as somewhat bemused and tolerant of events in the household, often acting as if the wife were one of the children. Shows such as I Love Lucy did not shy away from showing Ricky reprimanding, or even spanking, Lucy if he deemed her too mischievous, and one of the most famous catchphrases in American TV comes from The Honeymooners, in which Ralph Kramden often raised his fist to his wife, comedically promising, "To the moon, Alice."

Toys
While toys are not a form of mass media, they are prominently featured in multiple forms of it (e.g., books, advertisements, television shows, and movies), and are among a child's first tools of socialization to cultural ideals. Since 1975, there has been a gradual increase in gender-specific marketing.


Currently, most major toy stores have separate boys' and girls' sections. Even when the sections are not explicitly labeled, the distinctions are obvious because the girls' section is filled with pink, purple, and glittery designs, while the boys' section is marked by many blue, black, and red packages. The toys offered in the boys' section are a mix of weapons, vehicles, and toys that emulate professions such as police officer or firefighter. The toys in the girls' section are often symbols of household or personal care, such as baby dolls, toy kitchens, and pretend makeup kits. When commercials and print ads for toys have been analyzed for language, it has been repeatedly demonstrated that commercials for boys' toys often use words such as battle, launch, action, and weapon. The commercials for girls' toys use significantly more words that suggest kindness and cooperation, such as cute, perfect, and friendship. The majority of mass media outlets are owned by only a few companies. This allows a company to market a toy through traditional print and commercial advertising, and to create television shows, movies, and books about the toy. Additionally, licensed characters affiliated with a toy are commonly found on everything from diapers and underwear to vitamins, shampoo, and even musical rectal thermometers. Some toys that began as less gender specific have shifted over the years to become more stereotypical in their appearance or marketing strategies. My Little Pony, introduced by Hasbro in 1981 as Earth Ponies, included male and female adult ponies, as well as baby versions of the adults. Over time, the My Little Ponies have become more traditionally female looking, with large, doe-like eyes and slimmer bodies. My Little Pony was eventually developed into a television series, multiple movies, video games, comic books, and Web sites. By 2013, Hasbro had introduced a new series of characters, the Equestria Girls, who are some of the ponies reimagined as slim, human teenage girls.
Many other toys followed similar paths, including G.I. Joe and LEGO building blocks. There is no toy, however, that has taken as much blame for distorting society’s vision of an ideal woman’s body as Barbie. The Mattel toy has been analyzed and judged for evolving into a figure that would be physically impossible for a human woman to emulate. In addition to Barbie’s physical appearance, Mattel has been sharply criticized for the messages that Barbie has explicitly or implicitly offered.

For instance, a 1991 talking version of Barbie stated "Math class is tough" as one of her phrases. Barbie's male counterpart, Ken, has also been criticized for his unrealistic physical proportions, and there has been concern expressed that little boys could compare themselves unfavorably to him. Petitions have been filed (some by minors) to ask companies to consider their responsibility around gender-based images and advertising. In 2012, Hasbro began producing a black-and-silver version of its Easy-Bake Oven after a 13-year-old girl garnered support with her petition. She had written the petition on behalf of her 4-year-old brother, who wanted an Easy-Bake Oven but did not want it in pink and purple, the only model made.

Newspapers and Newsmagazines
In major newspapers such as the New York Times, USA Today, and the Washington Post, men are quoted as sources in stories approximately three and a half times as often as women. Some have argued that men are more available as sources for stories because of their higher representation in positions of power or authority (e.g., men account for 82 percent of U.S. representatives, 80 percent of U.S. senators, and 96 percent of chief executive officers of Fortune 500 companies), so they are often the more logical choice. Others, however, have tracked that the ratio of quotes is roughly the same for stories focused on issues such as women's rights, crimes against women, and reproductive rights, and argue that journalists simply are not using the resources available to help them identify appropriate female sources. In the past 25 years of declaring a Person of the Year (the title was Man of the Year until 1999), Time magazine has not featured a woman by herself. On two occasions, 2002 and 2005, women were included as part of a trio that was highlighted (as the "Whistleblowers" and the "Good Samaritans," respectively).
Roles of wife and mother are often given significant attention in news stories that are about women, even when the article is ostensibly focused on a different role. In May 2013, an American astronaut, Karen Nyberg, along with Italian astronaut Luca Parmitano and Russian cosmonaut Fyodor Yurchikhin, traveled to the International Space Station, where they would remain for the next six months. In a Reuters article describing the trip, the two men were described with their professions, educational backgrounds, and a bit



about their previous missions. Despite having a doctorate in mechanical engineering, several years of experience, and several awards, Nyberg was described as an American mother who had left her husband and 3-year-old son for the mission. Neither man's family was mentioned, although both had children. That article and others also described the sewing and crafting supplies that Nyberg was bringing with her into space. Also in 2013, the New York Times's obituary of Yvonne Brill, a rocket scientist who invented a propulsion system that kept communication satellites in orbit, described her as a woman who followed her husband when they moved for his job, raised three children, and could make delicious beef stroganoff. Her professional accomplishments were noted later in the obituary. In general, obituaries about men far outnumber those about women in major market newspapers.

Movies
In recent family films (rated G, PG, or PG-13), 72 percent of all speaking roles were filled by men. The women were more likely to be shown as parents or caregivers and to be in committed relationships. Male characters accounted for 80 percent of the employed characters, and their careers were most often in the armed forces or some type of crime. Both sexes were stereotyped. The men tended to be tough and aggressive while not showing any emotional reaction to their circumstances. The women, on the other hand, were often valued for their youth and physical appearance and were noted to be seeking romantic relationships with men who had deceived them. The women also showed bravery and sacrifice, often rescuing family or friends from danger. As in other business sectors, men are heavily overrepresented in the creative and administrative leadership of film projects.
Magazines and Magazine Ads
In a 1944 issue of Ladies' Home Journal, an article reminded women to try to appear as attractive as possible because they remained the "weaker sex." Contemporary women's magazines do not give the same overt message, but they do emphasize physical attractiveness and ways to be more pleasing to men. In 2012, an eighth grader's petition eventually convinced Seventeen magazine to publicly vow never to use digital alteration to change the bodies or faces


of its models. Gender-specific magazines continue to focus on physical attributes and typically offer articles that tell readers how to achieve a different body. The weight gap between the average American woman and the average fashion model has widened considerably over the past 60 years. Women have been demonstrated to experience a drop in satisfaction with their bodies after as little as three minutes of exposure to a women's magazine. There has been no evidence that men experience the same effect with exposure to men's magazines. The ads in magazines frequently place women in literally or figuratively submissive roles to a man, further reflecting gender role stereotypes. In the 1960s, ads showed images of men stepping on a kneeling or prostrate woman, with copy describing the effect that the product would have on women, or the power men would feel with the product. In the 2010s, controversies periodically arise from magazine ads that feature men overpowering or assaulting women, with copy that is not significantly different from that of the 1960s. Even ads that do not feature men and women together, or that are for goods that may be considered gender neutral, have different standard poses for each. Men are often positioned in a solid stance and, if the object is included in the ad, are typically shown grasping or otherwise manipulating it. Women are often shown with legs apart, positioned to be either lying down or leaning (their weight shifted to one foot if they are standing), and caressing or lightly touching the advertised object.

Diana C. Direiter
Lesley University

See Also: Barbie Dolls; Cult of Domesticity; Internet; Magazines, Women's; Mothers in the Workforce.

Further Readings
Gauntlett, David. Media, Gender and Identity: An Introduction. New York: Routledge, 2008.
Klos, Diana Mitsu. The Status of Women in the U.S. Media 2013. Women's Media Center, 2013.
Orenstein, Peggy. Cinderella Ate My Daughter: Dispatches From the Front Lines of the New Girlie-Girl Culture.
New York: HarperCollins, 2011.
Smith, Stacy, Marc Choueiti, Ashley Prescott, and Katherine Pieper. "Gender Roles and Occupations: A Look at Character Attributes and Job-Related Aspirations in Film and Television." Los Angeles:


Annenberg School for Communication and Journalism, University of Southern California and Geena Davis Institute on Gender in Media, 2011.

Genealogy and Family Trees
Genealogy has become one of America's favorite pastimes. It is increasingly conducted not only by professionals but also by laypeople interested in their family history. It has become a profitable business in the United States, with companies advertising and selling their services to an ever-increasing audience. The genealogical impetus has been present in the United States since its beginning. In the recent past, there have been important shifts with regard to the meaning of genealogy in American culture; the increased exploration of the intersections of race and genealogy is at the root of these revisions. Genealogical practices are tools in the creation of imagined communities. They contribute to the building of families but also to the interruption of connections. In American culture, genealogy and family trees shape conceptions of the family, kinship, and inheritance.

Development in the United States
The first comprehensive publication of an American genealogy is Luke Stebbins's The Genealogy of Mr. Samuel Stebbins and Hannah His Wife, From the Year 1701 to 1771, tracing the author's New England family connections. In early America, there was skepticism and hesitancy toward genealogies because they fueled fears of elitism and the establishment of new ruling classes, especially with regard to the British Empire. Genealogy was systematized as a field by John Farmer, a historian and genealogist, in the early 19th century. His achievement was the establishment of "antiquarianism," the practice of recognizing and honoring the achievements of early Americans through genealogy. This led to the foundation of the New England Historic Genealogical Society, one of the earliest such societies. When new immigrant groups came to the United States following the Civil War, established

U.S. families used genealogy as a reactionary tool, fueling prejudices against newcomers. This reemergence of genealogy in public discussion coincided with the popularization of the eugenics movement. This association diminished public interest in the field of genealogy in the long run. In the 1930s and 1940s, a new school of "scientific" genealogists established itself, aiming to reestablish genealogy as a serious field of research and study. Since the 1970s, there has been a resurgence of public interest in genealogy. This intersects with a general curiosity of the baby boomer generation about exploring the past and its multiple meanings. The rising interest in genealogy, and the ensuing democratization that allowed laypeople to participate in it, was and is not always welcomed by professionals in the field, even though the strengthened engagement of the public contributed to the opening up of resources and the investment of more finances into archives and other potential sources for genealogists. Genealogy is a subdiscipline of neither history nor archival science. It remains a separate field, with distinct conventions and resources.

Meaning and Importance of Genealogy and Family Trees in the United States
Family trees and family genealogies map an individual's belonging to a certain group of people through visualization and narrative. This process involves making critical decisions, such as which lines of ancestry matter. Genealogy marks and establishes difference. A family tree is a type of diagram looking back in time. It lists basic information about family members, such as their names and dates or years of birth. There are two different approaches to family trees. The first is to trace down, starting with one ancestor and following this ancestor's many descendants. The second is to trace up, following the family in the reverse direction, starting with one descendant and including multiple ancestors.
The former, with its focus on the progenitor, is more traditional. A complete genealogy is usually a narrative that includes and refers to documents and data that the genealogist has collected. Earlier forms of genealogy refer to data collections and not so much to the investigation of historical circumstances. Today, the



process of investigation can become a genealogical narrative in itself. Genealogies and genealogical practices are located at the interface of race, nation, gender, and family. Genealogy connects biology to culture by establishing a connection between a supposedly verifiable biological heritage and personal characteristics and traits. Ancestry becomes an important source of personal and collective identity. Genealogical practices establish an order according to generation and according to different degrees of relatedness. Through genealogy, origin is specified in terms of time, place, and perspective. Genealogy argues that a family's past, social class, and supposed race matter into the present. Family histories and genealogies become cultural capital among select groups of middle-class white Americans, enabling them to celebrate their ancestry and keep their social status. While genealogical practices have traditionally been associated with white, upper-middle-class males, today it has become much easier and more popular for others, such as immigrants and the African American descendants of slaves, to engage with their genealogy. Current genealogical attempts focus more on previously hidden and silenced aspects of U.S. society and history, such as interracial relationships. While genealogy has a conservative impetus, it also has subversive potential and can be used to challenge assumed pure identities and counteract ideas of cultural and biological purity. Such genealogical practices of the latter kind, critical of established hierarchies of power and its distribution, are relatively new and rare by comparison.

Resources of Genealogists
Genealogy as a professionalized practice has developed a specific apparatus, making resources available.
There are societies focusing on the practice of genealogy, such as the National Genealogical Society, the American Society of Genealogists, and the New England Historic Genealogical Society; there are specialized journals such as NGS Quarterly, The Genealogist, or Ancestry Magazine, and several indexes, for example the International Genealogical Index. There are guides to genealogy and to researching specific genealogies, such as African American genealogical efforts. Genealogists make extensive use of such facilities as


archives, libraries, and other data collections on the Internet. Genealogy focuses on detail and on individual lives. It requires accurate study of documents and detailed knowledge about the documents that are used, when and how they were issued, and which historical sources can be trusted. Genealogy often uses the detached language of science, but it is not a science. While it claims to be "true" and objective, genealogy also helps suppress stories that cannot be made to conform to its tools and established formats. Rather than a science, it is an interdisciplinary cultural practice. Family genealogy often interweaves folk tradition, archival research, and family stories. Tools commonly used to compose a genealogy and a family tree include family artifacts; family documents such as birth and death certificates, marriage and divorce records, census documents and church records, contracts, and immigration and naturalization records; oral histories and stories passed on for many generations; and family photographs and other personal documents such as letters, diaries, or family Bibles. Genealogists today also make use of population genetics and its findings. However, the admixture tests available today are typically not able to answer all questions regarding a family's genealogy. Today, much genealogy is conducted with the help of the Internet and is supported by specialized genealogical computer software for collecting and saving important information and arranging it according to different factors. These programs help to generate accurate family trees. Even though much work can be done from home, genealogists are often interested in traveling to the places where their ancestors lived. This creates a connection between location and heritage.

Genealogy and the Church of Jesus Christ of Latter-day Saints
One group strongly associated with genealogy is the Church of Jesus Christ of Latter-day Saints.
This community has a strong interest in genealogy because its members believe that their ancestors, if posthumously baptized, will be saved. This process of saving requires the members to establish a strong link to the forebear. The community has collected genealogical material since the late 19th century. This is the


largest genealogical archive in the world. Today, the database can be accessed online. It has become an important resource for people from all over the world in search of their family.

Genealogy and the Immigrant Experience
Within American culture, genealogical practices stand in a vexed relationship to the American melting pot. While the idea of the melting pot is overall egalitarian and refers to the idea that immigrants can reinvent themselves, the genealogical impetus in American culture stands for origins, hierarchy, and descent. At the same time, genealogy is used by immigrants to make sense of their identity and experience as immigrants. Descendants of immigrants from Europe can make use of the extensive documentation available at Ellis Island. The documents often make it possible to find out when an ancestor arrived in the United States and which ship the ancestor boarded. This archive is also available online, and it records the names of about 22 million immigrants who passed through the famous port of entry between 1892 and 1924. Many descendants of European immigrants follow their ancestors' path back to countries such as Ireland, Italy, or Germany, oftentimes with detailed information about the community where the ancestors lived.

Genealogy and the African American Experience
Genealogical practices validate experiences and family stories. This is especially important for groups that have suffered oppression. Slaves were cut off from their family members and heritage. For some, genealogical undertakings can help reestablish these lost connections. There was a surge of interest among African Americans in exploring their roots following the 1976 publication of Alex Haley's Roots: The Saga of an American Family, in which an African American is able to follow his roots all the way back to Africa. Symbolically, this genealogy, although its accuracy was later called into question, stood as representative of the experience of all African Americans.
It encouraged other descendants of slaves to investigate their origins. Because of the scarcity of documents from the time before Emancipation, finding information from before the 1870 census forces African

Americans to study the family histories of their ancestors' former owners. The 1870 census is the first to list African Americans by name, age, birthplace, and occupation. The 1880 census provides even more detailed information with regard to the social conditions of African American life at that time. Twentieth-century genealogy is significantly easier for African Americans because increasingly comprehensive data are available. Exploring African American genealogies often means exploring mixedness. While interracial unions have always been common in the United States, this knowledge and dealing with its consequences have been suppressed within the dominant culture. People of mixed heritage, following the rule of hypodescent, have usually been perceived and constructed as black by society. Genealogy can help reestablish suppressed connections and can potentially initiate new dialogue between the descendants of slaves and slaveholders. Undertaking genealogical research can also lead to heretofore unknown results: if a family member of mixed heritage "passed" into white society long ago, a genealogist may be surprised to find out from birth certificates, for example, that she is also part African American.

Genealogy and Popular Culture
In recent years, genealogy has received much media attention. The Jefferson genealogy was significant to the public, as the claim that the president had fathered at least one child with Sally Hemings, a slave at Monticello, could finally be verified with the help of DNA testing, extending the Jefferson family tree by several more members identifying as African American. Genealogy is also present in U.S. television culture. One prominent show from the 2000s that made genealogy its subject was African American Lives on PBS, hosted by Henry Louis Gates Jr., a literary critic, scholar, and educator affiliated with Harvard University.
This miniseries introduced the audience to the family histories of well-known African Americans. It followed ancestral lines, visiting sites, researching in archives, conducting DNA searches, and confronting the celebrity with the findings. The miniseries was continued as African American Lives 2 in 2008. Similar series contributing to the popularity of the format of the quest-asseries were Faces of America on PBS in 2010, and Finding Your Roots on PBS in 2012.




Genealogical documents and family trees have also become a conventional part of mixed-race autobiographies. In this literary genre, the narrators establish their connection to America by intertwining their family history and American national history. They use genealogical tools to point to the American-ness of their heritage and their connectedness to American history. Genealogy and family trees have become part of middle-class American life: to be American is to have a certain kind of (immigrant) genealogy and a family tree that fits the established graphic schemes. Genealogical practices and family trees contribute to the "normalization" of some family stories and not others. They can also serve as tools of liberation for those whose family history has so far been unclear.

Julia Sattler
Technical University of Dortmund

See Also: Genetics and Heredity; German Immigrant Families; Human Genome Project; Immigrant Families; Interracial Marriage; Irish Immigrant Families; Italian Immigrant Families; Melting Pot Metaphor; Miscegenation; Multiracial Families; Slave Families.

Further Readings
Gates, Henry Louis, Jr. Faces of America: How 12 Extraordinary People Discovered Their Pasts. New York: New York University Press, 2010.
Kennett, Debbie. DNA and Social Networking: A Guide to Genealogy in the Twenty-First Century. Stroud, UK: History Press, 2011.
Osborn, Helen. Genealogy: Essential Research Methods. London: Robert Hale, 2012.
Weil, François. Family Trees: A History of Genealogy in America. Cambridge, MA: Harvard University Press, 2013.

Generation Gap

During the 1960s, much was written about a purported divide between the parental generation and their children, the products of the "baby boom," those born in the years immediately following World War II. The popular press and news media were replete with articles highlighting the ideological, political, and lifestyle dichotomization between generations, and television shows such as All in the Family were predicated on that purported generational gulf. A high birthrate and the events of the 1960s were transformational; "something was in the air" in the late 1950s.

At that time, media attention to the colorful "beats," such as Jack Kerouac, Richard Fariña, and Allen Ginsberg, paved the way for the hippies. Kerouac's On the Road (1957) was deeply influential in illustrating a generation's search for meaning in postwar America. The book had examples of petty crime, promiscuity, and recreational drug use, and it seemed to endorse a lifestyle that explicitly rejected that of middle-class, "white shirt and tie," "straight" America. Many older, traditional critics dismissed it as scribbling, or mere "typing," but it became canonical to many intellectually inclined younger people. It gave rise to the "beat phenomenon," in which beat locales such as coffee shops and bearded, philosophical young characters were featured in popular film and television. Shows such as Route 66 featured Kerouac-type young men who traveled from town to town, working odd jobs, searching for meaning in Eisenhower-era America. It was a real harbinger of change.

This search for something meaningful in a bland, conformist, consumer-oriented society, together with contemporary events, led to a serious crisis of legitimacy as far as young people were concerned. The receptivity of the young to the message of the beat generation suggested that American society was sadly lacking and that alternative lifestyles were worth pursuing. Older people, the so-called greatest generation, were befuddled and at times angered by the seeming ingratitude of young people living in a time of economic prosperity and in a land of plenty.
Civil rights, the Vietnam War, the psychedelic revolution and recreational drug use, feminism, environmentalism, and campus unrest were divisive issues, and the dichotomy was felt particularly strongly between generations.

One of the primary modes of expression for the young was popular music. Although the 1960s began with bland and inoffensive pop music, a shift in the medium early in the decade briefly featured folk music, which had many topical and political manifestations. Bob Dylan's early oeuvre was in this genre, and many of his works from that period have explicitly liberal-radical themes that were previously unexplored in pop music. Dylan was touted as the "voice of his generation," and other "folkies," such as Peter, Paul and Mary and Joan Baez, earnestly echoed themes advocating fundamental and radical political change.

The folk movement quickly merged into folk rock, which moved into full-scale psychedelic rock. Pop music went from songs about class clowns to an obsessive interest in interior and solipsistic themes: words like "in my mind," referencing drug use, became featured more than syrupy romantic lyrics. A group that epitomizes this metamorphosis is the Beatles, whose early inoffensive and catchy Liverpool sound began to incorporate sitars and explicit drug references. When confronted with the idea that "Lucy in the Sky With Diamonds" was about LSD, a powerful psychedelic drug, the group denied it. The later classic concept album, Sgt. Pepper's Lonely Hearts Club Band (1967), though heavily influenced by the British music hall tradition, was a psychedelic tour de force.

Other bands showed parallel evolution through the decade. The Rolling Stones moved from covers of American black blues into psychedelia, albeit briefly, and later evolved a hard-rocking, unique, and long-lasting sound. Other groups were explicitly drug oriented. Jefferson Airplane started with a folky style, but their classic album Surrealistic Pillow, released in early 1967, defined the drug-infused San Francisco sound. The changes in pop music were deeply disturbing to the older generation, and appearances by these groups on national television were often censored. Adults found images of Mick Jagger gyrating on The Ed Sullivan Show wearing a Wehrmacht uniform particularly upsetting.
Later in the decade, the "moptop" hair featured by the groups and the young people who copied them evolved into shoulder-length locks and beards. This led to schools imposing grooming rules and suspending male students for distracting others in the learning environment. Girls were suspended for wearing their miniskirts too short. The hair, miniskirts, and eccentric outfits were all manifestly visible markers that served to identify hip young people to each other. As such, they were initially deeply disturbing to the parental generation.

[Photo caption: Freshman girls wear miniskirts on a college campus in Memphis, Tennessee, in 1973. Long hair on men ("moptops"), miniskirts, and eccentric outfits were visible markers that identified hip young people to each other. These fashion statements were deeply disturbing to many members of their parents' generation.]

The Woodstock festival in 1969 seemed to define a situation where, in the words of a notably solipsistic and self-important Jefferson Airplane song, "one generation got old, one generation got soul," as it was widely touted as a time of "love and music," seemingly unmarred by violence and tragedy. The awful events of the Altamont concert later in the same year brought the optimism of the decade to a sad climax. The popular perception is that this so demoralized the Woodstock generation that it gradually ceased to exist. Reality itself intruded, and young people, like previous generations, had to finish educations, get jobs, and become, like their predecessors, parents, fated to be befuddled, disturbed, and perhaps angered by the antics of Madonna, Michael Jackson, Lady Gaga, and Miley Cyrus.

Fathers and Sons
The purported singularity of the 1960s generation gap is belied by numerous historical literary examples. Perhaps the most apposite is Turgenev's



(1818–83) Fathers and Sons (alternately, Fathers and Children). Turgenev was a writer interested in bringing European-style reforms to Russia who was vilified by more traditional and religious writers as a Westernizer. Attacking the cruelties of serfdom in particular, he spent most of his later life in Europe. In Fathers and Sons (1862), he presents Bazarov, the first explicitly radical figure in Russian literature. Bazarov, a nihilist who believes that the collapse of the current social order, characterized in the book by a sterile, anachronistic, and oppressive nobility, could only bring improvement, represents an entire generation of young Russian intelligentsia. These "narodniks" took inspiration from European and homegrown radicals and took their idealistic views to "the people," a move that was not well received by the tsarist regime, the nobility, and, perhaps most surprisingly, the peasantry. They were trying to educate, sensitize, and radicalize the serfs by working with them and spreading their ideas by example. The peasantry, however, saw them, correctly, as irreligious, impious, and threatening to their way of life, notwithstanding the daily oppression they endured. The stolidity and essential conservatism of the peasantry was portrayed by Turgenev, but even more insightfully by his contemporary, Dostoevsky.

Bazarov is in continual conflict between his emotions and his political radicalism. Moreover, his radical nihilism, which is misunderstood as "a belief in nothing," upsets the older generation, and one older noble in particular. One thing leads to another, and a duel ensues. This violent incident ends in the old man's wounding and Bazarov's disgrace. Bazarov later dies as a result of an unrelated laboratory accident. The entire work concerns generational conflict that flows out of the changes occurring in Russian society in the era. It could be applied to almost any society undergoing fundamental changes.
Although the book was ill received in Russia when published, it was later recognized as a classic and enjoys a place in the canon of world literature. It is generally used in classrooms to illustrate the universality of generational conflict. The alienated idealism of the younger generation wounded the elder and ended tragically for the symbol of the younger cohort.

A "Lost Generation"?
In the 1920s, Gertrude Stein told Ernest Hemingway that he and other young intellectuals who flocked to Paris were part of a uniquely "lost generation." Seemingly purposelessly hanging around bars and cafes, making smart conversation, and drinking heavily, Ernest Hemingway, F. Scott and Zelda Fitzgerald, and John Dos Passos were seen as a generation, scarred by war, that was somehow different. This, like any characterization of a generation as unique, ignored the fact of the succession of generations. As Hemingway himself says in the epigraph to The Sun Also Rises, quoting the Bible, "one generation passes away, and another generation comes: but the Earth abides for ever." The only thing that never changes is change itself. Though the events of the 1960s were dramatic and seemed unique, generational differences and conflict in modern societies are not.

Francis Frederick Hawley
Western Carolina University

See Also: Baby Boom Generation; Generation X; Generation Y.

Further Readings
Hemingway, E. The Sun Also Rises. New York: Scribner's, 1927.
Kerouac, J. On the Road. New York: Penguin, 1957.
Leland, J. Hip: The History. New York: Harper, 2004.
Turgenev, I. Fathers and Sons. New York: Signet, 2005.

Generation X

Born between 1965 and 1980, Generation Xers came of age in an era containing a host of problems—economic, political, and social—for which they held baby boomers accountable. Sometimes described as the "postponed generation," they married later than earlier generations, delayed political engagement in large numbers, left the nest more slowly, and returned to it more frequently after such crises as unemployment and divorce. Their unemployment rates were high, and they were twice as likely as their parents to be the children of divorce. Survivors of three recessions, they hit midlife as the least financially secure generation and the most likely to face downward mobility as they retire. Time magazine described them in the early 1990s as a generation with a short attention span, lacking ambition, heroes, and style; but less than a decade later, Time decided they were independent, pragmatic go-getters. As they enter their middle years, Generation X (82.1 million strong, according to the 2010 census) still defies the simplistic tags that would define them as a generation sociologically, politically, and culturally.

Although the term Generation X (or Gen X) was first used by photojournalist Robert Capa in the 1950s to refer to a group of post–World War II 20-year-olds, it was not until Douglas Coupland chose it as the title of his 1991 novel, Generation X: Tales for an Accelerated Culture, and tagged his main characters and the generation to which they belonged with the term, that it entered popular culture as the most common designation for the generation that followed the baby boomers. If the characterization of an entire generation as overeducated, underemployed, disaffected whiners with "small lives" marked by irony and ennui was simplistic and unfair, it nevertheless gained credence as a representative portrait of the cohort. Coupland's reminder that his characters were not real, and that he intended the tag to describe a point of view rather than a chronological age, had little effect. Gradually, Generation X won out over other tags such as "Baby Busters," "Thirteeners," "Boomerangers," and "the Peter Pan Generation."

Blame the Economy
Many Gen Xers entered the job market in the aftermath of extensive corporate downsizing. Automation, foreign competition, and the shifting of jobs to countries with lower labor costs meant that approximately 43 million jobs were terminated between 1979 and 1995. The McJobs of Coupland's fictional characters were the reality faced by a significant number of new college graduates who found that their degrees were worth little when they were forced to take jobs as pizza deliverers, shelf-stockers, health care trainees, and other, similar work in a low-wage, low-benefit service economy.
Those who chose to continue their schooling faced rising tuition costs and cutbacks in federal grants. The future did not look any brighter. In the early 1990s, figures released by the U.S. Labor Department indicated that nearly a third of new college graduates over the next dozen years could expect to be unemployed or underemployed. Adding to the growing resentment among Generation X was the fact that they were paying Social Security taxes at a rate far exceeding that paid by earlier generations, as much as 20 times more when adjusted for inflation.

As Gen Xers began to acquire families and mortgages, they added to a debt load from student loans and credit cards that was already higher than that of previous generations. In the two decades between 1977 and 1997, the median student-loan debt grew from $2,000 to $15,000. During the first five years of the 21st century, housing prices were increasing, and many Gen X families, feeling secure with their two incomes, were trading up to more expensive houses. The median value of homes owned by those ages 35 to 44 during this period rose by 20 percent; the generation's home-secured debt rose by 30 percent. During the global recession of the late 2000s, Gen Xers lost almost half of their wealth (about $33,000 on average) and confronted a debt total that averaged more than $46,000.

With significant numbers of Gen Xers delaying marriage and children, many of them found themselves squeezed between the economic and emotional stresses of dependent children and aging parents just at the time when their own financial picture seemed bleakest. With boomers delaying retirement because of their losses, career advancement for Gen Xers slowed. This generation had ample reason to complain.

Political Dropouts Turned New Progressives
One of the complaints most often aimed at Gen Xers is their lack of political engagement. As members of this generation turned 18 and became eligible to vote, they seemed less inclined than earlier generations to become involved in the political process, absenting themselves from the polls in unprecedented numbers. Cynical about government, with an identity more global and less national than earlier cohorts, they showed scant political allegiance to any party.
They tended to view Democrats and Republicans as remarkably similar, both mired in corruption and more interested in partisan battles than in good government. In 1999, 44 percent of those ages 18 to 29 identified as independents, more than twice the number who voted a straight party ticket. The generation's harshest critics lambasted Gen Xers, claiming that they could not even legitimately be called dropouts because that word connoted involvement at some point. Those more sympathetic to the crises of their particular historical era argued that cynicism toward society's institutions was hardly surprising from a generation reared in an age of splintering families, failing schools, and scandal-ridden government and coming of age at a time of economic insecurity, environmental disasters, and failed leadership.

Bill Clinton earned 52 percent of the under-25 vote in 1992, the highest election turnout among the young in 20 years, but many of the Gen Xers who supported Clinton in 1992 expressed their disillusionment and distrust four years later, when the number of voters under 30 dropped. Researchers generally have found Gen Xers to be more politically conservative than their parents' generation, but they are a generation that combines fiscal conservatism with social liberalism. More than a decade ago, Ted Halstead, writing for the Atlantic Monthly, called this blend "balanced-budget populism." Gen Xers are likely to support economic caution and balanced budgets, but at the same time, a majority of them favor gay marriage, legal abortions, and gun control laws and question the value of U.S. involvement in Iraq and Afghanistan. As a group, Generation X has voted Democratic in every presidential election, with the exception of 2004. Gen Xers leaned Democratic in the 2008 elections, voting for Obama over McCain by 52 to 46 percent, but they leaned Republican in the midterm elections of 2010. They helped give Obama his second win in 2012, although polls just a few days before the election showed Obama and Romney splitting the Gen X vote evenly.
The Daily Show and its spinoff The Colbert Report with their mix of entertainment and genuine critiques of the lack of substance in political discourse have been called the voice of Generation X by angry critics of the irreverent tone of the shows and by admirers who suggest such humor may be among Generation X’s finest achievements. Other Gen Xers such as Jerome Armstrong, founder of MyDD. com and formerly an independent wary of political parties and now a self-proclaimed progressive Democrat, have helped to create netroots, political activism channeled through blogs, wikis, and other online media. Other Gen Xers among the “new progressives” include Andrea Batista Schlesinger,

Generation X

621

the executive director of the Drum Major Institute, a progressive public policy organization with roots in the civil rights movement, and David Callahan, cofounder of think tank Demos. A New Portrait of the Slackers The Longitudinal Study of American Youth (LSAY) at the University of Michigan issued a report in 2011 based on a national sample of almost 6,000 Gen Xers, whom researchers tracked from 1987– 1993 and again from 2007–2011. The study found a Generation X substantially different from media profiles. It found Gen Xers engaged in their communities, happy with their work, and doing a better job than their parents did at balancing the responsibilities of work and family life. Members of Generation X may not be joining organizations such as Elks, Moose, and Knights of Columbus, but they are active in parent-teacher organizations, local youth sports clubs, book clubs, and other community organizations. Thirty percent are active in professional, business, or union organizations, and one-third are active members of a church or religious organization. They participate in recreational activities at an even higher rate, with almost 90 percent hiking, swimming, boating, or fishing at least once a month. Contrary to the stereotype that presents Gen Xers as uninformed, 72 percent read a newspaper, in print or online, at least once a week, and nearly half read six or more books in the last year. Compared to a national sample of all adults, Gen Xers are more likely to be employed, with 86 percent of Generation X working part time or full time. Most of these workers are happy with their jobs. Two-thirds indicated that they were satisfied with their current job, and almost a fourth rated their job satisfaction at 9 or 10 on a 10-point scale. Fewer than 8 percent ranked their job satisfaction as 3 or lower. A highly educated group, half of Gen Xers have postsecondary degrees; 43 percent have a baccalaureate degree, with women outnumbering men 46 to 40 percent. 
Nine percent are currently enrolled in a program leading to a degree, ranging from associate degree programs to graduate and professional degree programs. Two-thirds of Gen Xers are married, and 71 percent have minor children at home. As children, many of them were latchkey kids, and they are committed to creating the family life that they missed.


While their work record affirms the importance that they attach to career goals, they give higher priority to balancing work and home. They are more likely than earlier generations to leave work that they find uncongenial, and career movement is often lateral. Flexible work schedules are important for this group, second only to salaries. More than a third of Gen Xers indicated that they would leave a job that failed to offer day-to-day flexibility. Men ranked flexibility higher than women, with 40 percent saying that its lack would be a sufficient reason to leave, compared to 37 percent of women. Gen Xers are adaptable, and a strong entrepreneurial bent, most famously seen in individuals such as Google founders Larry Page and Sergey Brin and Facebook Chief Operating Officer Sheryl Sandberg, is also a prime characteristic.

Eighty-three percent of those in the LSAY study said that marriage to the right person and a happy family life were very important to them. Generation X parents also spend a good deal of time with their children. Percentages of parents of preschool and elementary-school children who regularly help their children with homework, read to them, accompany them to zoos, museums, and public libraries, and attend school events range from 72 to 91 percent. Numbers are comparable for parents of secondary school students in all areas except visits to zoos and museums. Gen Xers also highly value contact with extended family and friends; 95 percent of Gen Xers talk on the phone at least once a week to friends or family, and 29 percent say they do so at least once a day. Slightly more than 80 percent report visiting a friend or relative once a week, and 29 percent visit friends or family three times that often.

Perhaps most significant is that Gen Xers, despite their reputation as whiners and the real economic challenges they face, report that they are happy with their lives. On a 10-point scale, with 10 meaning very happy, the average score was 7.5.

Wylene Rholetter
Auburn University

See Also: Baby Boom Generation; Boomerang Generation; Parenting.

Further Readings
Craig, Stephen C. and Stephen Earl Bennett. After the Boom: The Politics of Generation X. Lanham, MD: Rowman & Littlefield, 1997.

Erickson, Tammy. "Gen X Hits Another Bump in the Road." Harvard Business Review Blog Network. http://blogs.hbr.org/2012/04/gen-x-hits-another-bump-in-the-1 (Accessed September 2013).
Halstead, Ted. "A Politics for Generation X." Atlantic Monthly, v.284/2 (1999).
Miller, Jon D. "Active, Balanced, and Happy: These Young Americans Are Not Bowling Alone." The Generation X Report, v.1/1 (2011). http://lsay.org/GenX_Rept_Iss1.pdf (Accessed September 2013).
Rapoza, Kenneth. "Approaching Mid-Life: Are Gen-Xers Doomed?" Forbes (February 28, 2012). http://www.forbes.com/sites/kenrapoza/2012/02/28/approaching-mid-life-are-gen-xers-doomed (Accessed September 2013).
Rosen, Bernard Carl. Masks and Mirrors: Generation X and the Chameleon Personality. Westport, CT: Praeger, 2001.
Strauss, William and Neil Howe. "The Millennial Cycle." Generations: The History of America's Future, 1584–2069. New York: Quill, 1991.

Generation Y

Generation Y (or Gen Y) is one name used for the people born during the mid-1980s and early 1990s. The exact beginning and ending dates of this generation are, however, much debated (and defining people by their birth years may seem like an arbitrary oversimplification). The name is based on Generation X, the generation that preceded them (most demographers agree that Generation Xers were born between 1964 and 1984). Members of Generation Y are often referred to by several other names: "Echo Boomers," because they are the children of parents born during the baby boom (the "baby boomers"); "iGen" or the "Net Generation," because children born during this period have had constant access to technology (computers and cell phones) in their youth; and the "Millennial Generation," a term popularized by Neil Howe and William Strauss, who, in Millennials Rising, settled on "Millennials" rather than "Generation Y" or "Echo Boomers" because they found that the youths preferred it. Compared to those other names, the term millennial did not put them in the shadow of a previous generation.



In recent years, the Generation Y–Millennial generation has emerged as a powerful political and social force. Characteristics that researchers deem typical of this generation include optimism, tech savviness, and a solid educational background. The Millennial generation has been defined as one that is competent, qualified, technological, and in search of a new form of citizenship. They also exhibit a genuine concern for people and the environment. According to Eric Greenberg and Karl Weber, they are politically and socially independent, and they are thus spearheading a period of sweeping change around the world.

Characteristics
Generation Y is the most ethnically and racially diverse generation in U.S. history. A 2010 study edited by Paul Taylor and Scott Keeter of the Pew Research Center was based on the results of a telephone survey conducted January 14 to 27 on landlines and cell phones with a nationally representative sample of 2,020 adults, ages 18 and older, living in the continental United States. The study shows that among those ages 13 to 29, 18.5 percent are Hispanic; 14.2 percent are black; 4.3 percent are Asian; 3.2 percent are mixed race or other; and 59.8 percent, a record low, are white.

Generation Y is thus starting out as the most politically progressive age group in modern history. In the 2008 U.S. election, Generation Y voted for Barack Obama over John McCain by 66 to 32 percent, while adults ages 30 and over split their votes 50 to 49 percent. In the four decades since the development of Election Day exit polling, this is the largest gap ever seen in a presidential election between the votes of those under and over age 30. In 2012, President Obama won reelection primarily because of the support coming from two key and expanding constituencies: Hispanics and members of Generation Y.

Generation Y feels empowered, has a sense of security, and is optimistic about the future.
Unlike generations that came before them (baby boomers and Gen Xers), these children were not left to make key decisions alone; their parents were involved in their daily lives. Their parents helped them plan their achievements, took part in their activities, and showed strong beliefs about their children's abilities.

Generation Y members are on course to become the most educated generation in American history, a trend largely driven by the demands of a modern knowledge-based economy but most likely accelerated in recent years by the millions of young people enrolling in graduate schools, colleges, or community colleges (in part because they cannot find a job). According to the study edited by Paul Taylor and Scott Keeter, 40 percent of Generation Y are still in school, and of those who are of college age but not in school, 30 percent say they plan to go back at some point to get their degree. Furthermore, 90 percent of today's high school students say they plan to pursue some sort of education after high school.

Generation Y has a higher propensity to trust others, and they value authentic relationships. A book by Eric Greenberg and Karl Weber that presents the results of a major research study into the values, dreams, and potential of Generation Y, including an in-depth survey of 2,000 individuals and a series of focus groups, shows that they (and their supporters from other generations) are poised to change the world for the better, and it lays out a powerful plan for progressive change that today's youth are ready to implement.

Members of Generation Y are extremely independent because of individual and family change (e.g., divorced families, lone parenting, and living-apart-together [LAT] couples) and the revolution in advanced Internet technologies. They grew up with Web 2.0 technologies: Web 2.0 is the move toward a more social, collaborative, interactive, and responsive Web. Learning online is "natural" to them, as is retrieving and creating information on the Internet, blogging, communicating on smartphones, downloading files to iPods, and instant messaging. Smartphones and tablets are the communication devices of choice for Generation Y; therefore, they can be a challenging group to communicate with.
They are the first generation in human history who regard behaviors such as tweeting and texting, along with Web sites such as Facebook, YouTube, Google, and Wikipedia, not as astonishing innovations of the digital era but as everyday parts of their social lives and their search for understanding. This has a considerable impact on their daily lives.

For example, the results of a national survey released by PGAV Destinations—based on a nationwide online survey launched to compare a Generation Y sample group with visitors ages 30 and over—reveal Generation Y motivations and behaviors that have significant implications for travel and tourism. According to the study, nearly 6 in 10 Gen Ys (58 percent) say that they travel for leisure with friends, nearly 20 points higher than older generations. Relationships are vital to Generation Y members, and they are highly influenced by others who help to select places to visit and things to do. Through social media, they tell stories to one another and make recommendations and assessments, often in the form of real-time descriptions of their experiences. Millennials also use technology to make quick decisions. They plan trips in far less time (75 days) than older generations (93 days). This highly educated and diverse generation has a real appetite for learning: 78 percent say that they prefer to learn something new when they travel. They look for places that are fun and entertaining (78 percent) and interactive and hands-on (68 percent). Millennials are a powerful segment of today's travelers, and their preferences and habits will help shape the future of travel.

Reshaping Family and Work
This generation has grown up with the Internet, smartphones, and social media. It is easier than ever to call on a smartphone or send a text to members of one's extended family. Posting pictures on Facebook allows family members to immediately see what is happening to their children and grandchildren. Thus, Gen Ys are introducing their families to a variety of ways to stay connected.

A book by Thom Rainer and Jess Rainer—based on 1,200 interviews that aim to better understand Gen Ys and their attitudes toward family life, work and career, money, the media, technology, the environment, and religion—sheds light on the relationship between Generation Y, intimacy, marriage, and family formation. When it comes to marriage, Gen Ys are optimistic about it even though they grew up in a world where divorce was common.
It is also worth noting that members of Generation Y are marrying much later than any generation that preceded them. They also view marriage differently from their parents, in part because of the political battles concerning same-sex marriage and the definition of marriage.

In the survey, respondents were asked to react to the statement, “I see nothing wrong with two people of the same gender getting married.” A total of 6 in 10 agreed (40 percent strongly, 21 percent somewhat). It is also likely that the pluralization of family forms and arrangements will further increase: Gen Ys are continuing the prior generational trend of being increasingly in favor of new family forms, but by a higher margin. Members of Generation Y are more tolerant than adults in other generations of a wide range of “nontraditional” behaviors related to marriage and parenting. Millennials are more accepting than older generations of nontraditional family arrangements, from mothers of young children working outside the home to adults living together without being married. Gen Ys are also distinctive in their social values; they stand out in their acceptance of homosexuality, interracial dating, and expanded roles for women and immigrants.

Generation Y members may also reshape work. They desire meaningful, stimulating work and show little interest in traditional career paths that promote slowly. Millennials are fast movers: they will change jobs, and perhaps even entire careers, many times in their working lives. They tend to be uncomfortable with rigid corporate structures and turned off by information silos; they also expect rapid progression, a varied and interesting career, and constant feedback. In other words, Gen Ys want a management style and corporate culture that is markedly different from anything that has gone before—one that meets their needs. This is a stark contrast with the “job-for-life” career pattern of their baby boomer parents.

The PwC report “Millennials at Work: Reshaping the Workplace”—based on an online survey of 4,364 graduates across 75 countries between August 31 and October 7, 2011, all of whom were ages 31 or under and had graduated between 2008 and 2011—shows that Gen Ys say they are comfortable working with older generations and particularly value mentors. There are signs of tension, however, with 38 percent saying that older senior management does not relate to younger workers and 34 percent saying that their personal drive is intimidating to other generations. Almost half felt that their managers did not always understand the way they use technology at work.




This tension has been a subject of intense discussion because it may bring intergenerational conflict, as well as uncertainty about the future of the workforce. The PwC report shows that Gen Ys matter because they are not only different from those who have gone before but also more numerous than any generation since the soon-to-retire baby boomers: Millennials already form 25 percent of the workforce in the United States and account for over half of the population in India. By 2020, Gen Ys will form 50 percent of the global workforce. Managing the often conflicting views and needs of a workforce that may span a wide range of generations—from the baby boomers to Generation X and Generation Y—is a challenge for many organizations, and the particular characteristics of Gen Ys require a focused response from employers.

Elisabetta Ruspini
University of Milano–Bicocca

See Also: Baby Boom Generation; First Generation; Generation Gap; Generation X; Me Generation.

Further Readings
Anderson, Kerby. “The Millennial Generation.” Probe Ministries (2011). http://www.probe.org/site/c.fdKEIMNsEoG/b.6601055/k.7A91/The_Millennial_Generation.htm (Accessed July 2013).
Benckendorff, Pierre J., Gianna Moscardo, and Donna Pendergast. Tourism and Generation Y. CABI Publishing (2010). http://bookshop.cabi.org/Uploads/Books/PDF/9781845936013/9781845936013.pdf (Accessed July 2013).
Greenberg, Eric H. and Karl Weber. Generation We: How Millennial Youth Are Taking Over America and Changing Our World Forever. Emeryville, CA: Pachatusan, 2008.
Howe, Neil and William Strauss. Millennials Rising: The Next Great Generation. New York: Vintage, 2000.
PGAV Destinations. “Meet the Millennials: Insights for Destinations.” Destinology, v.1/1 (2011). http://www.pgavdestinations.com/images/insights/eDestinology_-_Millenials.pdf (Accessed July 2013).
PwC. Millennials at Work: Reshaping the Workplace (2011). http://www.pwc.com.tr/tr_TR/tr/publications/hrs-publications/pdf/mtp.pdf (Accessed July 2013).
Rainer, Thom S., and Jess W. Rainer. The Millennials: Connecting to America’s Largest Generation. Nashville, TN: B&H, 2011.


Stanton, Glenn T. and Andrew Hess. “Generational Values and Desires.” Focus on the Family Findings, June 2012. http://www.focusonthefamily.com/about_us/focus-findings/family-formation-trends/generational-values-desires.aspx (Accessed July 2013).
Taylor, Paul and Scott Keeter, eds. Millennials: A Portrait of Generation Next. Confident, Connected, Open to Change. Pew Research Center (February 2010). http://www.pewsocialtrends.org/files/2010/10/millennials-confident-connected-open-to-change.pdf (Accessed July 2013).

Genetics and Heredity

Genes are biological structures that contain instructions for creating specific traits in organisms. The building blocks of life, genes function by providing the instructions for cells to create specific proteins in specific amounts, which in turn perform actions such as altering other molecules. Every time a cell divides, it creates a copy of its genes through DNA replication. The possibility of a replication error, or of damage to genes that results in instructions being “garbled” or “misread,” is one of the driving factors behind mutation. Because genes are heritable, certain genetic traits, ranging from vulnerability to certain diseases to musical ability and blue eyes, become associated with certain families.

Genes are inherited from both parents. Each parent carries two copies of every gene and contributes one copy to the sperm or the egg, which between them then contain a complete set of genes. The interaction between these genes determines which traits are inherited from each parent. For instance, depending on whether a gene is dominant or recessive, conflicting genes for a given trait such as eye color or skin pigment are resolved in different ways. Because each person has two copies of each gene, people possess genes for traits they do not evince and are capable of passing these genes on to their children. A child may have a trait in common with a grandparent for this reason—the gene was passed down to the middle generation but overridden by some other gene not found in either the grandparent or the child. This mixing process of genes accounts for significant variation within


a species, which is precisely why sexual reproduction—which mixes genes from two parents instead of simply making a clone of the organism, as with asexual reproduction—confers such a strong evolutionary advantage.

Though genes contain the blueprint for an organism, not every aspect of that organism is encoded in genes. Its thoughts, experiences, and memories reside in the brain, and physically acquired characteristics such as injuries or learned physical skills do not change the genetic code. Even in the womb, the organism is subject to environmental factors that may result in traits with no genetic component. Traits coded by genes include not just obvious external features such as hair and eye color but also sensitivity to pain, the ability to detect the distinct smell of asparagus urine, and whether cilantro tastes herbaceous or soapy.

Many traits are the result not of a single gene but of the interaction of many genes, or the accumulation of many genes that encourage a tendency toward certain traits. While there is a specific gene for eye color, for instance, there is no such gene for height or weight. Weight is influenced by the environment (i.e., the person’s eating and exercise behaviors), but numerous genetic factors affect it, from musculature to metabolism to sensitivity to certain foods. Height is less affected by the environment—diet may affect growth during adolescence, and malnutrition retards growth—and it is not subject to the fluctuations of weight, but on the genetic level, height still results from many different genes; while tall parents tend to have tall children and short parents short ones, the reverse is not unusual. It is not possible to predict all of the traits of a grown organism by studying its genes, nor will it become possible when genetics is better understood; there are simply too many traits without a genetic basis, or that result from both genetic and environmental factors.
Not all traits are heritable, and the heritability of traits is an ongoing area of study. The model of inheritance used by the synthetic theory of evolution, hard inheritance, rejects Lamarckian notions of acquired traits being hereditary because acquisition does not affect DNA. In other words, a population that moves to a mountainous location does not evolve stronger legs and lungs in response to the population frequently working its legs and lungs, because those

workouts do not affect genes; but over time, natural selection may favor those with stronger legs and lungs. Hard inheritance stands in contrast to soft inheritance, a discredited school of thought that proposed various mechanisms by which acquired traits were inherited.

The human genome is organized into 23 pairs of chromosomes—pieces of coiled-up DNA and DNA-bound proteins. One pair of chromosomes is the sex chromosomes, or allosomes, which define the person’s sex and contain hereditary information that is sex-linked. Males have one X chromosome and one Y chromosome, whereas females have two X chromosomes; there are also intersex people who do not fit this generalization. The X chromosome carries about 1,500 genes, once thought to be the most of any human chromosome; the Y contains about 450. These genes include not just genes related to sex-specific traits such as sperm production (which is governed by nine separate genes) but also mutations that are carried by one of the sex chromosomes. The most common kind of color blindness, for instance, is caused by damage to genes on the X chromosome, making it more common among men because women have two copies of those genes.

Like other genes, the genes behind genetic disorders may be dominant or recessive. Dominant disorders affect a person when only one copy of the gene is present, and each child has a 50 percent chance of inheriting the gene. (In some diseases, not everyone with the gene is symptomatic.) In recessive disorders, the affected person has inherited two mutated genes, one from each parent (the parents are usually unaffected and are therefore known as carriers). Sickle-cell anemia is a recessive genetic disorder. However, there are many areas where it is not yet clear whether a trait is acquired or not.
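The dominant and recessive odds described above follow directly from each parent passing on one of its two gene copies at random. As an illustrative sketch only (the simulation, the allele labels, and the function name are our own, not part of the encyclopedia entry), a few lines of Python reproduce the classic ratios: two unaffected carriers of a recessive disorder produce an affected child about a quarter of the time, while a parent carrying one dominant disease allele passes it to about half of his or her children.

```python
import random

def fraction(parent1, parent2, genotype, trials=100_000):
    """Estimate the share of offspring with a given genotype.

    Each parent passes one of its two alleles at random (Mendel's
    law of segregation). Alleles are sorted so that 'aA' and 'Aa'
    count as the same genotype.
    """
    hits = 0
    for _ in range(trials):
        child = "".join(sorted(random.choice(parent1) + random.choice(parent2)))
        if child == genotype:
            hits += 1
    return hits / trials

# Recessive disorder: two carriers ('Aa' x 'Aa') -> about 25% of
# children inherit both recessive alleles ('aa') and are affected.
print(round(fraction("Aa", "Aa", "aa"), 2))

# Dominant disorder: affected parent 'Dd' x unaffected 'dd' ->
# each child has about a 50% chance of inheriting the disease allele.
print(round(fraction("Dd", "dd", "Dd"), 2))
```

The simulated frequencies match the figures in the entry: a 50 percent chance per child for a dominant gene, and affected children only when both carrier parents happen to contribute the recessive allele.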
Genetic predispositions for various diseases have been found, for instance, and caffeine sensitivity is genetically linked, but work continues on the genetic factors that may lie behind various traits, mental and physical illnesses, and competencies.

Modern Developments
Modern understandings of heredity stem from the modern evolutionary synthesis, developed in the 1940s when a consensus formed that unified the work of various subfields within biology. The synthesis combined work in genetics and



heritability, for instance, with understandings of natural selection and the development of species, as well as marrying ideas about the macroevolution observed by paleontologists with the microevolution observed on smaller scales. The synthesis has since incorporated a better understanding of geological history and continental drift, atmospheric sciences, and new fossil discoveries.

Evolution is a gradual process caused by small genetic changes. The four main evolutionary processes (which are not equal in their influence) are genetic drift, gene flow, mutation, and natural selection. Genetic drift is a change in allele frequency as a result of random sampling. Allele frequency refers to how common a given allele—a variant of a gene—is in the population. The allele for naturally red hair has a low frequency, for instance, whereas the allele for wet ear wax has a high frequency. Allele frequency changes as population members are born and die off. The less frequent an allele is, the greater the effects of genetic drift on it. The extent of the role of genetic drift in evolution is a matter of debate. Gene flow is the transfer of alleles from one population group to another, which affects allele frequency. Gene flow is usually caused by migration.

A child with blonde hair and blue eyes, two traits that are genetically inherited. Hair and eye color are two of the more obvious traits coded by genes; other traits include sensitivity to pain or certain odors.


Genetic studies of the United States have found evidence of gene flow resulting from the admixture of the white European population and the black West African population, the latter of which carries the Duffy-null allele at high frequency (conferring malaria resistance). Gene flow is also possible between separate species, especially when viruses transfer genes across species boundaries, but this does not happen on a regular basis. Mutation is an alteration to DNA due to a replication error, the insertion or deletion of DNA segments, or damage to DNA or RNA. Mutation is an important force not only in evolution but also in the appearance of cancer. In conjunction with natural selection—the process by which alleles become more or less common in accordance with their selective utility with respect to the environment, because individuals with less helpful alleles are more likely to die young or be rejected by mates, so that individuals with more helpful alleles become more likely to pass those alleles on to the next generation—mutation is one of the key driving forces of evolution.

Though evolutionary psychology is a valid and vibrant field, among laypeople, science reporters, and even scientists in other fields it has become too common to ascribe behavioral traits to genetic causes. Particularly once filtered through the lens of journalism, these claims too often become “just-so stories,” a disparaging term in the philosophy of science for appealing but unfalsifiable narrative explanations for a particular biological or cultural trait. The extreme case of this thinking leads to genetic determinism, the argument that genes alone determine how an organism develops and which traits it has.

The study of changes in gene expression other than those caused by changes to DNA is epigenetics (epi is Greek for “above” or “outside of”).
Some such changes are heritable, and the mechanisms by which they are inherited are usually collectively referred to as epigenetic inheritance, to differentiate them from traditional genetic inheritance. Epigenetic inheritance is an especially vital area of research in 21st-century biology as understanding of the human genome, and of the forces that make the individual, progresses. Some of the mechanisms by which epigenetic inheritance transpires involve the proteins associated with DNA, or DNA methylation, which can impact the development of the


organism without altering its genes. In humans, one area of research is the transgenerational effects of epigenetic inheritance. For instance, a study of men and women in Overkalix, Sweden, discovered several results that cannot be explained simply by the action of genes: the sons of men who were smokers from an early age had a greater average body mass index (BMI) than the sons of men who were not, but no such effect was seen in their daughters. Similarly, among men who endured famine conditions before their adolescence, their paternal grandsons (the sons of their sons) had a higher mortality rate from cardiovascular disease, but their maternal grandsons (the sons of their daughters) did not. Epigenetic effects like these seem to be caused by sex-linked differences in responses to environmental factors.

An individual’s genetic makeup affects his or her response to pharmaceuticals, which is the focus of pharmacogenetics. While only rarely will a patient’s genes cause an ordinarily therapeutic drug to act as a toxin, far more often those genes will make one treatment more or less effective than another. Patients are accustomed to preferring one treatment to another—when dealing with an ordinary headache or sore muscle, for instance, an individual will often prefer acetaminophen over ibuprofen, or naproxen sodium over aspirin, among the common over-the-counter pain relievers (which is not to say that these preferences are genetically determined). However, mapping out a patient’s metabolism and responses to drugs based on genetic information is a complicated process, and the science is still young. The future it works toward is that of personalized medicine rather than an off-the-rack, “one size fits all” approach. At present, it is especially important in the long-term treatment of conditions such as cardiovascular disease, cancer, HIV, asthma, and diabetes, but in theory it will also help to treat acute illnesses.
Genetic testing examines a patient’s DNA for various genetic information, especially genetic disorders, or as part of a genealogical DNA test. Genetic testing can reveal a subject’s risk of various inherited conditions, as well as recessive genes that could be passed on to offspring. The test is easier to administer but more time-consuming than a blood test, requiring only a swab of saliva or other genetic material. Genetic tests have long been ordered by doctors, particularly when circumstances make a genetic test a sensible part of the diagnostic exam, but in recent years, various companies have begun offering genetic testing directly to consumers. 23andMe, for instance, is a private rapid genetic testing company in California that began a national advertising campaign after a 2012 venture capital round allowed it to lower the cost of its service to $99 (having charged $999 at launch). Founded in 2006, and including Esther Dyson on its board of directors, 23andMe delivers genetic information to users in the form of explorable online profiles that list information about probable ancestral heritage (separated by world region); 120 different health risks influenced by genetic factors, ranging from cancers and Parkinson’s disease to high blood pressure and asthma; 50 inherited conditions such as cystic fibrosis and maple syrup urine disease; 60 genetically determined traits like alcohol flush reaction, bitter taste perception, earwax type, lactose tolerance, malaria resistance, male pattern baldness, and hair curliness; and 24 drug sensitivities with genetic links. The service also allows users to discover other users within the database to whom they are related, from close relatives to distant cousins, though the usefulness of this feature depends on widespread participation. 23andMe cofounder Anne Wojcicki is the wife of Google cofounder Sergey Brin, whose mother has Parkinson’s disease; Brin invested nearly $4 million in the company, underwriting its Parkinson’s project. In late 2013, 23andMe was forced to temporarily suspend new sales of genetic tests while awaiting FDA approval, but it announced its cooperation with the government in that matter and continued to offer information about ancestry.

Patients who are at risk of an inherited disorder may seek out genetic counseling. Genetic counselors are certified by the American Board of Genetic Counseling and possess a master of science degree in their field, which combines medical competence with psychological counseling techniques.
They work as patient advocates, consulting with a physician while helping the patient understand the ramifications of whatever condition they face, or risk passing on, as well as possible treatments. Many genetic counselors are associated with prenatal clinics and work with parents who have or are expecting children with an inherited condition or chromosomal abnormality. Genetic counselors have been certified since 1981, though programs were not accredited until the following decade, and medical genetics has only been




formally recognized as a medical specialty since 1991. Genetic counseling is becoming an increasingly important subfield of medicine and psychiatry as genetic testing becomes cheaper and is supported by more insurance companies, which stand to save money in the long run through preventative care. Genetic counseling adds a new dimension to how families relate to one another and think of the family unit, as it is sought not only by patients who are at risk but also by couples considering the genetic risks to which they may expose their potential children, by parents of children with hereditary conditions, and by orphans or adoptees who lack a formal connection to their biological family.

Bill Kte’pi
Independent Scholar

See Also: Evolutionary Theories; Genealogy and Family Trees; Human Genome Project.

Further Reading
Bowler, Peter. Evolution: The History of an Idea. Berkeley: University of California Press, 2003.
Dawkins, Richard. The Blind Watchmaker. New York: Norton, 1996.
Gould, Stephen Jay. The Structure of Evolutionary Theory. Cambridge, MA: Belknap Press, 2002.
Mayr, Ernst, ed. The Evolutionary Synthesis: Perspectives on the Unification of Biology. Cambridge, MA: Harvard University Press, 1980.
Miller, Geoffrey. The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature. New York: Anchor Books, 2001.

German Immigrant Families

German Americans make up one of the largest ethnic groups in the United States. According to 2012 American Community Survey figures, 49.8 million Americans claim German ancestors. Unlike Hispanics, who receive much more attention in contemporary society, German Americans tend to come from families that have been in the United States for multiple generations.


The first Germans came to the United States in the late 17th century, mostly for religious reasons. The first large wave of immigration occurred in the 18th century: between 1720 and 1775, 108,000 Germans arrived on American shores, some 80,000 of whom settled in Pennsylvania. Between 1785 and 1820, another 21,000 to 25,000 arrived. The Napoleonic Wars in Europe (1798–1815) sent a large number of Germans to the United States for economic reasons. Unlike British immigrants, most Germans brought their entire families with them. Because many early German immigrants were peasants, their greatest desire was to own land, and they settled where land was readily available. Those who could afford to do so bought surrounding lands, which were set aside for other family members.

Social attitudes of German American families were closely tied to their religious beliefs, and German Americans tended to be Lutheran or Reformed (67 percent) or Catholic (33 percent). Before the 1910s, German American families tended to settle near one another, and they established churches, schools, organizations, and newspapers that promoted German American unity. However, the assimilation of German Americans was significantly affected by World War I, when it became important for families to demonstrate that they were loyal to the United States rather than to Germany.

Early Immigrants
Since many early German immigrants were peasants, they began their lives in America as indentured servants. German parents earned a reputation as greedy and cruel because they often sold their children into service; in truth, both they and their children were put to work for up to three years to pay off the debt incurred by their passage to America. In Philadelphia, 35 to 40 percent of German immigrant children who arrived between 1727 and 1820 entered into service in this way.
Demographic records indicate that during that period, 52 to 72 percent of German American settlers in Philadelphia were either married persons or dependent children, and 80 percent of German American males were literate. German Americans who did not become farmers tended to be craftsmen, becoming brewers, bakers, blacksmiths, cabinetmakers, shoemakers, and printers; females became domestic servants. German American immigrants were known for boisterous singing, beer gardens, and elaborate holiday celebrations at Christmas and Easter. The family


unit and the church were the most important elements in their lives, and social activities took place within these groups. Families tended to be patriarchal, and wives and children were expected to concede to the male head. Wife-beating had been common in Germany, but it was not tolerated in the United States. To teach their children the values taught by their churches, German Americans sent their children to parochial schools. While German Americans initially tended to marry within the community, intermarriage became common as the community assimilated. Politically, German Americans were against slavery in the 18th century and against women’s suffrage and Prohibition in the early 20th century.

Early Twentieth Century
Between 1840 and 1900, as many as 6 million additional German immigrants came to the United States. While new immigrant families usually lived in areas known as “Little Germany,” more settled families moved into multiethnic neighborhoods or bought farms in the west or northwest. After the failed German Revolution of 1848, some 6,000 “48ers” came to the United States in search of political asylum. Some 6.3 million German immigrants came to the United States during the first decade of the 20th century. By 1911, 11 percent of all American farms were owned by German Americans. Farm production was carried out by family members, and grown children tended to live nearby.

German Americans were proud of their heritage, but they had assimilated well, and most spoke English at home. In 1892, there were 727 German-language newspapers published in the United States, but that number declined by 25 percent as the century drew to a close. The 1900 census revealed that German Americans were more likely than other immigrant groups to become citizens: some 90 percent were either naturalized or in the process of becoming citizens. That citizenship became a major issue when war broke out in Europe in 1914.
Despite the pro-British stance of most Americans, German American families generally favored American neutrality, and German American leaders lobbied for an arms embargo against Britain and France to prevent American weapons from being used against German families. The German American newspaper Fatherland, which had been established with funds

sent from Germany, announced that its circulation had reached more than 100,000. German Americans were faced with a moral dilemma in 1916, when the New York World published German documents revealing that German officials had been actively attempting to manipulate American public opinion. When Germany resumed unrestricted submarine attacks on Allied shipping in 1917, President Woodrow Wilson finally took a stand and broke off diplomatic relations with Germany. On April 6, the United States declared war on Germany, and German Americans were seen as potential traitors. Members of the clergy were tarred and feathered. Thousands of German American societies closed their doors. The German language was no longer spoken in German American churches. Community leaders called on all German Americans to support the war effort. Throughout the country, community groups organized activities designed to Americanize German Americans. Those who exhibited loyalty to Germany were deported.

Postwar Period
While immigration declined during the Great Depression and World War II, many German immigrants continued to come to the United States. In the 1940s, the number of immigrants reached 1.4 million. That number doubled after World War II, and 2 million German immigrants came to America in the 1950s and another 2 million in the 1960s. The number peaked in the 1980s with 5.2 million new immigrants, and the 1980 census revealed that 17.9 million Americans acknowledged German ancestry; another 31.2 percent reported that they were of mixed ancestry that included German. As German economic and political conditions stabilized in the 1990s, the number of new immigrants fell to 3.7 million. The number of German immigrants increased by 6 million between 2000 and 2010. While German Americans can be found throughout the United States, the largest numbers are still found in Pennsylvania, Michigan, Illinois, and Missouri. German Americans tend to be slightly older and better educated than the general population.
One-third of German Americans have a college degree, and 40 percent are employed in management, business, science, or the arts.

Elizabeth Rholetter Purdy
Independent Scholar

See Also: Catholicism; Ethnic Enclaves; Immigrant Families.

Further Readings
Bass, Frank. “U.S. Ethnic Mix Boasts German Accent Among Surge of Hispanics.” http://www.bloomberg.com/news/2012-03-06/u-s-ethnic-mix-boasts-german-accent-amid-surge-of-hispanics.html (Accessed September 2013).
Carlson, Allan. “The Peculiar Legacy of German Americans.” Society, v.40/2 (January/February 2003).
Fogleman, Aaron Spencer. Hopeful Journeys: German Immigration, Settlement, and Popular Culture in Colonial America, 1717–1775. Philadelphia: University of Pennsylvania Press, 1996.
Grubb, Farley. “Babes in Bondage: Debt Shifting by German Immigrants in Early America.” Journal of Interdisciplinary History, v.37/1 (Summer 2006).
Helbich, Wolfgang, and Walter D. Kamphoefner, eds. German-American Immigration and Ethnicity in Comparative Perspective. Madison, WI: Max Kade Institute for German-American Studies, 2004.
Max-Kade German American Center, Indiana University. “The German Americans: An Ethnic Experience.” http://maxkade.iupui.edu/adams/toc.html (Accessed September 2013).
Niemöeller, Sybil von Sell. Crowns, Crosses, and Stars: My Youth in Prussia, Surviving Hitler, and a Life Beyond. West Lafayette, IN: Purdue University Press, 2012.
Pickle, Linda Schelbitzki. Contented Among Strangers: Rural German-Speaking Women and Their Families in the Nineteenth-Century Midwest. Urbana: University of Illinois Press, 1996.
Schulze, Mathias, et al. German Diasporic Experiences: Identity, Migration, and Loss. Waterloo, ON, Canada: Wilfrid Laurier University Press, 2008.

Gesell, Arnold Lucius

Arnold Lucius Gesell was an American psychologist and pediatrician who pioneered the use of film to study the physical and mental development of normal infants and children. His research influenced childrearing in the United States. Gesell was the first director of the Child Study Center. He applied the rigorous criteria of scientific research to the


issue of growth and development in children and is widely considered the father of the field of child development. Gesell paved the way for contemporary research in motor development, fighting for the rights of physically and mentally handicapped children to receive special education that would enable them to find gainful employment. Additionally, he increased public awareness of and support for preschool education and better foster care and adoption. Gesell also gained fame and influence as a leader of the Child Hygiene Movement; his concerns focused on public health problems in slums, factories, schools, and immigrant screening stations. In 1911, Gesell founded the Yale Clinic of Child Development, serving as its director from 1911 until 1948. Best known for his research on normal child development and his use of new approaches to research and observation, Gesell established developmental norms that remain the basis of most early assessments of behavioral functioning today.

Biography
Gesell was born in Alma, Wisconsin, on June 21, 1880. His father was a photographer with a strong interest in education, and his mother was a successful elementary school teacher. Gesell was the eldest of five children, and watching his younger siblings learn and grow helped develop his interest in children. In his autobiography, Gesell discusses a number of traumatic incidents that he witnessed growing up, such as funerals, sickness, accidents, drownings, quarantines, alcoholism, and seizures. These experiences, according to Gesell, possessed psychological significance for his clinical studies.

Gesell’s hometown later became the focus of analysis in his work titled The Village of a Thousand Souls. Drawing on three decades of town news and gossip, he concluded that despite environmental advantages, many of the local families showed signs of insanity or feeblemindedness, and he argued that social reform was needed.
He supported the science of eugenics and attributed human vices to a combination of hereditary defect and the departure of "fitter" citizens to more challenging environments. Even so, Gesell recognized the importance of both nature and nurture. While at the Los Angeles State Normal School, Gesell met and married fellow teacher Beatrice Chandler; they had two children. Gesell died on May 29, 1961, in New Haven, Connecticut.


Education and Work
With plans to become a teacher, Gesell attended Stevens Point Normal School after graduating from high school in 1896. Among the courses he took was a psychology course under Professor Edgar James Swift, who had trained at Clark University. Gesell graduated from Stevens Point in 1899 and accepted a position at Stevens Point High School teaching U.S. history, ancient history, German, accounting, and commercial geography. The work did not satisfy his intellectual drive, however; he resigned by the end of the year and entered the University of Wisconsin–Madison, where he studied history with Frederick Jackson Turner and psychology with Joseph Jastrow, who had started a psychology laboratory at Wisconsin in 1888. After two years at Madison, Gesell received a B.Ph. degree in 1903.

Gesell served as a teacher and principal at a high school in Chippewa Falls, Wisconsin, for just one year. He then decided to continue his education at Clark University, an early leader in psychology strongly influenced by G. Stanley Hall, the founder of the child study movement. After receiving his Ph.D. from Clark in 1906, Gesell took a professorship at the Los Angeles State Normal School. There, he worked with Lewis Terman, a Clark colleague; together they supported a genetic and psychometric approach to mental retardation and developmental change.

Gesell decided that medical training was essential if he was to conduct more thorough research in the normative study of early development. He studied at the University of Wisconsin Medical School and Yale University, developed the Clinic of Child Development at Yale, and received an M.D. in 1915. He accepted a full professorship at Yale and continued to work as a school psychologist for the State Board of Education of Connecticut, where he helped develop classes to aid children with disabilities.
Initially concerned with retarded development, Gesell came to the conclusion that an understanding of normal infant and child development was indispensable to understanding childhood abnormality. He then began his studies on the mental growth of babies. He developed new methods and used the latest technology for observing and measuring behavior by controlling the environment and stimuli, employing one-way mirrors, photography, and film.

After Gesell retired in 1948, his colleagues founded the Gesell Institute of Human Development in 1950.

Joanne Ardovini
Metropolitan College of New York

See Also: Adolescence; Adoption, Open; Child Labor; Disability (Children); Evolutionary Theories; Hall, G. Stanley.

Further Readings
Ames, Louise Bates. Arnold Gesell: Themes of His Work. New York: Human Sciences Library, 1989.
Boring, Edwin G., H. S. Langfield, H. Werner, and R. M. Yerkes, eds. A History of Psychology in Autobiography, Vol. 4. Worcester, MA: Clark University Press, 1967.
Fagan, Thomas K. "Gesell: The First School Psychologist, Part II: Practice and Significance." School Psychology Review, v.16 (1987).

Girl Scouts

The Girl Scouts marked their 100th anniversary in 2012. According to their Web site, more than 59 million American women have been a Girl Scout at some point in their lives. The Girl Scouts boast that 10 of 17 women (59 percent) in the U.S. Senate, 45 of 75 women (60 percent) in the House of Representatives, and 53 percent of all women business owners are former Girl Scouts. The Girl Scouts have left an indelible mark on families in the United States and internationally.

Juliette Gordon Low founded the Girl Scouts in 1912, in direct response to the Boy Scouts' exclusion of girls. The Girl Scouts were also a response to the growing number of social organizations meant to keep children off the streets, which focused on boys and on working-class and poor families. The needs of middle- and upper-class girls were not addressed; this is where the Girl Scouts came in.

Low was keenly aware of the societal changes coming for girls and women. She was struggling with a marriage that was not working out for her, yet she was financially dependent on the relationship. Low wanted to ensure that the next generation of girls knew how to take care of their money, and even earn it. As a southern woman from Savannah, Georgia, Low also understood the need to maintain the appearance of traditional womanhood. This tension is constant throughout Girl Scouts history: the organization continually balances the desire to prepare girls for "modern womanhood" with respect for "traditional gender roles," whether in 1960 or 2010.

[Photo: This troop of Girl Scouts helped collect more than 10,000 boxes of Girl Scout cookies to send to soldiers overseas in March 2010. Girl Scouts have always had a focus on philanthropy and community service, although badges in areas that emphasized traditional feminine skills such as cooking and sewing have been replaced by more gender-neutral badges.]

At the same time, the Girl Scouts have always had a focus on philanthropy and community service, two ideas seen as feminine. To earn the top award at each level, girls must complete a community service project that is also sustainable; one-time feel-good projects will not do.

Addressing Changes in Society
Because the Girl Scouts have long walked the line between subversiveness and upholding traditional gender roles, they have been fairly agile in addressing issues of a changing society, such as the acceptance of lesbians and the increase in the number of women who are incarcerated.

The number of women with children who are in prison is increasing. Given this disruption to the family unit (or its destruction, when the woman is the sole parent), the state of Maryland approached the Girl Scouts in 1992 with a program intended to help maintain a woman's role as a mother. The program allows the Girl Scouts to be progressive in addressing the needs of incarcerated women and their daughters while maintaining a traditional focus on family life. The details of the programs vary by location, but the overall idea is that girls visit their incarcerated mothers and work on a Girl Scouts curriculum. A woman's role as a mother is related to her chance of recidivism: if the mother-daughter relationship can be maintained during incarceration, the chance of successful reentry into society improves.

While the Boy Scouts have dominated media headlines with their ban on gay scout leaders (and, until 2013, gay scouts themselves), the Girl Scouts have accepted lesbians as leaders and scouts. The organization walked a thin line between upsetting conservative and progressive supporters by stating that it does not investigate the sexuality of troop leaders, and it leaned on its antidiscrimination rules. In 2011, the Girl Scouts made it policy to include a transgender Girl Scout in Colorado.

The Girl Scouts pride themselves on being an inclusive organization. Low was partially deaf and thus made sure to include girls with physical disabilities from the outset. This attitude is seen as the explanation for why the Girl Scouts were the first to desegregate troops in the 1960s.

The Girl Scouts have operated under a girl empowerment model that borrows from liberal feminism and republican motherhood. Girls are expected to be as crafty with glue and popsicle sticks as they are at promoting their annual cookie sale. The cookie sale has also evolved, moving from a mere fundraiser to a focus on entrepreneurship. Today's badge reflects a chief executive officer mentality, as girls are asked to master skills not only in sales but also in marketing and budgeting.

The second wave of feminism dealt the Girl Scouts a blow to membership: girls were not signing up to sell cookies and go camping.
The patriotic and cheerful nature of the Girl Scouts did not align with the radical politics and antiwar movement of the 1970s. By the 1980s, the Girl Scouts had adjusted their message and curriculum in ways that allowed for a huge membership increase: they were giving their audience what it was seeking, a program that sought to empower girls and create tomorrow's leaders.

The Girl Scouts continue to evolve with society. Today's curriculum reflects the national priority of exposing girls to science, technology, engineering, and mathematics (STEM) careers through STEM badges, camps, and special events. The Girl Scouts have partnered with the Society of Women Engineers, and some troops sponsor FIRST Robotics teams. The question of how to increase the number of women in STEM careers is multifaceted, and efforts increasingly focus on younger and younger girls; the Girl Scouts fit into any model that targets elementary-age girls and their parents with the importance of STEM education and careers.

The story of girlhood in the United States can be seen through the lens of the Girl Scouts. As national priorities have moved toward inclusivity and respect across race and class divides, the Girl Scouts have sought to address those challenges.

Veronica I. Arreola
University of Illinois at Chicago

See Also: Adolescent and Teen Rebellion; Boy Scouts; Feminism; Gender Roles; Middle-Class Families.

Further Readings
Anderson, E. K. and A. Behringer. "Girlhood in the Girl Scouts." Girlhood Studies, v.3/2 (2010).
Arneil, B. "Gender, Diversity, and Organizational Change: The Boy Scouts Versus Girl Scouts of America." Perspectives on Politics, v.8/1 (2010).
Block, K. J. and M. J. Potthast. "Girl Scouts Beyond Bars: Facilitating Parent-Child Contact in Correctional Settings." Child Welfare, v.77 (1998).
Denny, K. E. "Gender in Context, Content, and Approach: Comparing Gender Messages in Girl Scout and Boy Scout Handbooks." Gender and Society, v.25/1 (2011).
Kleiber, Shannon. On My Honor: Real Life Lessons From America's First Girl Scout. Naperville, IL: Sourcebooks, 2012.
Taft, J. K. "Girlhood in Action: Contemporary U.S. Girls' Organizations and the Public Sphere." Girlhood Studies, v.3/2 (2010).

Godparents

Becoming a godparent is a part of baptism that involves millions of American families across many different religions. Christianity is the main religion that includes baptism as a religious event in current society, but godparenting has historical roots in Judaism. Godparents are sponsors, or spiritual guides, for the individual being baptized, typically an infant or child. Godparents tend to be adults who follow the same faith and are close to the parents of the child. Godparents can significantly influence their godchildren's lives and families, whether or not they are related. In a secular sense, godparents can also be people who will take care of the child in situations where the parents cannot, but this must be legally arranged.

Capturing information about godparents helps researchers understand other influential adults in a child's life. Some researchers have considered godparents part of the family network, and historical documents that record the names of godparents can be linked to genealogical information. There are also many popular media references to godparents. Godparents influence social history by supporting traditions and forming bonds with families.

Historically, godparenting began with the custom in ancient Judaism of having a sandek hold the baby boy during the brit milah, or circumcision ceremony. Sandek translates from Hebrew to mean "companion of the child," an individual who is honored by the role. Traditionally, sandeks were older men in the family or a rabbi. When Christianity emerged, the ceremony of baptism incorporated some of this tradition from Judaism, and godparents emerged as sponsors for individuals who were preparing to be baptized. Baptism then evolved over the centuries to have different meanings for different denominations of Christianity. For example, some denominations believe in baptizing infants, whereas others wait until an individual is old enough to make his or her own decision.
Early in the history of baptism, parents could act as godparents for their own children, but many denominations created rules about who could and could not be a godparent. The requirements still differ across denominations, but there is a common theme in the title. In the United States today, godparents are almost always not the parents of the godchild, but adults who have been baptized. The number of godparents has also varied throughout history, with some families designating up to 30 godparents for one child. It is now common to have one or two godparents per child, usually a godfather and a godmother.

Godparents are traditionally present at the baptismal event. Some denominations require godparents to be over a certain age, part of the same denomination, and in good standing with a church. Adults who meet some of the requirements, but not all, can be referred to as witnesses rather than official godparents. Other names for godparents include spiritual father or mother, compater or commater, madrina or padrino, and guideparent.

Being a Christian godparent can involve many activities beyond the baptism. Christian godparents are asked to help raise their godchild in the church, to be a spiritual guide or sponsor, and to pray for their godchildren on a regular basis. Gift giving is common from godparents, particularly religious items such as a Bible and a cross; however, many Christian denominations clarify that gift giving is not part of the spiritual role of the godparent.

Godparents can help mentor and guide children on many aspects of life beyond religion. Knowing that a godparent exists as a resource can comfort children, and godparents can be important adults who continue to participate in children's lives as they grow up. Children who have mentors tend to be more resilient, and godparents can fill that mentoring role. Godparents can also be adults whom children and adolescents turn to about issues that are difficult to discuss with their parents.

The popular media offer several examples of godparenting. The novel and movie The Godfather portray a crime family whose leader is called the "godfather." The godfather is in charge of making decisions for the family and gives orders; in the movie, the role is passed on to the son, who becomes the next family leader, or godfather. While there are religious events in the movie, including a christening, the role of the godfather in the story is more about power and leadership.
Godparenting is also associated with fairy tales. Fairy godmothers are typically older women who help guide younger women in the storyline. Cinderella is a classic example of a story about a young girl who has no parents to rely on; a fairy godmother uses her magical powers to help the girl live happily ever after. This type of relationship contributes to the notion that godparents give gifts to their godchildren.

Another cultural reference to godparenting is the origin of the word gossip, which is derived from the term godsib, referring to godparents who are close friends of the parents of the godchild. Popular media and culture will continue to portray godparenting in ways that may or may not align with its religious meaning.

Godparenting is a complex role that can vary in meaning across denominations and in secular contexts, but it can have a large influence on families and traditions in social history.

Caitlin Faas
Mount St. Mary's University

See Also: Baptism; Catholicism; Christening; Christianity; Protestants.

Further Readings
DeLiso, M. Godparents: A Celebration of Those Special People in Our Lives. New York: McGraw-Hill, 2002.
McLaughlin, N. A. and T. E. Herzer. Godparenting: Nurturing the Next Generation. New York: Morehouse, 2007.
Paraclete Press. Everything a Catholic Needs to Know: Becoming a Great Godparent. Brewster, MA: Paraclete Press, 2013.

Grandparenting

Throughout history, grandparents have been regarded as important members of families; in fact, grandparents are the most positively stereotyped family figures in U.S. culture. There are an estimated 65 million grandparents in the United States today. By 2020, this number is projected to reach 80 million, meaning that one in three adults will be a grandparent. Though not all grandparents assume equally active roles in the lives of grandchildren, most choose to be involved with their grandchildren. Grandparents can serve many purposes in the lives of grandchildren, including those of caregiver, mentor, provider, and friend. Grandparents also serve as oral historians and are often considered the vehicles through which family traditions and values are transmitted. Both middle- and later-life adults can become grandparents, and the transition to grandparenthood can be both a complicated and a rewarding experience. Serving in a grandparent role often requires consideration of cultural and societal symbols, social roles, emotional experiences, interactions with grandchildren, and family processes.

Importance of Grandparent–Grandchild Relationships
The grandparent–grandchild tie is second in emotional power and influence only to the parent-child relationship, though it lacks the normal tensions that accompany parent-child bonds. Instead, grandparents are often viewed as the peacekeepers of families, and in some cases they can help mediate a tenuous relationship between parents and children, especially regarding issues centered on family values (e.g., religion). Grandparents provide grandchildren with their first and most frequent interactions with older adults, thus making them the backdrop against which children form opinions about older generations. While grandchildren benefit from the active involvement of grandparents, grandparenting also provides grandparents with a unique sense of purpose and feelings of being valued during middle and late life, a time when older adults' generative developmental needs are greatest.

The transition to grandparenthood results in necessary adaptations to the grandparent's sense of self and identity within the family, as this transition requires grandparents to build and maintain a relationship with a new family member: the grandchild. Though grandparents span a wide range of ages, the transition to grandparenthood often also denotes a life transition, one that suggests that grandparents have moved into a new and later life stage. The adaptive tasks of grandparenthood and aging are likely to affect one another, as grandparents may deal with the reality of aging by focusing on new relationships with grandchildren and the novel ability to share information with new generations.
Grandparents frequently report that they enjoy grandparenthood because it allows them to compensate for parenting mistakes made in earlier years. Common themes, including being more available to grandchildren (generally as a result of retirement from the workforce) and being more financially stable, are frequently cited by grandparents as explanations for the differences between grandparenting and parenting roles. Positive grandparent-grandchild ties may also provide grandparents with greater support potential as they age, because grandchildren with whom they have developed close relationships may assist them in later life. Additionally, some families develop what is known as a grandculture, in which grandparenting traditions transcend multiple generations, creating high levels of continuity and consistency in grandparent-grandchild ties over time. Grandcultures are especially prominent when grandparents report having had close relationships with their own grandparents.

History of Grandparenting
In 17th- and 18th-century America, chronicles of grandparents' roles and relationships with grandchildren were sparse; however, society's notions of hierarchy and patriarchy during this period suggest that grandfathers were likely well respected and revered by family members and served as the heads of their households. The influence of grandfathers continued into the 18th and 19th centuries, when they exerted considerable economic and social influence over families. This power was largely tied to land ownership: because grandfathers were responsible for the distribution of land and property among kin, they had considerable authority over the family. Grandmothers also played important roles during this period, as they were seen as the custodians of family rituals and were deemed responsible for developing and maintaining kin relationships.

In the 19th century, industrialization brought about new technology that undermined the power and influence older generations had held in previous centuries. Improvements in technology, particularly safety and medical equipment, resulted in increases in the life expectancy and longevity of older Americans.
Living longer allowed grandparents to serve in family roles for extended periods of time; however, it also meant that grandparents were more likely to live into old age and require assistance from family members to meet their personal needs, including health care, financial planning, and activities of daily living (e.g., hygiene-related needs). Older generations' reliance on family for daily support meant that many grandparents moved in with children or grandchildren, and the United States saw substantial growth in the number of multigenerational households during this period. As a result, family experts began to argue about the consequences of grandparents living in the same household as children and grandchildren, and a view of the elderly as burdensome and nonproductive emerged. Aging was seen as a medical disease, and older individuals became less valued in the family circle. In particular, grandmothers who lived with children and grandchildren were accused of exerting negative influences on the mother-child relationship, as they were often faulted for assuming the role of disciplinarian, though children and grandchildren did not always welcome their disciplinary input.

During the Great Depression, the economic collapse meant that families were living together out of financial necessity, and research continued on the influence of grandparents' presence in their children's and grandchildren's homes. Out of the Great Depression, however, also came new legislation specifically tailored to aging populations: social security. Social security legislation produced a new era of aging in which grandparents became hopeful about the prospect of being financially and residentially independent. As a result, grandparents felt liberated and were able to assume new roles in families. Financial and residential independence meant that grandparents could serve as companions to grandchildren instead of as second parents; in essence, the role of grandparent switched from authoritarian to friend and confidant.

In 1978, legislators enacted a formal, universal day of celebration titled "Grandparents Day" in an attempt to educate youth about the important contributions that seniors have made throughout history and to allow families to celebrate grandparents.
Grandparents Day is celebrated on the first Sunday after Labor Day, and its purpose is to honor grandparents, to give grandparents the opportunity to show love to their grandchildren, and to help children and families recognize the value of grandparents.

Grandparenting Today
The flexibility and celebration of the grandparenting role witnessed after the Great Depression largely explains the grandparent-grandchild dynamic present in contemporary grandparenting. Grandparents are commonly revered for the social and instrumental supports that they provide to families, including serving as babysitters and providing financial assistance in times of need. While most grandparents assume an active role in the lives of grandchildren, there are no universally recognized rules or guidelines that dictate how grandparents and grandchildren should enact their roles with one another. Instead, expectations for the grandparent-grandchild relationship are generally negotiated on a family-by-family, and even individual-by-individual, basis. The nature of the role assumed by grandparents is often shaped by their experiences and by the meanings and symbols attached to the grandparent role in their particular culture. Expectations about family life and family values are also likely to shape how grandparents and grandchildren interact with one another.

Social changes in families have allowed grandparents to serve in more important, extended family roles than ever before. Historically, family structures resembled pyramids, with few old members at the top and more young members at the bottom; these structures now more closely resemble a vertical beanpole, with roughly equal numbers of individuals in each generation. With fewer family members in each generation, intergenerational relationships have the potential to become more meaningful and take on added significance. Other important social changes, including rises in the number of dual-earner families (i.e., both mother and father work outside the home) and single parents (as a result of both divorce and nonmarital childbirth), decreases in family size, and improvements in the health and longevity of seniors, have also contributed to the increasingly important role of grandparents today. Improvements in health and longevity mean that the grandparent role continues into the young adult, and sometimes middle adult, lives of grandchildren.
This continuity suggests that grandparents and grandchildren are able to forge relationships that last longer than ever before, providing both groups with added support throughout the life course. Unfortunately, grandparents who live longer are not guaranteed lives free of illness, and thus the overall health of the grandparent has implications for the grandparent-grandchild relationship, regardless of the grandparent's age. Grandparents who report being healthy are more likely to assume active roles than those who report frequent or chronic illness.

While improvements in health imply longer lives for grandparents, they also mean that grandparents have greater geographic and residential mobility. In contrast to previous decades, grandparents are now more likely to live farther away from children and grandchildren, resulting in families being spread out across the country. Because of the importance of proximity and its influence on grandparent-grandchild ties, families may spend less time together than previously observed, and thus may experience decreases in relational closeness.

One major shift in the role of grandparents is the enactment of legal rights offered to grandparents in regard to their grandchildren. Given the importance and prominence of grandparents in the lives of grandchildren, legal professionals have begun to recognize and implement laws that afford grandparents rights to grandchildren in a variety of situations. For example, in cases of divorce, grandparents may be afforded visitation or custody rights to grandchildren to ensure preservation of the grandparent–grandchild relationship.

Factors That Influence the Grandparent–Grandchild Relationship
Grandchildren and grandparents often report having close relationships with one another. Common reasons for closeness include enjoying one another's personality, enjoying shared activities, and appreciating the individual attention and support. Grandchildren also frequently cite relating to the grandparent as a role model, teacher, adviser, or source of inspiration as a reason for closeness. While the desire for closeness helps foster the grandparent–grandchild relationship, external forces (namely individual, social, and familial) and their impact on the relationship are also relevant.
The gender of both grandparent and grandchild has often been noted as influencing the grandparent-grandchild relationship, as grandfathers and grandmothers may define their grandparenting roles differently. Grandfathers are often regarded as sources of instrumental support (e.g., providing financial support to grandchildren), whereas grandmothers largely assume social support roles (e.g., teaching grandchildren social skills). Grandmothers are also more likely than grandfathers to assume active roles in the lives of grandchildren and to initiate physical and nonphysical contact with grandchildren in an attempt to preserve relational bonds. Furthermore, research suggests that grandparent-grandchild pairs of the same gender are more likely to forge close relationships, often as a result of shared interests.

Proximity may also play an important role, as family members who live closer together are likely to see one another more frequently, and thus may have a closer relationship than family members who live far from one another. Age is also influential: grandchildren often report that the younger their grandparents are, the more easily they can relate to them. Feeling able to relate to grandparents makes grandchildren more comfortable talking to and interacting with them, allowing the two generations to establish closer bonds.

Culture also has implications for grandparent-grandchild ties. For example, Asian, Latino, and African American cultures are well known for the high value they place on elders, especially grandparents. Families in these cultures are also more likely to live in collective family groups (i.e., multigenerational households), suggesting that grandparent-grandchild duos may be inherently closer as a result of proximity and the frequency of daily interaction.

The middle-generation parent(s) also plays an influential role in the relationship between grandparent and grandchild. Some scholars suggest that parents can act as bridges or walls between grandparents and grandchildren, implying that parents can either encourage or discourage these relationships.
Given that parents are largely responsible for facilitating their children's interactions, especially when children are young, they play an important role in dictating how frequently grandparents and grandchildren interact, and in what contexts or environments the interactions take place.

The socioeconomic status of individuals and families as a whole also has implications for the grandparent-grandchild relationship. Research shows that grandparents are generally more integrated into daily family life in lower socioeconomic groups. In these cases, grandparents may be providing support to single mothers or fathers who lack the means necessary to provide for children. In some cases, grandparents may serve as caregivers to children in an attempt to lessen the financial burden of childcare provided outside the home. Grandparents may also provide financial assistance to families to ensure that their children's and grandchildren's needs remain met. Demographic data show that African American grandparents are more likely than any other racial or ethnic group to serve as surrogate or second parents to grandchildren.

Additions to the Modern-Day Grandparenting Role
As mentioned previously, grandparents have gained a legal presence in families in recent decades. In some cases, grandparents may assume full-time caregiving responsibilities for grandchildren whose parents are unwilling or unable to continue providing needed care. Families in which grandparents serve as the primary caregivers for grandchildren are referred to as grandfamilies, and the individuals within them are referred to as custodial grandparents and custodial grandchildren. Grandfamilies have increased by 75 percent over the past several decades. Current data suggest that 10 percent (5.4 million) of children live in households that include grandparents, and that grandparents are responsible for the basic needs of one or more grandchildren under the age of 18 in virtually one-third of these homes. Single, urban, low-income African Americans are more likely to experience grandfamily living than any other group. Custodial grandparents may acquire custodial grandchildren as a result of parents' death, incarceration, drug use, history of abuse, mental illness, or military deployment.
While some grandparents view the assumption of caregiving as a natural extension of their grandparent role, others struggle with the transitions related to caregiving. Because the assumption of full-time caregiving responsibilities is often unexpected, custodial grandparents frequently face transitional difficulties, including balancing personal lives with newfound caregiving duties. Grandfamilies are also at an increased risk
for poverty and financial hardship, given that custodial grandparents are often unprepared to fund the additional expenses tied to caring for custodial grandchildren. As a result of caregiving, custodial grandparents are often at risk for experiencing declines in physical and emotional well-being; however, scholars suggest that access to social supports and community resources may help improve custodial grandparents' overall health.

In some cases, grandparent-grandchild connections transcend legal and genealogical ties and are formed as a result of death or divorce and subsequent remarriage by either the grandchild's parent or the grandparent. The addition of a stepgrandparent into a family system creates what is known as a multigenerational stepfamily, defined as an extended family system that contains intergenerational stepfamily relationships. One remarriage by each of a child's parents may result in as many as four stepgrandparents. Similar to the factors that influence grandparent-grandchild ties, stepgrandparent-stepgrandchild relationships are affected by social, cultural, and familial contexts, including the age of the stepgrandchild and stepgrandparent at the time of the remarriage, geographic proximity, gender, and the influence of the middle-generation parent. In families in which the relationship between the middle-generation parent and the stepgrandparent is positive, it is likely that parents will encourage a relationship between the stepgrandparent and stepgrandchild; however, in instances where this relationship is negative, stepgrandparents and stepgrandchildren may feel loyalty conflicts regarding their family allegiance to certain members, and thus may be less likely to form a relationship with one another.

Ashton Chapman
University of Missouri–Columbia

See Also: Adoption, Grandparents and; Extended Families; Grandparents Day; Grandparents' Rights; Intergenerational Transmission; Multigenerational Households.
Further Readings
Whitbeck, L. B., D. R. Hoyt, and S. M. Huck. "Family Relationship History, Contemporary Parent-Grandparent Relationship Quality, and the Grandparent–Grandchild Relationship." Journal of Marriage and the Family, v.55 (1993).
Woodbridge, S. "Sustaining Families in the 21st Century: The Role of Grandparents." The International Journal of Environmental, Cultural, Economic and Social Sustainability, v.4 (2008).
Woodward, K. L. "A Grandparent's Role." Newsweek, v.129 (Spring/Summer 1997).

Grandparents Day

Thanks to the baby boom of 1946 to 1964, the number of American grandparents has been rising and will continue to grow, to an estimated 80 million in 2020. Great numbers of grandparents are also becoming primary and secondary child caregivers. Thus, attention to grandparent issues (and all elder concerns) is increasing. Since President Jimmy Carter issued Proclamation 4679 on September 6, 1979, Grandparents Day has been an American national holiday. Championed by West Virginia advocate for seniors Marian Lucille Herndon McQuade (1917–2008), the day is meant to bring families together to honor grandparents, enable them to express love for younger generations, and make the youngest generations aware of the special knowledge and wisdom that grandparents have to offer. McQuade also advocated for "shut-ins," or those in nursing homes, encouraging people to "adopt a grandparent" if they did not have one. Grandparents Day occurs on the first Sunday after Labor Day every September—the month chosen to symbolize the autumn years of life. The omission of an apostrophe in the name is seen as conveying that the day does not belong to grandparents alone but to whole families. Still a grassroots effort, Grandparents Day is not old enough, nor is it publicized enough, to have achieved popular familiarity, nationally recognized traditions, or scholarly publications. This might be, in part, a result of the originator's desire to prevent commercialization of the day. Various educational, religious, and family-oriented organizations, however, use the Internet to convey and establish customs, events, and best practices for Grandparents Day.

Coal miner's wife Marian McQuade called herself "just a housewife," made her 15 children's clothes, had many grandchildren and great-grandchildren, and claimed inspiration from her grandmother's life. She was also, however, a political activist for the elderly, starting as early as 1956 with the founding of West Virginia's Past Eighty Party. At various times, she served as a member or officer of the West Virginia Commission on Aging, Vocational Rehabilitation Agency, Health Systems Agency, and Nursing Home Licensing Board, and she attended the White House Conference on Aging. Supporters of McQuade's campaign for a national holiday included Senator Jennings Randolph and Governor Arch Moore, but McQuade had to petition all of the nation's governors and Congress, as well as lead a national letter-writing campaign to religious, political, and many other organizations. McQuade ran unsuccessfully for the West Virginia State Senate and the House of Representatives. In 1972, before launching her Grandparents Day campaign, she spearheaded the successful effort to have President Richard Nixon declare October 15 National Shut-In Day. In 1989, McQuade, a stamp collector, appeared on a 10th-anniversary postage stamp commemorating Grandparents Day.

The organizations and their Web sites promoting Grandparents Day follow in McQuade's footsteps by emphasizing that it should remain family-oriented and intergenerational. She envisioned private whole-family giving-and-sharing celebrations and visitations—perhaps family reunions. Suggested activities also promote family history and genealogy, such as storytelling time (or, more formally, oral history interviewing), identifying photographs in old family albums, constructing a family tree, and passing on family traditions such as recipes, skills, and social customs. The Web sites also offer kits for educational and community groups to create Grandparents Day events.
Fostering greater public awareness, the National Grandparents Day Council offers press releases, poster contests, YouTube video contests, and Facebook and Twitter campaigns, and selects National Grandparents of the Year and National Forget-Me-Not Winners (the forget-me-not is the flower of Grandparents Day). Since 2004, there has been an official Grandparents Day song, "A Song for Grandma and Grandpa" by Johnny Prill, a singer-songwriter in the folk-polka traditions and a lifelong volunteer performer at nursing homes. The organizations also encourage visitation programs and volunteer bands for nursing homes, and urge that everyone "do something grand" for Grandparents Day by initiating year-round intergenerational activities.

Grandparents Day (sometimes on different designated days) is now celebrated with varying degrees of success in Canada (1995), the United Kingdom (1990), France (1987), Italy (2005), and other countries. In the United States, there are related holidays, and the question arises of whether they compete with or complement one another. President John Kennedy designated May as Senior Citizens Month in 1963, and the Carter administration renamed it Older Americans Month in 1980. There is also Intergeneration Day/Month. A Colorado Springs attorney, Sandy Kraemer, founded the nonprofit Fountain Institute in 1987, renamed the Intergeneration Foundation in 2000. Originally seeking support for the first Sunday in October to be Intergeneration Day, the organization now supports an Intergeneration Month, moved to September to coincide with Grandparents Day. Accepted in the vast majority of states, Intergeneration Day/Month is campaigning for national recognition. Whether multiple designated days, weeks, or months compete with or complement each other remains to be seen, but America's seniors and their concerns warrant year-round attention. Grandparents Day, as a relatively new holiday, needs time and publicity to become a tradition on a par with other, older holidays, but it reflects the social history of the American family in the 21st century.

Katherine Scott Sturdevant
Pikes Peak Community College

See Also: Adoption, Grandparents and; Baby Boom Generation; Child Care; Death and Dying; Demographic Changes: Aging of America; Extended Families; Family Reunions; Genealogy and Family Trees; Grandparenting; Grandparents' Rights; Multigenerational Households; Nursing Homes; Social History of American Families: 1981 to 2000.

Further Readings
American Presidency Project. "Proclamation 4679: National Grandparents Day." http://www.presidency.ucsb.edu/ws/?pid=32826#axzz2fYuYoU8i (Accessed March 2014).
Generations United. "Grandparents Day." http://grandparentsday.org (Accessed March 2014).
Intergeneration Foundation. "Intergeneration Month." http://www.intergenerationmonth.org (Accessed March 2014).
Legacy Project. "The History of Grandparents Day." http://legacyproject.org/guides/gpdhistory.html (Accessed March 2014).
National Grandparents Day Council. http://www.grandparents-day.com (Accessed March 2014).

Grandparents’ Rights Grandparents’ rights most often relate to visitation rights. These rights become particularly important when grandparents want to remain a part of their grandchildren’s lives after the family unit separates. The resulting issue is often whether the courts should be able to order grandparent visitation against a parent’s wishes. Beginning in the early 1960s, many state legislatures enacted statutes that permitted grandparents to seek visitation. These statutes established the standard to determine the substantive issue of whether to grant visitation. By the mid-1990s, every state had a visitation statute, although the statutes varied by state. The seminal case on grandparent visitation statutes is Troxel v. Granville (2000). The grandparents in this 2000 U.S. Supreme Court case petitioned for visitation with their out-of-wedlock grandchildren under a Washington statute that allowed any person to petition the court for visitation rights at any time. The U.S. Supreme Court ultimately held that the Washington statute permitting such visitation unconstitutionally interfered with the fundamental right of parents to rear their children. The Troxel v. Granville case arose out of a nonmarital relationship between Tommie Granville and Brad Troxel, which produced two daughters, Isabelle and Natalie. Once the couple separated in 1991, Mr. Troxel lived with his parents at their home, regularly bringing his daughters there for weekend visitation. After Mr. Troxel committed suicide in May 1993, the grandparents continued to regularly see Isabelle and Natalie until Ms. Granville told them in October 1993 that she wanted to limit their visitation with the children to one short visit per month.

In December 1993, the grandparents filed a petition in the Washington courts to obtain visitation rights with Isabelle and Natalie under two state statutes, only one of which was at issue in this case. Specifically, Wash. Rev. Code § 26.10.160(3) (1994) provided: "Any person may petition the court for visitation rights at any time including, but not limited to, custody proceedings. The court may order visitation rights for any person when visitation may serve the best interest of the child whether or not there has been any change of circumstances." At trial, the grandparents asked for two weekends of overnight visitation per month and two weeks of visitation each summer. Ms. Granville did not entirely oppose visitation, but preferred only one day of visitation per month with no overnight stay. In 1995, the Superior Court ordered grandparent visitation of one weekend per month, one week during the summer, and four hours on each of the grandparents' birthdays. Subsequently, on remand, the Superior Court found that visitation was in Isabelle's and Natalie's best interests. Ms. Granville appealed the grandparents' visitation order, during which time she married a man who formally adopted the children. The Washington Court of Appeals reversed the lower court's order and dismissed the grandparents' petition for visitation, holding that nonparents lacked standing to seek visitation under the Washington statute unless a custody action was pending, which the court found consistent with parents' fundamental liberty interest under the Constitution in the care, custody, and management of their children. The grandparents appealed to the Washington Supreme Court, which affirmed, finding that the Troxels could not obtain visitation of Isabelle and Natalie because the Washington statute unconstitutionally infringed on the fundamental right of parents to rear their children.
The Washington Supreme Court gave two reasons. First, it determined that the Constitution permits state interference with parental rights only to prevent harm or potential harm to the child, and the statute failed that standard because it required no threshold showing of harm. Second, it found that the statute was too broad because it allowed any person to petition for forced visitation of a child at any time. The U.S. Supreme Court agreed to hear the case and affirmed, holding that the broad Washington


A child pushes his grandmother on a plastic scooter. Grandparents often play an important role in their grandchildren’s lives, and grandparents’ rights become particularly important when grandparents want to remain in this role after the family unit separates. By the mid-1990s, every state had a statute that permitted grandparents to seek visitation.

statute violated the substantive due process rights of the fit custodial mother. Specifically, the statute was an unconstitutional infringement on Ms. Granville's fundamental right under the Constitution to make decisions concerning the care, custody, and control of her two daughters. The court did not, however, consider whether the Due Process Clause required all nonparental visitation statutes to include a showing of harm or potential harm to the child for the grant of visitation. As a result of the U.S. Supreme Court's ruling in Troxel, some state statutes on grandparent visitation have been successfully challenged on the constitutional ground that the statutes unjustly interfere with parents' fundamental rights. Other state statutes on grandparent visitation, however, have not been challenged, although they are subject to Troxel if they are challenged in the future. Commentators have pointed out, however, that the Supreme Court's decision in Troxel may be limited because it applies only to an exceedingly broad statute, which allowed visitation by any person at any time.

Nonetheless, states must consider the competing public policies when tailoring their approaches to grandparent visitation and rights. On the one hand, grandparent visitation may be best for the child. On the other hand, grandparent visitation statutes raise the issue of whether, and to what extent, a fit parent is free to decide with whom the child associates without state intervention. Both considerations are important, but they compete and must be balanced. In practical terms, grandparents often play an important role in their grandchildren's lives, which may be jeopardized when the family unit dissolves. In such a case, grandparents may choose whether to assert statutory rights to visitation if their state statute allows it, but they are subject to the limitations imposed by the U.S. Supreme Court in Troxel v. Granville.

Margaret Ryznar
Indiana University


See Also: Adoption, Grandparents and; Child Custody; Grandparenting.

Further Readings
Ryznar, Margaret. "Adult Rights as the Achilles' Heel of the Best Interests Standard: Lessons in Family Law From Across the Pond." Notre Dame Law Review, v.82/4 (2007).
Troxel v. Granville, 530 U.S. 57, 70 (2000).
Washington State Legislature. "RCW 26.10.160: Visitation Rights—Limitations." http://apps.leg.wa.gov/rcw/default.aspx?cite=26.10.160 (Accessed December 2013).

Great Awakening

What came to be known as the Great Awakening refers to a series of Protestant religious revivals centered in the northern American colonies from 1734 to 1743 that transformed the religious and social life of colonial churches, whether congregations embraced it or reacted against it. While the factors that gave rise to it and the meaning assigned to it are sometimes contested, historians agree that it had a profound influence on colonial society and on the emergence and growth of American evangelical Christianity that followed.

British and Colonial Precursors

Localized awakenings flowing from the Connecticut River Valley of western Massachusetts and the Raritan Valley of eastern New Jersey were already part of the colonial experience prior to the Great Awakening, reaching back to roots across the Atlantic. Protestant British immigrants from those regions traced their heritage from the birth of the church at Pentecost, which they read about in their Bibles, to the Reformation, especially as embodied in John Calvin's theology and ministry in Geneva. This included some English Puritans (known as Congregationalists in America), Scottish Presbyterians, German Reformed, and the Dutch Reformed. They expected God to act in extraordinary ways during times of renewal.

The expectation of God initiating extraordinary renewal was sometimes incompatible with another Puritan and Presbyterian view that God worked through ordinary patterns of devotion such as church attendance, Bible reading, prayer, family devotion, and the sacraments of baptism and the Lord's Supper. These two views often competed with one another. Most revivalist Puritan ministers in New England (who became known as New Lights) had roots in the British northwest—a center for devotional piety and evangelism and the birthplace of Quakerism and Methodism—and many New Englanders who responded to evangelistic messages were from that region. The Puritan dynamic tension between ordinary sacramental means of grace and extraordinary conversion into a new birth was resolved in a different way among immigrant Presbyterians from Scotland and Ulster, who settled in New Jersey and Pennsylvania. They brought with them a tradition of seasonal communion, or Holy Fairs—a time when Presbyterians would gather together for several days to celebrate communion and hold special services in expectation of renewing grace from God, combining the sacraments with conversion and new birth. These were communal events in which the entire family would attend and participate.

Jonathan Edwards and the Youthful Beginning to the Revival in Northampton

It was into this environment that Jonathan Edwards was born. His father, Timothy Edwards, was not untypical among Puritans in his role of overseeing spiritual nurture and discipline. While he could be warm and affectionate toward his children, suppressing the willfulness of children (who, Puritans believed, inherited sinful natures from their first parents, Adam and Eve) figured prominently in his child rearing. Philip Greven notes the following in The Protestant Temperament: Patterns of Child-Rearing, Religious Experience, and the Self in Early America:
Yet the sources are curiously silent about the actual practices by parents in their conquest of their children’s wills. What survives is mostly a literature of advice and injunctions, which testifies to the conviction and intention but only occasionally hints at the methodology of conquest.



Jonathan Edwards’s account of the awakening in Northampton, Massachusetts, begins with reforming youth. He had preached a sermon in late 1733 on the evils of “mirth and company-keeping” on Sabbath evenings, and urged family heads to govern their families well and keep children home. While the parents did little to follow these admonitions, the youth began following this advice on their own accord. The death of a young man and woman soon after, combined with preaching that prompted soul-searching and spiritual distress, soon fanned the flames of revival from children to adults throughout that region in 1734. The Birth and Growth of an Influential Faith Narrative It was not until springtime in 1735, when Edwards was concerned about the waning piety of his congregation, that a pastor in nearby Hatfield sent a letter with an account of the revival to Benjamin Colman, an influential Congregational pastor in Boston. Colman, in turn, had this account of the localized Connecticut River valley revival in Massachusetts published in the New England Weekly Journal, introducing it to a much wider audience. Encouraged by the response, Colman asked Edwards for a fuller account, to which Edwards obliged by sending eight pages—the first draft of what would become the Faithful Narrative, which included two conversion stories designed to elicit soul searching among the readers and to display God’s redemptive work in renewing people. One was an emotional deathbed conversion of a young adult woman; the other was from a little girl. Delighted by what he read, Colman forwarded it to Isaac Watts, the famous hymn writer, and John Guyse in London who were influential dissenters (sometimes called “nonconformists”) from the state Church of England. Watts, like Edwards, was concerned with youth and their education. He had previously written moralistic poems for children meant to instruct and edify them. 
This collection of poems was widely used as a children’s textbook in the schools for several generations. He was also a logician, having written a textbook on that subject, and in tune with the spirit of the Enlightenment with its ideas about the importance of experiential knowledge. He had published an essay outlining specific criteria to judge the authenticity of a historical narrative
five years before receiving Edwards's account of the revival in Northampton. Watts and Guyse asked Edwards for an expanded account, which was subsequently sent to Colman, who first published the abridged account as an appendix to a sermon. When Watts received this modified account, he wanted it published, though he made several editing suggestions on how to "improve" it based on his Enlightenment ideas. Edwards was not happy with all the suggestions, but through this editorial process, the Faithful Narrative was published in London during the fall of 1737. What had started as a secondhand account of a local revival had now crossed the Atlantic Ocean and been transformed, through the increasing power of publishing, into a means to inspire wider and more general awakenings on both sides of the Atlantic.

George Whitefield

What had been scattered local affairs came to be connected by much more than the publishing industry. More than through any other single person, awareness and spread of a more general spiritual awakening were realized through the itinerant preaching tour of George Whitefield, accounts of whose successful outdoor revivals in London were published in American newspapers just as his ship arrived in late 1739. His east coast evangelistic ministry, unlike his earlier mission trip to Georgia, was wildly successful in terms of the numbers of people it attracted and the religious impulses it generated. Whitefield had earlier been converted at Oxford University, in part influenced by the Wesley brothers, John and Charles, who had in turn been deeply influenced by German Pietists. He used his magnetic personality and theater training to full effect and attained celebrity status, drawing huge crowds. Outdoor preaching not only solved the problem of limited capacity in the churches but also enabled him to transcend denominational boundaries and to spread his message of salvation to a much wider audience.
Whitefield also understood the power of publishing and the use of media in spreading God’s message. He had read Edwards’s Faithful Narrative and echoed the belief in God’s saving providence in his 1739 biography, A Faithful Narrative of the Life and Character of the Reverend Mr. Whitefield. In the same year, an enthusiastic supporter published a hagiographic account of Whitefield’s life
and ministry, which bolstered his celebrity status but also prompted charges of shameless self-promotion from critics. Polarized perceptions of Whitefield and his use of media and commerce were prevalent on both sides of the Atlantic. Critics, religious and otherwise, accused him of stirring up "enthusiasm" (understood as derogatory) through marketing and emotional manipulation. Friends viewed such methods as a means by which God's purposes were fulfilled and interpreted opposition as a hopeful sign, in that the Devil knows a significant work of God when he sees it and seeks to destroy it. Even so, the emotional outpouring (including bodily agitations and involuntary vocalizations) that broke out in some revivals had even revivalist ministers such as Edwards preaching on "The Distinguishing Marks of a Work of the Spirit of God" to discern between a work of God and the works of men, or worse, the Devil.

Antiestablishment Leanings and Crossing Social Boundaries

The tension between revivalist and antirevivalist parties sometimes hides the breakdown of some social boundaries during this period. In the American colonies, clergy were highly educated. Revivalists such as Presbyterian William Tennent founded alternative schools to train like-minded pastors during a time when their Presbytery restricted ministers to graduates of Yale and Harvard. His son, Gilbert, wrote about "The Danger of an Unconverted Ministry." And while some of these alternative schools for revivalist pastors grew into highly respectable colleges, revivals also birthed less-educated itinerant preachers and evangelists who emulated the extemporaneous preaching style of George Whitefield. The result was a less educated clergy who felt less bound to the authority of the denominational structures to which their churches were connected. Church separations ensued.
Some colonies, such as Connecticut, keenly felt a threat to their social structure and passed laws designed to stop the more radical traveling evangelists from gaining influence. More troubling to some was the breakdown of social conventions regarding gender and race. Charles Chauncy, the antirevivalist minister of First Church in Boston, complained that women and girls and even "Negroes" had taken up preaching. While it did not start out as an egalitarian movement, American white evangelicalism's commitment to preaching freedom in Christ for all people resulted in a high rate of African American conversions and would factor into the antislavery movement that led up to the Civil War. Edwards's Northampton church admitted nine African Americans into membership during his pastorate, including one of his slaves, pointing to the fact that there were limits to social change. Such was also the case with George Whitefield. While in South Carolina, he scolded slave owners for their poor treatment of slaves and for failure to evangelize them, though he was also a slave owner. But it was a young slave, Phillis Wheatley, in her widely published poem eulogizing Whitefield, who expressed the growing sentiment of African slaves:

Take him, ye Africans, he longs for you,
Impartial Saviour is his title due;
Wash'd in the fountain of redeeming blood,
You shall be sons, and kings, and priests to God.

While she accepted slavery, she also argued, in a published letter to a Native American pastor, that blacks had "natural rights" that stood in contrast to slavery. She joined a substantial movement of blacks and whites who desired to send Christian Africans back to Africa to evangelize their people. If such sentiments grew slowly in the north, their growth was stunted in the south, where slave owners feared that such talk of freedom in Christ might lead to insurrection, as it later did in Nat Turner's revolt in Virginia.

Revolutionary and Evangelical Trajectories

The extent to which the Great Awakening sowed the seeds of the American Revolution and independence is a matter of debate among historians. Some contend that the egalitarian mindset instilled by this form of evangelical Christianity also informed and transformed ideas about political reform and, eventually, rebellion against tyranny.
Others rightly point out that evangelicals held no monopoly among patriots or the founding fathers. They agree that the Great Awakening helped place the colonies on a democratizing path that would come to be more fully realized in the next century and in what some have called the Second
Great Awakening that accelerated trends that began in the first.

Evolving Families, Church, and Society

Within the context of these societal and religious upheavals, family structure was not static. The Great Awakening began within the confines of a patriarchal Puritanical structure that regulated not only the immediate family, but also larger church and civil affairs. From true conversion flowed good and orderly families, churches, and society—not the reverse. That is why ministers like Jonathan Edwards prayed for awakenings. Ironically, this movement to renew and preserve the created order served to undercut the very lines of authority that held it together. The awakening in Northampton began with responsive youth whose parents were not following the admonitions of Jonathan Edwards. Many churches later split, and ministers were fired over disagreements about outbreaks of enthusiasm in the revivals. Uneducated clergy multiplied, and the traditional New England parish system was weakened. Ministers began to lose their standing in civil affairs. Such leveling within the family, the church, and society contributed to the emergence of new forms of individualism in the United States and planted seeds for greater participation among many who had formerly been excluded.

Douglas Milford
University of Illinois at Chicago

See Also: Christianity; Evangelicals; Fundamentalism; Social History of American Families: Colonial Era to 1776.

Further Readings
Edwards, Jonathan. "The Great Awakening." In Works of Jonathan Edwards, Vol. 4, C. C. Goen, ed. New Haven, CT: Yale University Press, 1972.
Greven, Philip. The Protestant Temperament: Patterns of Child-Rearing, Religious Experience, and the Self in Early America. New York: Knopf, 1980.
Kidd, Thomas S. The Great Awakening. New Haven, CT: Yale University Press, 2007.
Kidd, Thomas S. The Great Awakening: A Brief History With Documents. Boston: Bedford, 2008.
Lambert, Frank. Inventing the "Great Awakening." Princeton, NJ: Princeton University Press, 1999.
Marsden, George M. Jonathan Edwards: A Life. New Haven, CT: Yale University Press, 2003.
Schmidt, Leigh E. Holy Fairs: Scotland and the Making of American Revivalism, 2nd ed. Grand Rapids, MI: Eerdmans Publishing, 2001.
Stout, Harry S. The Divine Dramatist: George Whitefield and the Rise of Modern Evangelicalism. Grand Rapids, MI: Eerdmans Publishing, 1991.
Wheatley, Phillis. "On the Death of the Rev. Mr. George Whitefield." In Inventing the "Great Awakening." Frank Lambert, ed. Princeton, NJ: Princeton University Press, 1999.

Great Society Social Programs

The term Great Society refers to the civil rights laws and many federal social programs enacted in the mid-1960s under President Lyndon B. Johnson, at a time when his party, the Democrats, had large majorities in the U.S. House and Senate. Many Great Society programs sought to enhance educational training and quality of life for groups that had experienced economic hardship and/or discrimination. This segment of Great Society programming was known as the "war on poverty." Great Society programs were—and remain—linked to American family life in many ways. Aspects of some programs were designed to remediate what policymakers perceived as skill and motivational deficits in children and adolescents, stemming from a socially and materially deprived family/home life. In some cases, programs sought to facilitate or promote intergenerational participation within families. In addition, increased access to education and health care from Great Society programs likely would have improved the quality of life for many families. On the other hand, according to critics, Great Society programs (and social-welfare programs more generally) may have enabled socially problematic behavior such as out-of-wedlock childbearing.

Overview of Great Society Legislation

President Johnson coined the term Great Society in a series of May 1964 speeches. In a May 22 commencement address at the University of Michigan, he

648

Great Society Social Programs

declared that “The Great Society rests on abundance and liberty for all. It demands an end to poverty and racial injustice, to which we are totally committed in our time.” Earlier that month (May 7) at Ohio University in Athens, he stated the following: It is a society where no child will go unfed, and no youngster will go unschooled. Where no man who wants work will fail to find it. Where no citizen will be barred from any door because of his birthplace or his color or his church. Where peace and security is [sic] common among neighbors and possible among nations. Later that year (July 2), the Civil Rights Act of 1964 was enacted, with the aim of ending official segregation and discrimination in both public and private sectors. Mainly in the south of the United States, laws and policies enforced segregation, including the famous examples of whites-only lunch counters at Woolworth’s in Greensboro, North Carolina, and segregated seating on city buses in Montgomery, Alabama. Key sections of the 1964 Civil Rights Act included Title II, banning discrimination in public accommodations such as restaurants and hotels; Titles III and VI, outlawing discrimination by state and local governmental actors; and Title VII, prohibiting employment discrimination. (Many of the titles banned discrimination based on race, color, religion or national origin, but Title VII also included a prohibition on sex discrimination.) A year later, Congress passed the Voting Rights Act of 1965. This law banned practices (e.g., literacy tests), present mainly in the south, that had prevented many African Americans from voting, even though the U.S. Constitution’s Fifteenth Amendment (1870) had outlawed racial discrimination in voting. The final piece of the 1960s civil rights trilogy was the Civil Rights Act of 1968, which sought to attack housing discrimination. 
The other main facet of the Great Society was the passage of many social programs, some similar in aim and structure to programs of the 1930s New Deal, such as Social Security. In fact, Johnson worked in the 1930s as a New Deal program administrator and admired President Franklin Roosevelt. Among the best-known Great Society programs are Medicare (a federal health insurance program for the elderly), Medicaid (a joint federal-state health insurance program for low-income persons of all ages), and Head Start (a program of preschool education, preventative health care, and other services for low-income children ages 3–5). Despite changes to some Great Society agencies and programs, most of them—including these three—remained intact as of 2013. In addition, spin-offs from existing programs, such as Early Head Start for ages 3 and younger (enacted in 1995), have been periodically created.

A key element of Great Society legislation was the 1964 Economic Opportunity Act, creating the Office of Economic Opportunity (OEO). This office administered not only Head Start but also programs including the Job Corps (residential job-training centers for at-risk youth ages 16–24) and Volunteers in Service to America (VISTA). In the 1970s and 1980s, the OEO had its name changed and many of its programs transferred to other agencies.

Great Society programs also played a large role in increasing access to higher education. Joseph Califano, a domestic policy advisor to Johnson, wrote that as of 1999, "nearly 60 percent of full-time undergraduate students receive federal financial aid under Great Society programs and their progeny."

Further, many programs enacted well after the 1960s reflect the philosophical underpinnings of the Great Society. President Bill Clinton oversaw enactment of AmeriCorps (government-funded domestic community-service stints) and the Children's Health Insurance Program (CHIP, funding states to insure previously uninsured children whose families earned too much to qualify for Medicaid). Portions of Clinton's 1996 welfare reform (Personal Responsibility and Work Opportunity Reconciliation Act), which imposed work requirements but also assisted participants (e.g., with childcare), would be seen by some as embodying Johnson's original Great Society vision. President George W. Bush's Healthy Marriage Initiative, providing marital counseling/education geared toward low-income couples, appears consistent with the education/empowerment aspect of the Great Society, but many protested the exclusion of same-sex couples as violating equality principles. Finally, President Barack Obama's Affordable Care Act seeks to provide health insurance to Americans lacking either private/job-based coverage or coverage under existing government programs; the act does so by expanding eligibility for Medicaid and providing subsidies to others to purchase insurance. Another Great Society–like program, initiated on a small scale, is the Department of Education's Promise Neighborhoods, in which distressed communities receive various services to facilitate a college and career orientation among children.

President Barack Obama signing the Affordable Care Act on March 23, 2010. The act expands Medicaid eligibility and provides subsidies to purchase health insurance.

The Great Society and American Families

Great Society programs have responded to and affected American family life in many ways. Eileen Boris, drawing from OEO documentation, characterizes some Great Society programs as seeking to remediate young participants' deficient cognitive and social skills resulting from "failure of the home" and "inadequate environments." As reviewed by Gareth Davies, another perspective among some social-welfare administrators was that reaching out to children was perhaps the best way of ending families' cycles of welfare receipt; the implication appears to be that many parents' difficulties were beyond repair.
Presaging the Great Society, the section of the 1956 Social Security Amendments on Aid to Dependent Children (the welfare program at the time) makes repeated reference to the need to "strengthen family life" in the care of needy children, as an aim in awarding and administering federal funds to states. Some Great Society programs seek to encourage intergenerational involvement within the same family, such as Head Start's strong encouragement of parents' participation with their children. Also, amendments passed in 2000 to the Older Americans Act (1965) authorized support (e.g., training and respite care) for family caregivers of elderly persons. Other Great Society programs, though not promoting family interaction per se, likely have benefited families by expanding access to health care and reducing elderly poverty (i.e., Medicare and Social Security). Additional programs to enhance family stability and economic security, known as the Family Welfare Act, Family Security System, and Family Assistance Plan (the latter under President Richard Nixon), were proposed during the 1960s but not enacted. However, the Earned Income Tax Credit, enacted in 1975, arguably may be seen as deriving from earlier proposals to aid low-income working families.

Great Society programs also had what many would consider damaging aspects for individuals and families. Some Great Society leaders and programs held to stereotypical family roles, as reviewed by Boris; whereas men in the Job Corps received instruction in skilled trades, women received home-economics training. Some Great Society program administrators made clear that their aim was to perpetuate the male-breadwinner household. Another criticism is that the Great Society's focus on job training and personal rehabilitation would not necessarily be effective in communities that lacked job openings. The Great Society was also criticized for not tackling hunger and malnutrition. Proposals in the 1960s for government to create public-works jobs and/or provide monetary assistance to ensure impoverished families a minimum subsistence level, though welcomed by some, had the effect, in critics' view, of transforming the ethos of the Great Society from one of opportunity to one of entitlement.

Program Effectiveness

Overall, were Great Society programs successful in improving quality of life for disadvantaged families? Program supporters argue that they were, citing the lifting of roughly 6 million people out of poverty and improvements in access to health care and education. Paradoxically, welfare rolls increased concurrently, but this pattern may have stemmed partly from welfare-rights organizations mobilizing previously eligible people to sign up. More pessimistically, critics cite developments such as a declining rate of two-parent families among African Americans during this era. However, co-occurrence of a policy and some societal phenomenon does not prove that one caused the other.


Former senator and Johnson/Nixon advisor Daniel Patrick Moynihan, a Ph.D.-level sociologist, is perhaps most famous for the Moynihan Report. This 1965 document focused on the growing prevalence of single-mother families among African Americans and what Moynihan judged as its harmful implications. In 1992, however, Moynihan termed the suggestion of a causal connection between the Great Society and family breakdown "manifestly absurd. The breakdown was there in the data before the Great Society, just as the welfare system was there before the Great Society."

Some pieces of Great Society legislation mandated that the programs being created receive rigorous evaluation of their effectiveness, and such investigations of specific Great Society programs continue to this day. Head Start and other early interventions have received extensive study. The ideal research design is one that randomly assigns children to either an experimental group, which receives the program, or a control group, which does not. Some educational evaluations have used randomized designs, whereas others have used alternative methods, including what is known as a "discordant sibling design." The latter compares sibling pairs in which one child has participated in a program and the other has not, under the assumption that the two children, having grown up in the same household, are largely alike on background characteristics.

Studies of conventional Head Start programs often find initial gains in cognitive/academic performance among program children, but these gains tend to fade within a few years. Even with this fade-out, however, some studies find long-term benefits from Head Start participation on young-adult outcomes such as educational attainment, successful employment, avoidance of crime, and delayed parenthood. More intensive childhood intervention programs, such as the Abecedarian Project, also show strong effects, although such programs may be cost-prohibitive for large-scale implementation. Nobel-laureate economist James Heckman, among others, argues that "soft skills" such as motivation and attention are important contributors to individuals' life success and are capable of being increased through early intervention.

Another way to judge Head Start's effectiveness is by the percentage of eligible children and families served. Recent figures indicate that Head Start serves roughly 1 million children annually, but that is only around 40 percent of eligible preschool-age children (Early Head Start serves a much lower percentage of its eligible population).

The Job Corps has also undergone extensive evaluation. On economic measures such as income and attainment of full-time employment, benefits from program participation appear modest, at best. Positive effects in the domains of education and crime avoidance are stronger. Job Corps serves roughly 60,000 people per year. Great Society programs in many other areas, such as aid for disadvantaged schoolchildren (Title I of the Elementary and Secondary Education Act) and proposed tax-credit plans, have also been evaluated.

Conclusion

Great Society programs were not always well conceptualized and implemented. The surrounding turmoil of the 1960s—peaceful protests and social movements, along with urban riots, assassinations of political leaders, and the Vietnam War—created additional difficulties in terms of political support and federal budget constraints. Still, Great Society programs remain a major part of American life nearly 50 years after their enactment, to the benefit of many.

Alan Reifman
Texas Tech University

See also: ADC/AFDC; Earned Income Tax Credit; Head Start; Healthy Marriage Initiative; Medicaid; Medicare; Moynihan Report; New Deal; Poverty and Poor Families; Social Security; TANF; War on Poverty; Welfare; Welfare Reform.

Further Readings

American Experience. "LBJ" (1991). Public Broadcasting System documentary. http://www.pbs.org/wgbh/americanexperience/features/primary-resources/lbj-michigan (Accessed December 2013).

Boris, E. "Contested Rights: How the Great Society Crossed the Boundaries of Home and Work." In The Great Society and the High Tide of Liberalism, S. Milkis and J. Mileur, eds. Amherst: University of Massachusetts Press, 2005.

Califano, J. A., Jr. "What Was Really Great About the Great Society: The Truth Behind the Conservative Myths." Washington Monthly (October 1999). http://www.washingtonmonthly.com/features/1999/9910.califano.html (Accessed December 2013).

Davies, G. From Opportunity to Entitlement: The Transformation and Decline of Great Society Liberalism. Lawrence: University Press of Kansas, 1996.

Johnson, L. B. "Remarks in Athens at Ohio University, May 7, 1964." American Presidency Project. http://www.presidency.ucsb.edu/ws/?pid=26225 (Accessed December 2013).

Moynihan, D. P. "How the Great Society 'Destroyed the American Family.'" Public Interest, v.108 (1992).

Green Card Marriages

Each year, tens of thousands of American citizens marry citizens of foreign countries, then file sponsorship applications for permanent U.S. residency visas on their spouses' behalf. Those visas, called green cards, are government-issued documents that legalize and authenticate a foreign-born immigrant's status as a permanent resident of the United States. While many transnational marriages are grounded in companionship and commitment, others are not and can be considered "green card marriages."

"Green card marriage" is a colloquial term for a marriage of convenience between a citizen or legal resident of the United States and a foreign-born person who would otherwise be ineligible for permanent residency in the United States. Marriages of convenience are generally intended to achieve some strategic purpose (rather than being based on interpersonal relationship considerations such as interest, commitment, affection, or love), and are often regarded as fraudulent. One implication of the term green card marriage is that the couple has entered the union under false pretenses; such unions are also referred to as sham marriages. People who enter into a green card marriage do so for the personal or mutual gain that results from one partner's permanent U.S. residency.

To understand green card marriages, one must first have some insight into immigration policy in the United States and the role that green cards play in domestic residency. A green card is federally granted documentation of an individual's status as a permanent resident of the United States. A green card holder, then, is someone who has been granted permission to live and work in the United States on a permanent basis. A person can become a permanent resident of the United States in several ways. Most individuals become green card holders when either a family member who is a citizen or who already has legal permanent resident status files on their behalf, or when an employer who operates in the United States sponsors their immigration application. Others may become permanent residents through various humanitarian statuses, such as those seeking asylum.

While multiple avenues toward permanent residency exist, marriage to a U.S. citizen is one of the fastest and most secure ways of obtaining legal residence. Approximately 2.5 million foreign nationals have obtained green cards through marriage to an American citizen in the last decade; nearly a million more have obtained green cards through marriage to legal permanent residents of the United States who are not citizens. Moreover, the numbers are growing; issuance of marriage-based green cards more than doubled between the mid-1980s and the 2000s, and has quintupled since 1970.

Several marriage-based visa options exist, depending on whether the sponsoring spouse is a U.S. citizen or a legal permanent resident, and on whether the person requesting the green card came to the United States legally or illegally. Anyone who entered the United States without proper documentation, or who came legally but whose documentation has since expired, can be considered an illegal immigrant. People who have been cited for illegal immigration more than once and/or who were deported at least once may be permanently barred from immigrating to the United States. In such a case, access to a green card is extremely difficult.

The practice of obtaining residency through marriage is legal; however, a marriage that is intended solely for the purpose of establishing permanent residence for someone is a crime that is punishable by law. When U.S. citizens petition the federal government for green cards for immigrant spouses, their requests are reviewed and investigated for inconsistencies by federal officers from the Department of Homeland Security's U.S. Citizenship and Immigration Services (USCIS). USCIS agents look for evidence of the legitimacy of the marriage; in particular, they look for homogamy, or sameness, among couples. Those who appear to have had shorter relationships, or whose personal characteristics and lifestyles, such as race, religious preference, primary language, socioeconomic status, and recreational preferences, seem too disparate may be deemed suspicious. Suspicious cases are sequestered, and further investigations of fraud commence.

Investigations of fraud typically include in-depth interviews with the married couple. These interviews are conducted separately, with spouses questioned individually. Husbands and wives are asked the same questions, and responses are later compared for legitimacy. Questions may pertain to a wide range of topics, including household layout and composition, schedule of daily activities, sexual preferences and behaviors, and other personal and familial preferences and inclinations that someone with intimate knowledge of the other should be able to answer.

Most transnational marriages are not investigated, and most applicants comply with government mandates for issuance of green cards. The American who sponsors his or her spouse must first provide documentation supporting his or her citizenship or status as a permanent resident. The sponsor must take full responsibility for his or her spouse's basic needs (e.g., food, clothing, and housing) and social acculturation needs (e.g., English language education and establishment of social connections and support) for a period of three years after arrival in the United States. Moreover, the sponsor must provide financial documentation demonstrating his or her ability to support the spouse at a standard of living above the designated federal poverty level for the same three-year period.

Green cards are provisionally issued; a couple must remain legally married and be able to provide documentation of coresidence and financial interdependence for two years before the conditional status is removed. Upon removal, couples are free to divorce without fear of deportation or other complications for the spouse's legal permanent residency status.
Three years after the conditional status is removed, spouses are eligible for American citizenship. Once spouses are citizens of the United States, they can then begin the sponsorship process to bring their own relatives stateside; new spouses, parents, and unmarried minor children can all be sponsored and can join the new citizen with almost no waiting period.

Various communication and transportation technologies have changed the scope of family formation across international boundaries; in contemporary society, it is possible to meet, ostensibly date, and marry someone who lives outside one's national boundaries. Before the advent of the Internet, for example, American citizens could initiate long-term relationships with foreign nationals through international catalogues of personal advertisements. Such practices date back to the colonial era, when men would send word to their families in their countries of origin of their interest and intent to marry. The families would identify and select possible mates from their home country, and send pictures and descriptions to their sons, who would then pay to have that person sent to the United States. The use of similar catalogue systems grew at the end of the Cold War; women who came to the United States to marry through such a process were often referred to as "mail order brides." Today, hundreds of online dating Web sites and other free and low-cost communication technologies (e.g., Facebook and Skype) connect individuals across international borders, enabling the development of intimate relationships globally on a scale that has never before been experienced.

The potential to abuse the immigration process for personal gain via green card marriages is not new. It is so common, in fact, that references to it have routinely been made in popular movies and television shows for decades. Most often, media depictions of green card marriages come in the form of romantic comedies. The popular films Green Card (which was nominated for an Academy Award), French Kiss, Wayne's World II, and, more recently, The Proposal depict marriages of convenience; each includes a story of a couple who consider marrying, or do wed, with the intent to garner permanent residence for one of the movie's characters who is at risk of deportation.

Popular television references to green card marriages can be pulled from shows from the 1970s to the present day, including Taxi; Wings; Beverly Hills, 90210; The Martin Lawrence Show; Friends; Will and Grace; Desperate Housewives; That '70s Show; Reaper; and New Adventures of Old Christine.

Despite their often romantic and humorous pop-culture representations, green card marriages have the potential to put an already susceptible population (the foreign-born) at further risk. Research indicates that men are more likely to marry and sponsor foreign-born spouses, creating and reinforcing gendered power dynamics that disserve women by putting them at increased risk for maltreatment.




Others contend that such unions create the means for trafficking vulnerable populations on a global scale.

Green card marriages raise serious and fundamental questions about the integrity of U.S. immigration policies and procedures. The federal government's preference for promoting families in a rapidly globalizing world is called into question when fraudulent marriages highlight loopholes in the system that disadvantage others.

Bethany Willis Hepp
Towson University

See Also: Anchor Babies; Immigrant Families; Immigration Policy.

Further Readings

Lewis, L. N. How to Get a Green Card. Berkeley, CA: Nolo, 2005.

Merali, N. "Theoretical Frameworks for Studying Female Marriage Migrants." Psychology of Women Quarterly, v.32 (2008).

Monger, R., and J. Yankay. "U.S. Legal Permanent Residents: 2011." http://www.dhs.gov/xlibrary/assets/statistics/publications/lpr_fr_2011.pdf (Accessed December 2012).

Stevens, G., H. Ishizawa, and X. Escandell. "Marrying Into the American Population: Pathways Into Cross-Nativity Marriages." International Migration Review, v.46/3 (2012).

U.S. Citizenship and Immigration Services. "Green Card." http://www.uscis.gov/greencard (Accessed March 2014).

Groves Conference on Marriage and the Family

The Groves Conference on Marriage and the Family was started by sociologist Ernest Rutherford Groves in 1934 in Chapel Hill, North Carolina. It was originally named the Conference on the Conservation of Marriage and the Family, which reflected the general belief that the social institution of the American family needed saving. At the time, experts surmised that new standards of love, companionship, and gender equality within marriage led to increasing rates of divorce as those high expectations often went unfulfilled. Groves believed that educating young adults about marriage and family life would help to preserve this social institution amid these changes. His conference brought together a select group of those who taught about marriage and the family to foster the exchange of ideas and methods. The conference attendees ranged from college professors of home economics and sociology to ministers involved in marriage preparation. As family life education and marriage counseling became more widespread in America—partially because of Groves's pioneering efforts—the annual conference came to include other types of marriage and family educators and counselors, as well as researchers of marriage and family life from a range of disciplines.

Ernest Groves (1877–1946) received degrees from Yale Divinity School and Dartmouth College and served as a minister before beginning his work as a professor. Over the course of his academic career, he served on the faculty at multiple colleges and universities and wrote many articles and books on family life. He is cited as offering the first credit-bearing course on preparation for family living, developing the first graduate-level program in marriage and family life, and publishing the first college textbook for marriage preparation courses. From 1927 until his death, he served as professor of sociology at the University of North Carolina, and there he hosted the early Groves Conferences. He also served as a leader of the Federal Council of Churches' committee on the family from 1938 to 1940, as president of the National Council on Family Relations in 1941, and as president of the American Association of Marriage Counselors in 1942.
Through his academic work, the Groves Conference, and his personal involvement in other organizations dedicated to studying and improving family life, Ernest Groves helped to build family studies and marriage counseling as distinct fields.

Because segregation did not allow white and black people to meet together at the University of North Carolina, Ernest Groves and his wife, Gladys Hoagland Groves, decided in 1942 to create a parallel conference for black educators of marriage and the family. This conference was directed by Gladys Groves and was held at the North Carolina College for Negroes in Durham, North Carolina. Although it did not share the emphasis on research development, the conference in Durham invited many of the same white speakers as the Chapel Hill conference to discuss the practical aspects of teaching and counseling. After Ernest's death in 1946, Gladys became the director of both conferences. To alleviate this overlap in program and leadership, the two conferences combined in 1952. Despite the desire for integration, this led to a decrease in black participation because of the white-dominated, research-heavy program, the limited number of black people on the invitation list, and increased travel expenses, especially as the conference moved to locations outside the south to accommodate a racially integrated participant list. It was not until the 1960s and 1970s that black participants became more prominent on the conference programs. After the conferences combined, Gladys served as a codirector until her death in 1980, and control over the program passed to alternating program chairs.

The Groves Conference remained intentionally small and encouraged long-term participation to create an intimate atmosphere for intellectual exchange. The formal reading of conference papers was discouraged in favor of small-group discussions and workshops. New members were voted in to replace lapsed members. This created an elite group that prided itself on being on the cutting edge of marriage and family work. It lacked a bureaucratic structure and elected officers, which resulted in a more organic development but also sporadic record-keeping and occasional instability. In 1969, the conference had to be cancelled, and some worried that it was the end of the Groves Conference, until several members took the initiative of hosting a conference the following year. They instituted the office of president to create more stability, and in the years since, members have adopted bylaws and an incorporated status to ensure the continuation of the Groves tradition.
Although the emphasis of the Groves Conference remained on teaching, counseling, and research throughout most of its life, the themes changed based on the interests of each program chair. Annual conference themes for exploring marriage and the family have included parenthood, divorce, health and illness, wartime problems, international and intercultural developments, gender roles, sexuality, family policy, stress, happiness, technology, immigration, aging, intergenerational relations, economic realities, genetics, single parenting, the environment, and the criminal justice system. Since 2001, some themes have focused on specific cultures, regions, and ethnicities, including conferences on Alaskan, Cuban, and Native American families; families on the border of the United States and Mexico; and Ireland's families.

The Groves Conference celebrated its 50th and 75th anniversaries in 1984 and 2009, respectively, resulting in two books. Several institutions house archival records related to the Groves Conference, including the Social Welfare History Archives at the University of Minnesota, the North Dakota State University Archives, and the Merrill-Palmer Institute at Wayne State University.

Kristy L. Slominski
University of California, Santa Barbara

See Also: Education, College/University; Family Life Education; National Council on Family Relations.

Further Readings

Dail, Paula W., and Ruth H. Jewson, eds. In Praise of Fifty Years: The Groves Conference on the Conservation of Marriage and the Family. Lake Mills, IA: Graphic Publishing, 1986.

Groves Conference on Marriage and Family. http://www.grovesconference.org (Accessed November 2013).

Rubin, Roger H., and Barbara H. Settles, eds. The Groves Conference on Marriage and the Family: History and Impact on Family Science. Ann Arbor: MPublishing, University of Michigan, 2012. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=groves;idno=9453087.0002.001 (Accessed December 2013).

H Half-Siblings Estimates are that 16 percent of children in the United States reside in stepfamilies. Of those, 67 percent live with a half-sibling, a sibling with whom they share only one biological parent. Increasingly, both men and women are having children with multiple partners, resulting in an increase in the number of children growing up with half-siblings. These estimates likely fall below the true incidence of halfsiblings, as they do not account for those living in separate households. Previously, half-siblings were most often the result of a mutual child born into a married stepfamily in which either the mother or father had children from a previous relationship. More recently, because of high rates of nonmarital childbearing, half-sibling relationships are increasingly being created through one or both parents’ multipartner fertility outside of the context of committed, cohabiting, or married relationships. Thus, children are more likely than before to have one or more half-sibling relationships that span multiple residences. Half-Siblings’ Roles and Relationships For half-siblings who are born into stepfamilies, how their entrance into the family is perceived by stepchildren may vary by the developmental stage of the stepfamily and the age of the stepchild. Research suggests that the birth of a half-sibling

is viewed as a more positive event in stepfamilies of longer duration because they are likely to have established clearer family roles and boundaries and stronger bonds between the stepparent and stepchild. Children under the age of 5 are more likely to experience the birth of a half-sibling similarly to full siblings in nonstepfamilies, and their relationships develop similarly to full-sibling relationships among children with the same age difference. Stepchildren over the age of 10 are most receptive to a new half-sibling, particularly only children or the youngest children, who gain the opportunity to demonstrate their greater maturity and to help care for their younger sibling. This transition can be most difficult for school-aged children, who are likely to feel displaced and discriminated against compared to the new child in the family. Overall, girls tend to be more welcoming of half-siblings than boys, yet regardless of the timing, the majority of stepchildren’s concerns regarding younger half-siblings do not differ from those among full siblings. Much of the research on siblings in complex family configurations has focused primarily on stepsiblings. However, overall it appears that half-sibling relationships are more similar to those of full siblings than to stepsiblings. Half-siblings know one another from the day that the younger sibling is born, compared to stepsiblings, where each child has spent a portion of his or her life without the other. Further, the arrival of a half-sibling is


expected and anticipated, compared to stepsiblings. Half-siblings also share one biological parent, and thus are more likely to have similar physical and personality characteristics. Generally, half-siblings are more likely than stepsiblings to think of each other as family. They report levels of positivity in their relationships similar to those of full siblings and greater positivity than stepsiblings. Younger children are less likely to discriminate among family relationships based on biological relatedness and tend to feel equally close to full, half-, and stepsiblings. However, from later childhood through adulthood, biological relatedness is more important, and half-siblings tend to have closer relationships than stepsiblings, but slightly more distant relationships than full siblings. When half-siblings share the same biological mother, they report feeling more like “real” siblings than those who share the same biological father. Those who spend more time together in the same residence are more likely to regard half-siblings the same as full siblings. Living in the same household is also related to greater similarity among half-siblings in temperament and more positive sibling relationships overall. In adulthood, half-siblings tend to have more frequent contact than do stepsiblings, yet less contact than full siblings. Interestingly, contact among adult siblings seems to be associated with the complexity of sibling composition. First, the presence of a half-sibling is related to the amount of contact among full siblings in the family: full siblings with half-siblings have less frequent contact than those without; yet, half-siblings also report more contact when there are no full-sibling relationships. Thus, greater family structural complexity may lead to more difficulty in managing sibling relationships later in life.

Half-Siblings and Relationships With Parents

Differential parenting of children occurs in all family types.
However, due to differences in biological relatedness to parents, half-siblings are more likely to experience variations in their relationships with the nonshared parent. Generally, parents tend to have more positive and closer relationships with their biological children than with their stepchildren, and compared to their half-siblings, stepchildren report less warmth and closeness in relationships with stepfathers who are biological fathers to their half-siblings. Although some of these

differences suggest that stepchildren fare worse in their families compared to their half-siblings, the presence of a half-sibling has also been associated with greater involvement by stepfathers and higher-quality relationships among stepparents and stepchildren. Research findings suggest that children’s perceptions of relationships with parents are more similar among half-siblings compared to stepsiblings, but less similar compared to full siblings. This is because how individuals perceive their relationships is partially genetic, and so half-siblings, who share only 25 percent of their genes, are expected to have more similar perceptions than stepsiblings, who do not share any genes, and less similar perceptions than full siblings, who share 50 percent of their genes.

Half-Siblings and Individual Well-Being

Differences have been found between stepchildren and their half-siblings, who live with both biological parents, when it comes to several indicators of well-being. In stepmother households, stepchildren complete less education compared to their half-siblings, who are the mother’s biological children. Stepchildren in these families also engage in fewer extracurricular activities and are more likely to have been suspended from school than their half-siblings. Findings suggest that stepmothers without biological children may invest less in their stepchildren’s well-being by spending less money on their education and health care. However, at the household level, stepchildren tend to benefit from the birth of a half-sibling, as household expenditures in these areas are likely to increase. A few recent studies have also suggested that the presence of a half-sibling may have a negative effect on various outcomes (e.g., academic achievement, educational attainment, and delinquency) for both stepchildren and children living with both biological parents.
However, most of the effects that have been found have been small and diminish after accounting for other parent, child, and family characteristics, such as children’s age, race, parents’ education, and family income. Current research findings do not provide sufficient evidence that there is anything inherently detrimental about the presence of half-siblings for individual well-being. Chelsea L. Garneau Auburn University




See Also: Multiple Partner Fertility; Remarriage; Stepchildren; Stepparenting; Stepsiblings.

Further Readings
Baham, Melinda E. et al. “Sibling Relationships in Blended Families.” In The International Handbook of Stepfamilies: Policy and Practice in Legal, Research, and Clinical Environments, Jan Pryor, ed. Hoboken, NJ: John Wiley & Sons, 2008.
Gennetian, Lisa A. “One or Two Parents? Half of Step Siblings? The Effect of Family Structure on Young Children’s Achievement.” Journal of Population Economics, v.18 (2005).
Halpern-Meekin, Sarah and Laura Tach. “Heterogeneity in Two-Parent Families and Adolescent Well-Being.” Journal of Marriage and Family, v.70 (2008).

Hall, G. Stanley

Few psychologists have had a greater impact on American psychology than Granville Stanley Hall. A pioneer in the fields of psychology and education, Hall has been referred to as the founder of organized psychology, the father of the child study movement, and the founder of child psychology and educational psychology. As a leader in educational reform, he linked genetic psychology and adolescence to child education. His interests focused on childhood development, adolescence, evolutionary theory, and the narrowing role of the elderly in the family. Hall’s work focused on the connections between child study, schools, teachers, and educational reform in order to improve the conditions of children and families. His 1904 book Adolescence was very influential with parents, teachers, and child welfare professionals. The book aimed to provide teachers and administrators with scientific tools to rationalize and improve education. Hall is credited with introducing into the psychological discussion of his time the ideas of Charles Darwin and Sigmund Freud. Hall was the first person to be awarded a doctorate in psychology in the United States. He was the first president of the American Psychological Association and the first president of Clark University. Hall also founded several psychology journals, including the American Journal of Psychology, the Journal of Genetic Psychology (formerly the Pedagogical Seminary), and the Journal of Applied Psychology.

Biography

Granville Stanley Hall was born on February 1, 1844, in Ashfield, Massachusetts, a small farming community. He was born into an old New England, Puritan family. His mother, Abigail Beals Hall, was religious and warm. His father, Granville Bascom Hall, was stern, yet tender toward his son. Both parents recognized his intellectual talents early and encouraged him, hoping that their oldest son would become a minister. Literature, oratory, and music were special interests of Hall as a child. Hall married Cornelia Fisher, a young art student whom he had met in Germany, in September 1879. They were married in Berlin and returned to the United States the following year. Together, they had two children. A son, Robert Granville Hall, was born on February 7, 1881. A daughter, Julia Fisher Hall, was born May 30, 1882. While Hall was away in May 1890, his wife and daughter were accidentally asphyxiated. The tragedy deeply affected Hall. Hall married a second time, to Florence E. Smith, in July 1899. Unfortunately, his second wife showed signs of an emotional problem. He died on April 24, 1924, at his home in Worcester, Massachusetts.

Education and Work

In 1862, Hall left Ashfield for Williston Academy. He was not happy there, so he transferred to Williams College. There he studied religion, romantic literature, and philosophy. He graduated in 1867. Hall then enrolled at Union Theological Seminary in New York City. He was ambivalent about becoming a clergyman and began to think about becoming a professor of philosophy and studying in Germany. In 1869, this became a reality for Hall. Through his studies, primarily at the University of Berlin, Hall became convinced that he should become a philosopher. However, after a little more than a year abroad, Hall was forced to return to the seminary to complete his studies due to a lack of funds. Hall remained in New York for two years as a private tutor. He then took a position at Antioch College in Ohio, where he taught modern languages, English literature, and eventually philosophy for four years. In 1874, Hall developed an appreciation for the work of Wilhelm Wundt and even moved back to Germany to study under him.

658

Hanukkah establish Clark University in Worcester, Massachusetts, where he became a major force in shaping experimental psychology as a science. In 1904, Hall published a two-volume piece, Adolescence. Hall’s beliefs about adolescent development were based on evolutionary psychology, which assumed the inheritance of acquired characteristics and memories. He also argued that sexuality, masturbation, and religious conversion are normative in adolescence. Hall’s interest in these topics led him to become acquainted with the work of Freud. As a proponent of psychoanalysis, Hall invited Sigmund Freud and Carl Jung to participate in the Clark Conference in 1909. Hall published 489 pieces of his work. The areas of research covered most of the major areas of psychology. Such topics included Educational Problems (1911), Jesus, the Christ, in the Light of Psychology (1917), Senescence, the Last Half of Life (1922), and Life and Confessions of a Psychologist (1923), his autobiography.

G. Stanley Hall was a pioneer in the fields of psychology and education. He was referred to as the founder of organized psychology, child psychology, and educational psychology.

Eventually, Hall enrolled in the philosophy department at Harvard University to pursue a doctorate degree. There, Hall worked closely with William James, the father of American psychology. In June 1878, Hall was awarded the first Ph.D. in psychology in the United States. His dissertation topic was “The Muscular Perception of Space.” Hall’s work served as the foundation of the functionalist movement in the United States. From July 1878 to September 1880, Hall studied at Berlin and Leipzig, exploring psychopathology and physiology. Hall was drawn to the work of Ernest Haeckel, who had developed ideas related to the work of Charles Darwin. Their work concerning recapitulation, the notion that a developing individual repeats the development of the species, would be found in Hall’s later work. A lectureship in philosophy and a professorship in psychology and pedagogy at Johns Hopkins University followed in 1883. There, Hall developed the first psychological laboratories in the United States. In 1888, he helped

Joanne Ardovini Metropolitan College of New York See Also: Adolescence; Evolutionary Theories; Freud, Sigmund; Functionalist Theory; Gesell, Arnold Lucius; Psychoanalytic Theories. Further Readings Averill, Lawrence A. “Recollections of Clark’s G. Stanley Hall.” Journal of the History of the Behavioral Sciences, v.26 (1990). Hothersall, David. History of Psychology. 3rd ed. New York: McGraw-Hill, 1995. Rush, N. Orwin, ed. Letters of G. Stanley Hall to Jonas Gilman Clark. Worcester, MA: Clark University Library, 1948.

Hanukkah

Hanukkah (also spelled Chanuka, among other spellings) is an eight-day holiday celebrated by Jews beginning on the 25th of Kislev on the Jewish calendar, which typically falls sometime from late November to the end of December on the Gregorian calendar. The holiday is known as the Festival of Lights, and



commemorates the Maccabean victory over the Greek Syrian rulers of Jerusalem and surrounding lands in the 2nd century b.c.e. During that time, the ruler Antiochus desecrated the Second Temple, forbade Jews from practicing their religion, and attempted to force Jews to offer ritual sacrifices of swine. Jews banded together under Judah Ha-Maccabee and fought to regain control of the Temple. When the Jews triumphed and went to rededicate the Temple, they could find only one cruse of oil, but the rededication would require eight days of oil. A miracle occurred in that the oil that was only to last one day continued to burn for eight days—hence, the eight days of Hanukkah, and the literal meaning of Hanukkah as “dedication.”

Lighting the Menorah

Jews light a candelabra with nine candleholders, referred to as a menorah (Hebrew for “lamp”) or a hanukkiah (a lamp for Hanukkah). One candle is set apart, called the shamash (Hebrew for “attendant”). This candle is lit first and is used to light the subsequent candles. One candle is lit per night, adding one per night successively, until all eight of the other candles are lit on the final night of Hanukkah. Prayers commemorating the miracle of Hanukkah are said when lighting the candles each night. The menorah is supposed to be placed in a location visible to others in the community. Today, it is not unusual for electric menorahs to be on display in the windows of homes.

Games

Families play dreidel (a Yiddish word meaning “top”), which involves a four-sided top with the Hebrew letters nun, gimel, hay, and shin on each side. The letters are an acrostic for nes gadol haya sham (Hebrew for “a great miracle happened there” [Israel]).
In Israel, the dreidels have letters representing nes gadol haya poh, “a great miracle happened here.” The game emerged because, under Greek rule, Jews were not allowed to study the Torah, and to hide their communal study, they would play a game common among the Greeks at that time by casting lots and betting. To play the game, each player needs a pool of tokens, which may be candies, coins, or other items, and with each round, the players ante one token to create a kitty. One player spins the dreidel


and the corresponding letter indicates the action to take place. If the dreidel lands on nun, the spinner receives nothing. If it lands on gimel, the player receives all tokens in the pot. If it lands on hay, the player receives half the tokens in the pot. If it lands on shin, the player has to put in one token. The game continues until there are no more players.

Songs

Traditionally, “Ma’oz Tzur” (the Hebrew precursor of the modern song “Rock of Ages”) is sung after candlelighting. The song includes themes of salvation, struggles of Jews, and praise to God for survival, a theme of Hanukkah.

Foods

Because of the association between Hanukkah and oil, many Jews prepare foods that include oil or frying. One of the common foods that Jews prepare at this time is latkes, which are potato pancakes fried in oil. These potato pancakes are typically topped with sour cream or applesauce. In Israel, it is common to purchase or prepare sufganiyot, jelly-filled doughnuts (which are also fried in oil). In Sephardic tradition, bimuelos (Ladino for fritters), fried dough fritters flavored with sugar, honey, or flavored syrup, are prepared. Traditionally for Jews, Hanukkah has been a minor holiday. Historically, Hanukkah was not a pilgrimage holiday that necessitated visiting the ancient temple in Jerusalem; therefore, there is no designated service conducted at synagogue, and Hanukkah celebrations are typically conducted at home. Generally, money has been given to children for Hanukkah. This money is referred to as gelt, the Yiddish term for “money.” Today, gold or silver foil-covered chocolate coins are often given to children as treats during Hanukkah. During the later 20th century, Hanukkah became more prominent, and gift giving began in response to the commercialization of Christmas. Now, Hanukkah decorations, merchandise, foods, and other related products can be found in many major retailers. Gifts are typically given during the eight nights of Hanukkah.
December Dilemma

Families may experience the “December dilemma,” which refers to the co-occurrence of Hanukkah and Christmas in light of the dominance of Christmas


celebrations, festivities, paraphernalia, and media. Jewish children may want to engage in Christmas traditions that they see displayed around them and in schools. Also, many non-Orthodox Jewish American families are interfaith, with one spouse Jewish and one spouse Christian. Because of this, celebrating Hanukkah may become more challenging when Christmas is omnipresent. Families may choose to pay particular attention to the distinctiveness of the Jewish holiday from Christmas, return to a simpler celebration, or emphasize the major Jewish holidays at other times of the year to balance the commercial focus on Christmas in December. Robert S. Weisskirch California State University, Monterey Bay See Also: Judaism and Orthodox Judaism; Passover; Religious Holidays. Further Readings Anti-Defamation League. “The ‘December Dilemma’: December Holiday Guidelines for Public Schools.” http://archive.adl.org/religious_freedom/resource_kit/december_holiday_guidelines.asp (Accessed December 2013). Ashton, Dianne. Hanukkah in America: A History. The Goldstein-Goran Series in American Jewish History. New York: New York University Press, 2013. International Fellowship of Christians and Jews. “Hanukkah.” http://www.ifcj.org/site/PageNavigator/eng/inside/hanukkah#whatis (Accessed December 2013).

Head Start

Since 1965, the federal agency Head Start has funded local programs to work with low-income families to promote the school readiness of their preschool children (ages 3–5 years) by enhancing parenting skills and children’s cognitive, social, and emotional development. (Early Head Start has served low-income children [ages 0–3 years] and pregnant women and their families since 1995.) All of the following statistics are from 2012, for Head Start only.

Structure

In 1966, Head Start was an eight-week summer program designed to help break the cycle of poverty. Now typically running for nine months, Head Start consists of many different organizations, such as local programs in school districts, public agencies, Indian tribes, nonprofit organizations, and, since 1998, for-profit organizations. The federal government funds Head Start through competitive grants ($6.4 billion to 1,800 organizations in 2012), and these local programs must provide volunteer hours and in-kind resources. Some states, cities, and foundations provide additional money. Without a national curriculum or standardized services, the wide range of organizations and local adaptations results in vastly different Head Start experiences for participating children. For example, these programs can be in classroom settings, children’s homes, and/or family childcare homes. Most children (96 percent) are enrolled in classroom settings in schools or community centers (48 percent full day and 48 percent part day), and parents participate in at least two home visits each year. In some programs, staff work directly with parents and children at their home each week and organize group socialization activities twice a month (home-based option, 2 percent of children). In other programs, the staff provides services in a family childcare setting (0.2 percent of children). Finally, some programs use a combination of these options (1 percent).

Participants

Head Start serves a wide range of low-income children (848,000 in 2012) and parents. Eligibility is limited to children from families below the poverty level (if extra space is available, it is open to families that earn less than 130 percent of the federal poverty level; up to 10 percent of any program’s enrollment can be from higher-income families or families experiencing emergency situations).
Head Start has served migrant children since 1968, children with disabilities since 1972 (at least 10 percent of national enrollment), limited English proficient children since 1977, and homeless children since 2007. Head Start includes 245,000 staff members and 1.3 million volunteers. Of the total staff, about half consists of professional staff, including teachers, assistant teachers, home visitors, and family childcare providers. Many staff members are proficient



in a language other than English (30 percent), and most teachers earned degrees in early childhood education or related fields (31 percent associate degrees, 51 percent bachelor’s, and 11 percent master’s or doctorates). Essential to many Head Start activities, volunteers include 867,000 parents of Head Start children.

Services

While services differ across Head Start programs, each local program assesses and caters to the needs of its client families. Typically, these needs include parenting practices and children’s health, learning, and social-emotional development. Because of migrant families’ work constraints, for example, migrant and seasonal Head Start service hours are longer, but for fewer months, than other Head Start services. Likewise, many programs adapt to the children’s ethnic, cultural, and linguistic heritages, with the help of local community staff and parent volunteers. Head Start programs have been reviewed based on performance standards since 1974, and their school readiness goals have been aligned with early learning state standards since 2007. To improve parenting practices, Head Start helps parents build social connections to access community resources and enhance their knowledge of parenting and child development to increase parent resilience and nurture their children. In addition to connecting families to one another (e.g., parenting support groups) to pool their resources and skills, Head Start links them with community organizations (e.g., government agencies and churches), especially those with linguistically or culturally appropriate services. Furthermore, Head Start often offers parenting workshops, home visits, newsletters, and resource Web sites.
These can help parents understand their children’s development (e.g., wears Mom’s clothes or tells stories), anticipate problems (bedwetting and peer pressure), learn parenting strategies (empathic listening and conflict resolution), build up their resilience to challenges (child is not in schoolyard), and nurture their children (talk and play). Compared to other parents, those who are more involved with Head Start have greater knowledge of available social services and resources, more confidence in their coping abilities, greater life satisfaction, and less anxiety, depression, or sickness. Head Start programs initially offered health services on site, but now most programs help families


apply for health care funding (e.g., Medicaid) and find and enroll in health service programs provided by other providers/brokers. Through hands-on workshops, expert guest speakers, and newsletters, Head Start helps families recognize symptoms of common illnesses, develop good eating habits, prepare emergency first aid and hazard kits, engage in safety practices, and learn general health information. Children in Head Start are as likely to have health insurance and thus be as healthy as their wealthier peers in private preschool or who stayed home with a caregiver. Because hungry children are lethargic and less attentive, Head Start typically provides breakfast or lunch to aid their learning and social-emotional development. Children in Head Start learn letters, words, counting, and other concepts through social activities (songs, story readings, discussions, and role-playing) within a locally chosen curriculum. Regular assessments of children’s skills help target teaching to their specific needs. Children in Head Start academically outperform their wealthier peers who attend private preschools or stayed home with a caregiver. However, these cognitive advantages fade within a few years, disappearing completely in some studies but not in others. This fading may occur because children in Head Start are poorer, often racial minorities, and are more likely to attend lower-quality public primary schools after Head Start, which might disadvantage them compared to other children. Still, adults who had attended Head Start were more likely than others in the same demographic group to graduate from high school and attend college, and they were less likely to be retained a grade or placed in special education. Through guided social learning and play activities, children can recognize one another’s social cues, overcome social anxieties, learn rules about appropriate behaviors, and develop social and conversation skills. 
Teachers report more emotional problems in Head Start children and poorer relationships with them, compared to other children. In contrast, parents report that their children’s problem behaviors decreased after enrolling in Head Start. These opposite results may reflect different behaviors or expectations across settings. Meanwhile, studies of adults show that women who were in Head Start were less likely to have out-of-wedlock births or to be teen mothers, compared


to other women with similar demographics. Similarly, black American adults who attended Head Start were less likely to be arrested or charged with a crime (with notably fewer felonies) than other black Americans. Ming Ming Chiu Michael Pawlikowski State University of New York, Buffalo

Health Care Power of Attorney

A health care power of attorney (DPAHC)—also called a durable power of attorney for health care, health care representative, health care agent, or health proxy—is a legal document that allows someone to designate another person to make all health care decisions for them if they are determined unable to make decisions. This document was established as part of the implementation of the Patient Self-Determination Act (PSDA) in the 1990s. A health care power of attorney is different from a living will. The living will is a legal document that describes an individual’s specific instructions about the desire to have or not to have certain life-sustaining procedures, such as feeding tubes and artificial respiration, administered for the purpose of prolonging life when permanently incapacitated or unable to voice those preferences. A health care power of attorney covers all health care decisions, with some limitations, and addresses deathbed

issues. A health care power of attorney is a document that is signed by a competent adult to designate an individual to make health care decisions on his or her behalf, should the adult be unable to make such decisions. Under this agreement, the designee is given wide latitude in considering treatment options on the individual’s behalf. The designee who is chosen as the decision maker needs to be a legal adult, usually over the age of 18. The document gives the person who has been named as the designee the authority to make all health care decisions, in accordance with the individual’s wishes, when a doctor certifies that the person lacks the capacity to make health care decisions. Lacking capacity usually means that one cannot understand the nature and consequences of the health care choices that are available, or that one is unable to communicate wishes for care, either orally, in writing, or through gesture, such as while under anesthesia. The designee assigned the power of attorney does not have the authority to override what the individual would want, unless the wishes are specifically revoked or there is a court action to address the health care needs. The designee should be knowledgeable about the wishes, values, and beliefs of the adult for whom he or she holds the health care power of attorney. In the event that the designee does not know the wishes of the adult, the designee is given the responsibility to make the health care decisions based on the individual’s best interest. Each state sets different rules regarding these legal documents, and a document drawn in one state may not be recognized in another. Depending on the state, these documents may be created by an attorney, or by the individual using witnesses or a notary. Family members need not be the designee, which allows for the designee to be located in the same community and to be more aware of the health issues of the patient.
Multiple designees can be named in most states, which allows for shared input, but the first listed designee is considered primary. In most states, a medical power of attorney is immediately effective after it is executed and delivered to health care providers. It is recommended that an individual’s physician and primary care facilities receive copies of these documents. In most instances, the designee cannot consent to the




commitment to a mental institution, convulsive treatment, abortion, or neglect of comfort care. In most cases, the document allows the designee to talk to health care providers and health care insurers on behalf of the patient. A health care power of attorney can be revoked or changed by a competent adult at any time. Generally, a health care power of attorney is no longer necessary when the adult has died. In some states, however, the health care directives remain in effect after the death for some limited purposes. The designee may be granted the power to supervise the disposition of the body. The designee may authorize an autopsy or organ donation, unless the individual has specifically withheld these powers when the health care documents were made. In most states, the attending physician or other health care providers will not be subject to civil or criminal liability, or disciplinary action, if any act or omission is performed in good faith under the direction of the designees who have medical power of attorney, provided the act or omission does not constitute a failure to exercise due care in the provision of health care services. A discussion of health care wishes and the health care power of attorney should be held between the patient and the family before it is needed. The family should ensure that the documents exist, and should understand why the person was chosen as the designee. The designee should know what level of interaction the family expects. The health care designee needs to understand which medical treatments the patient would want to receive or refuse, and under what conditions those decisions should be made, before agreeing to serve in the role. Janice Kay Purk Mansfield University See Also: Assisted Living; Caring for the Elderly; Health of American Families; Wills. Further Readings American Bar Association.
“Giving Someone a Power of Attorney for Your Health Care.” http://www.americanbar.org/content/dam/aba/uncategorized/2011/2011_aging_hcdec_univhcpaform.authcheckdam.pdf (Accessed November 2013).
Ashley, R. C. “Why Are Advance Directives Legally Important?” Critical Care Nurse, v.25/4 (2005).


Clites, J. “Durable Power of Attorney for Healthcare.” Pennsylvania Nurse, v.64/4 (2009).

Health of American Families

The health of American families affects many aspects of their lives and many portions of society. Because health, or the lack thereof, influences the economic, political, social, and educational future of both the family and society, it has been a matter of keen interest for more than a century. The health of American families is influenced by a variety of systems, including regulatory, financial, political, and bureaucratic systems; whenever a change occurs in one of these systems, it has repercussions for the rest. Technological and scientific advances have made it possible to prolong life where this was once impossible, which has dramatically raised many of the costs associated with health care. Although the Affordable Care Act seemingly resolved many issues related to the health of American families, the continued interaction of competing interests will keep this issue of primary interest to many.

Background
Throughout history, health has been a major concern to individuals, families, and the societies in which they lived. Despite this concern, until midway through the 19th century, medical care provided little more than care and tending when individuals were ill. No methods or treatments existed that could prevent illness or cure infection. A few individuals, some with little or no formal training, had reputations as healers who could diagnose problems or suggest to sick people what to do to improve their health. Sometimes, these healers could set broken bones, pull decayed teeth, or provide herbal mixtures that offered some relief to the suffering. Beyond these primitive measures, little could be done to improve the health of the ill. As a result, in 1850 life expectancy was 38 years for white males and 40 years for white females; that same year, it was 32 years for nonwhite males and 35 years for nonwhite females.

Beginning in the mid-1850s, scientific advancements resulted in positive changes that affected patients' health for the better. Physicians began to receive training in more systematic analysis of patients' symptoms, which resulted in better diagnoses of ailments. Powerful new discoveries, such as anesthesia and antiseptics, permitted the development of aseptic operating theaters, which in turn made beneficial surgeries possible. Cures were developed for certain endemic infectious diseases, which meant that physicians could offer relief to those affected.

Advances in chemistry, coupled with improvements in laboratory techniques, led to a revolution in health care. Bacteriology and virology, grounded in science, developed and replaced older ideas of infectious disease epidemiology that were based on theories, some of them false. These developments, such as germ theory, changed the health of American families for the better. The death rate of new mothers from childbed fever, for example, was reduced by 90 percent by the simple precaution of having physicians wash their hands before delivering babies. Louis Pasteur, the French chemist, conducted laboratory work that linked microorganisms to disease and developed the pasteurization process, which heats food to a specific temperature to slow spoilage and reduce the number of pathogens that cause disease. These changes dramatically improved public health and created keen interest among the public in finding other ways of protecting families from disease and infection.

In 1906, Upton Sinclair published The Jungle, a muckraking novel that exposed the questionable practices of the meatpacking industry. The uproar caused by The Jungle created pressure on the federal government to take steps to protect the public from unsanitary food.

Governmental Regulation
Much of the public was outraged at Sinclair's disclosures about unsanitary conditions in packing houses and slaughterhouses.
In response, the U.S. Congress passed several laws intended to protect the health of American families. The Federal Meat Inspection Act of 1906 (FMIA) sought to prevent the adulteration or misbranding of meat products. FMIA also sought to set up a regulatory system that would provide inspections by government agents to ensure that animals were slaughtered and meat was

processed under sanitary conditions. Specifically, FMIA gave the federal government jurisdiction over meat placed in interstate commerce and sought to enforce four requirements:

• Mandatory inspection of livestock before slaughter
• Mandatory postmortem inspection of every animal carcass
• Sanitary standards for all slaughterhouses and meat processing plants
• Monitoring and inspection of slaughterhouses and meat processing plants by the U.S. Department of Agriculture

This regulation of the meat slaughtering and processing industries has continued to the present day and was extended to poultry by the Poultry Products Inspection Act of 1957.

The Pure Food and Drug Act of 1906 was also passed as part of an effort to improve public health by reducing the chance of contamination. The first federal legislation to regulate food and drugs, the Pure Food and Drug Act defined “adulteration” and “misbranding” for the first time, seeking to ensure that consumers knew what they were purchasing. The act sought to enforce truth in labeling, which, it was felt, would raise production standards for food and drugs and reward businesses that engaged in honest practices. Establishing the agency that came to be known as the Food and Drug Administration (FDA), the act designated 10 ingredients as dangerous and required that they be listed on the label of all bottles or tins containing them: alcohol, morphine, opium, cocaine, heroin, alpha or beta eucaine, chloroform, cannabis, chloral hydrate, and acetanilide.

These laws were augmented over the years by a variety of statutes that followed, creating a system in which food and drugs are regulated by the federal government. Government regulatory systems help to assure the health of the American public. New drugs must be tested before they are approved for wider distribution.
Food additives are tested, and those found to pose a risk to the public are banned or restricted. Medical devices are also tested, and



when a health risk to the public is detected, they are often recalled. While the regulatory system is sometimes criticized for being too restrictive and at other times for being too lax, it has done a great deal to improve the health of American families.

Access to Medical Care
Providing access to medical care has long been a contentious issue in the United States. The first attempt to offer health care to those unable to afford it was the Bill for the Benefit of the Indigent Insane, which was approved by both houses of Congress in 1854 but vetoed by President Franklin Pierce. Pierce believed that health care was the responsibility of the states and that the federal government should not become involved in providing social benefits to the poor. At the conclusion of the Civil War, the federal government took steps to establish a system to provide medical care to the recently freed slaves. The Freedmen's Bureau built 40 hospitals

French chemist Louis Pasteur. The health of American families in the mid-1800s was dramatically improved because of his work developing the pasteurization process.

and engaged a team of more than 100 physicians to provide medical care to former slaves. Although the Freedmen's Bureau was viewed as highly successful, it was shut down after 1870.

During the Great Depression, President Franklin D. Roosevelt sought to include provisions in Social Security legislation that would have assured some poor families access to federally funded medical care. Vigorously opposed by the American Medical Association (AMA), Roosevelt was forced to withdraw his proposal so that the Social Security legislation could be passed. During the 1930s, many hospitals recognized the difficulty that many patients had paying for medical care and established insurance programs that would help individuals share the risk of health crises. These programs, known as Blue Cross, began offering insurance to employers, who in turn offered such coverage to their employees.

After World War II, Congress passed legislation that benefited third-party insurers. Although President Harry Truman attempted to make universal health care part of his Fair Deal legislation, the effort failed as the AMA and many physicians opposed the bill as “socialistic.” After 1951, the Internal Revenue Service (IRS) held that premiums for employees' medical insurance were deductible by employers. This change helped third-party insurers, and more employers began to offer this benefit as a means of attracting and retaining the best employees.

During the 1960s, President Lyndon B. Johnson obtained passage of legislation that created Medicare and Medicaid as part of his Great Society program. Medicare is a national insurance program that guarantees health insurance to all Americans 65 and older. Medicaid is a health program for individuals and families of low socioeconomic status (SES). Medicare and Medicaid reduced pressure on many families who struggled to find a way to fund access to medical care.
Medicare was especially popular because few elderly citizens were working and because health care was prohibitively expensive for many of them. Medicaid was also found to reduce certain health expenditures for the government because permitting low-SES families access to regular care reduced the need for costly emergency treatment. Although multiple attempts to pass legislation creating universal national health insurance were made during the 1970s, 1980s, and 1990s, all of

these efforts were ultimately unsuccessful. Although the efforts made during the administration of President Bill Clinton were popular, and both President Clinton and First Lady Hillary Clinton campaigned extensively for the program, they too proved unsuccessful. During the 2000s, President George W. Bush did not pursue universal national health insurance, although he did obtain passage of the Medicare Prescription Drug, Improvement, and Modernization Act (MPMA). The MPMA was the greatest expansion of medical care since the 1960s, providing basic prescription drug coverage to Medicare recipients for the first time. Although opposed by many conservative groups, the MPMA proved highly popular.

Patient Protection and Affordable Care Act
After his 2008 election, President Barack Obama proposed the Patient Protection and Affordable Care Act (PPACA). The PPACA represented an attempt to provide health insurance and medical care for more American families and to address the rapidly spiraling costs of the health care market. To achieve these goals, the PPACA proposed a series of reforms designed to reduce the cost of health care and to make medical insurance more widely available. These reforms involved a variety of initiatives, including the following:

• Creation of an individual insurance mandate that includes a financial penalty and guaranteed coverage
• Promotion of care coordination and patient-centered care through creation of a “medical home” to coordinate care
• Updating the Medicare physician fee schedule
• Linking payments to quality, outcomes, adherence to guidelines, and patient experience
• Establishing standards for the safety and quality of diagnostics
• Bundling payments for the treatment of chronic conditions
• Setting up a fixed-rate, all-inclusive average payment for acute care episodes

The act was trumpeted as reducing the number of uninsured Americans and making health insurance and medical care available to many individuals

and families who were previously unable to obtain either. The legislation was vigorously opposed by the Republican minority in Congress, as well as by the AMA and other lobbyists. After lengthy debate, Congress passed, and President Obama signed, the PPACA in 2010. Almost immediately, certain states, organizations, and businesses challenged the law as unconstitutional.

In National Federation of Independent Business v. Sebelius, the U.S. Supreme Court considered these challenges. In a 5–4 decision, the court determined that the PPACA was constitutional because its requirement that most Americans obtain health insurance was a valid use of Congress's authority to impose taxes. The court determined, however, that the act's expansion of Medicaid was invalid because it coerced states to accept the expansion or forgo funding for Medicaid. Despite this ruling, Republicans in Congress have continued to attempt to repeal the PPACA.

The PPACA promises tremendous changes in how American families deal with health and health care. The act expanded Medicaid eligibility to individuals and families who make up to 138 percent of the federal poverty level, which will allow many of the working poor who previously had not been eligible to qualify for Medicaid. Additionally, families and individuals with incomes up to 400 percent of the federal poverty level will be eligible for subsidies for insurance purchased on state-created insurance exchanges and the federal health insurance Web site. This will greatly expand the number of individuals with health insurance, by some estimates increasing the number of those with coverage by 32 million. It will decrease the likelihood that families must file for bankruptcy protection because of health costs not covered by insurance, and it will reduce the number of individuals who face “job lock,” the inability to leave a job because of the need for the health insurance associated with it. Because the PPACA allows children to be covered on their parents' health insurance until age 26, it has also increased the number of individuals with coverage. It is believed that the overall health of American families will improve as a result of the PPACA, because many family members will have access to health care and will consequently seek early treatment for diseases and conditions that become more difficult to treat if such help is delayed.



Other Health Initiatives
Federal reforms are not the only attempts to improve the health of American families. Several states and local governments have also taken steps to implement universal health insurance. Massachusetts, for example, passed its Health Reform Statute in 2006, which expanded coverage to many who had not previously had health insurance. This initiative proved popular, although the influx of newly insured residents placed pressure on doctors, medical groups, and hospitals that were already short staffed. Although several state legislatures have attempted to pass bills providing for a single-payer health care system, in which the government rather than insurers pays all health care costs, these efforts have failed. Such initiatives have nonetheless reduced the number of residents without health insurance, and the numbers of uninsured in states such as Massachusetts, Connecticut, and Oregon are lower than the national average.

First Lady Michelle Obama has led an initiative to reduce childhood obesity, known as “Let's Move!” This program seeks to reduce unhealthy behaviors in children and families by encouraging healthier eating habits and increased exercise. To date, the campaign has had significant success in changing the composition of lunches provided at many public schools, although these reforms have not always proven popular with the children who are served the healthier food. The campaign has been successful in involving a number of famous chefs and television personalities in efforts to improve the quality of food served at schools, and it has increased attention to the need for physical activity.

Other health initiatives have worked to improve conditions for individuals and families in the United States, often with mixed results. During the 19th century, the temperance movement sought to eliminate the consumption of alcohol, which was seen as harmful to many families.
These efforts led to the passage of the Eighteenth Amendment to the U.S. Constitution, which established the prohibition of alcoholic beverages by declaring the production, transportation, and sale of alcohol illegal. When millions of Americans elected to disregard this law, organized crime stepped in to provide alcohol. This created an atmosphere where laws were disregarded and broken, and alcohol was as available as ever.

This experience suggests that when the individuals for whom health benefits are intended do not agree with the policy, these initiatives are unlikely to succeed. This has not stopped such efforts, but it has impeded their success.

Stephen T. Schroth
Knox College

See Also: Adolescent Pregnancy; Alcoholism and Addiction; Almshouses; Assisted Living; Caring for the Elderly; Child Health Insurance; Family and Medical Leave Act; Family Medicine; Medicaid; Medicare; Nursing Homes; Polio; Pure Food and Drug Act of 1906; Welfare Reform.

Further Readings
Hoffman, B. Health Care for Some: Rights and Rationing in the United States Since 1930. Chicago: University of Chicago Press, 2012.
Jabour, A., ed. Major Problems in the History of American Families and Children. Belmont, CA: Wadsworth, 2005.
Mintz, S. and S. Kellogg. Domestic Revolutions: A Social History of American Family Life. New York: Free Press, 1988.
Starr, P. Remedy and Reaction: The Peculiar American Struggle Over Health Care Reform. New Haven, CT: Yale University Press, 2011.
Starr, P. The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry. New York: Basic Books, 1982.
Warner, J. H. and J. A. Tighe, eds. Major Problems in the History of American Medicine and Public Health. Boston: Wadsworth, 2006.

Healthy Marriage Initiative In 2001, the Administration for Children and Families began funding research and demonstration grants as part of a larger Healthy Marriage Initiative. From 2006 to 2011, grants were awarded in more than 100 communities nationwide in an effort to expand the availability of marriage

education training to promote healthy marriages and improve child well-being by strengthening couples and families. The primary goal of the initiative is to help couples who have chosen to marry gain improved access to educational resources and services so that they can voluntarily obtain the skills and knowledge they need to form and sustain a healthy marriage. The Administration for Children and Families defines a “healthy marriage” as one that is mutually enriching and beneficial to the husband and wife, and in which both spouses have a deep respect for each other.

History
The Healthy Marriage Initiative arose from grassroots movements in several states, communities, and faith-based organizations in the early 1990s to reduce divorce, strengthen marriage, and improve child well-being. A large body of research in the 1990s showed the negative effects of divorce and unwed childbearing and the positive effects of healthy marriage on adults and children, providing research centers and nonprofit organizations with evidence used to support legislation to make divorce more difficult to obtain and to strengthen couple relationships.

In 1996, the U.S. Congress declared that marriage is the foundation of a successful society that promotes the interests of children. It also passed a major overhaul of the welfare system that provided block grants to states. The new Temporary Assistance for Needy Families (TANF) program focused not only on the requirements for welfare recipients to work and the time limits on payment assistance, but also on promoting marriage and two-parent families and reducing nonmarital births. A few states began allocating TANF dollars toward strengthening marriage shortly thereafter. From 2001 to 2005, the Bush administration and other congressional leaders worked on a reauthorization bill that amended the 1996 law in ways that encouraged states to work toward promoting and strengthening marriage.
The resulting Deficit Reduction Act of 2005 included the Healthy Marriage and Responsible Fatherhood Act, which provided funding of $150 million each year (from 2006 to 2011) for promoting healthy marriage and fatherhood activities. Demonstration grants were also funded through existing discretionary

programs under the Administration for Children and Families, from 2002 to 2007, to promote healthy marriages. The grant funding awarded to various organizations over time was to be used for competitive research and demonstration projects to test promising approaches to promoting healthy marriages. Grantees were required to include one or more of the following allowable activities:

• Public advertising campaigns on the value of healthy marriages and the skills needed to increase marital stability and the health of the marriage
• Education in high schools on the value of healthy marriages, healthy relationship skills, and budgeting
• Marriage education, marriage skills, and relationship skills programs, which may include parenting skills, financial management, conflict resolution, and job and career advancement, for nonmarried pregnant women and nonmarried expectant fathers
• Premarital education and marriage skills training for engaged couples and for couples or individuals interested in marriage
• Marriage enhancement and marriage skills training programs for married couples
• Divorce reduction programs that teach healthy relationship skills
• Marriage mentoring programs that use married couples as role models and mentors in at-risk communities
• Programs to reduce the disincentives to marriage in means-tested aid programs, if offered in conjunction with any other allowable activity
• Research on the benefits of healthy marriages and healthy marriage education
• Technical assistance to grantees who are implementing any of the allowable activities, to help them succeed

The Administration for Children and Families provided specific goals for the Healthy Marriage Initiative. These included increasing the percentage of married couples who are in healthy marriages and the percentage of children who are raised by two parents in a healthy marriage.



A couple learns communication and conflict resolution skills in a class designed to support healthy marriages. In 2010, $75 million was awarded to fund 60 organizations across the country to provide comprehensive healthy relationship and marriage education services, as well as job and career advancement activities to promote economic stability and improve family well-being.

Additionally, they wanted to increase the percentage of premarital couples and youth who have the skills and knowledge necessary to form and sustain a healthy marriage and make informed decisions about marriage. Other goals included increasing public awareness about the value of healthy marriages and supporting research on healthy marriages and healthy marriage education. A final goal was to increase the percentage of women, men, and children in homes free of domestic violence.

In addition to these goals, the Administration for Children and Families made clear that the Healthy Marriage Initiative was not about coercing anyone to marry or stay in unhealthy relationships, limiting access to divorce, or stigmatizing those who choose divorce. Nor is the initiative about withdrawing support from single parents or diminishing their important work. The agency also does not promote the initiative as a panacea for achieving positive outcomes for child and family well-being

or an immediate solution to lifting all families out of poverty.

In 2010, the Administration for Children and Families merged Healthy Marriage and Responsible Fatherhood activities and broadened the overall mission to strengthen families to improve the lives of children and parents and promote economic stability. A total of $75 million was awarded to fund 60 organizations across the country to provide comprehensive healthy relationship and marriage education services, as well as job and career advancement activities to promote economic stability and improve family well-being. An additional $75 million was used to fund 55 Responsible Fatherhood grantees to strengthen father–child interaction.

Proponents and Opposition
The Healthy Marriage Initiative, like many government initiatives, has both passionate supporters and ardent opponents. Proponents of the Healthy Marriage Initiative saw an opportunity

to increase child well-being and adult happiness while reducing child poverty, welfare dependence, and the number of children growing up without both parents. Moreover, some advocates of the initiative suggested that the relatively small investment in promoting healthy marriages could result in potentially large savings in the future by reducing welfare dependence and the use of other social services. Those opposing the initiative are skeptical about whether government programs should promote marriage and whether it is even possible to do so. Many believe it diverts funds from other, proven programs aimed at reducing poverty.

Resources
The Healthy Marriage Initiative has produced several resources and has prompted countless local and state coalitions and grassroots community efforts aimed at promoting healthy relationships. The National Healthy Marriage Resource Center and the National Resource Center for Healthy Marriage and Families are the federally funded Web sites that provide resources on healthy marriage. Other resources include Webinars, fact sheets, symposia, training, and videos. Over time, the resources have been expanded to cover a variety of audiences, including youth, military, grandparents, stepfamilies, and other family forms. Additional healthy marriage initiatives have been designed for African Americans, Hispanics, Native Americans, and Asian and Pacific Islander populations to encourage culturally competent approaches to serving couples.

David G. Schramm
University of Missouri

See Also: Family Life Education; Fatherhood, Responsible; Parent Education.

Further Readings
National Healthy Marriage Resource Center. “Administration for Children and Families Healthy Marriage Initiative, 2002–2009: An Introductory Guide.” http://www.healthymarriageinfo.org/resource-detail/index.aspx?rid=3298 (Accessed September 2013).
U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/ofa/programs/healthy-marriage (Accessed September 2013).

U.S. Department of Health and Human Services. http://www.healthymarriageinfo.org/index.aspx (Accessed September 2013).

Higher Education Act

The Higher Education Act of 1965 (HEA), also known as Public Law 89-329, has played a major role for families in the United States because of its impact on access to education, social policy, and economic advancement. Originally linked to the Great Society policy goals of President Lyndon B. Johnson, which sought to improve the economic standing of disadvantaged groups, the HEA has dramatically increased access to education for students who previously had been limited economically from participating.

The mandate of the HEA includes taking actions to improve teacher quality, providing financial and other forms of assistance to certain institutions, offering assistance to students (through grants, loans, and work-study programs), helping develop institutions, supporting international education programs, implementing graduate and postsecondary improvement programs, and supporting educational programs for select populations, such as American Indians or deaf individuals. Some of those HEA-mandated actions have included directly funding research and minority institutions such as Historically Black Colleges and Universities (HBCUs), Hispanic Serving Institutions (HSIs), and Tribal Colleges and Universities (TCUs); elevating the functional level of the Office of Indian Education (1972); and establishing a National Teacher Corps (1965), a National Commission on Financing Postsecondary Education (1972), and a National Center for Education Statistics (1974).

Amendments
Since its original passage, the HEA has been amended on several occasions. Sometimes, it has been amended to expand the scope of certain programs or narrow the focus of others; other times, to strengthen specific sections or streamline the overall way the HEA is implemented. For instance, the Education Amendments of 1972 set up the



Education Division in the U.S. Department of Health, Education, and Welfare; established federal matching grants for certain state grants; and banned sex bias in admissions to higher education institutions. The changes also included what is now known as Title IX, which prohibits sex discrimination in federally funded education programs and is best known for its application to athletics.

The Higher Education Amendments of 1992 (Public Law 102-325) put forth a concentrated effort to stop improper recruitment and admissions practices by institutions, a step that led to numerous legal actions against for-profit educational institutions. The Higher Education Amendments of 1998 (Public Law 105-245) created a new program to augment preparedness for success in higher education: Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP). They also added the contentious Question 31 to the Free Application for Federal Student Aid (FAFSA), which asked about convictions for “possession or sale of illegal drugs for an offense that occurred” while an applicant was “receiving federal student aid” (i.e., grants, loans, and/or work-study).

The Higher Education Opportunity Act of 2008 (Public Law 110-315) focused on increasing accountability, addressing costs, and achieving results. For example, it defined the term diploma mill and sought to educate the public about related fraudulent and illegal practices. It also mandated that the U.S. Department of Education increase the amount of information it makes available to students and their families through such strategies as net price calculators and four-year tuition calendars. In addition, it established new programs for Hispanic students and HBCUs.

Litigation
The HEA has been litigated many times. In 1979, in Cannon v. University of Chicago, the U.S. Supreme Court held that Title IX could be used by individuals to bring suit. In 1984, in Grove City College v. Bell, the court held that Title IX also applied to private institutions of higher education. In 2003, two cases, Gratz v.
Bollinger and Grutter v. Bollinger, dealt with race and admissions policies and practices. In 2008, in United States of America ex rel. Hendow v. University of Phoenix, the court addressed irregularities in recruitment and admissions staff compensation. These cases, and many more, have

shaped and reshaped the HEA, its scope, policies, and implementation.

Programs
Throughout the years, different HEA programs have touched the lives of many families across the country by providing much-needed funding for students. Some of the most popular and well known include the Federal Family Education Loan (FFEL), also known as the Federal Insured Student Loan, Guaranteed Student Loan (GSL), and Stafford Loan; the William D. Ford Federal Direct Loan; the Perkins Loan; Federal Work-Study Programs; the Federal Pell Grant; Gaining Early Awareness and Readiness for Undergraduate Programs (GEAR UP); the Patricia Roberts Harris Fellowship; and the National Early Intervention Scholarship and Partnership (NEISP). Taken together, these programs have offered countless possibilities for students to improve their access to education, to be able to afford it, and thus to boost their prospects in life.

Future
The HEA has had its share of controversy. It has been attacked in practice at times, challenged in court many times, and repeatedly altered in structure and application. Yet, the future of the HEA appears bright, and the HEA will likely continue to increase access to educational opportunities for students across the nation.

Raúl Fernández-Calienes
Hagai Gringarten
St. Thomas University

See Also: Brown v. Board of Education; College Education; Education, College/University; Education, Postgrad; Student Loans/College Aid.

Further Readings
Altbach, P., R. Berdahl, and P. Gumport, eds. American Higher Education in the Twenty-First Century: Social, Political, and Economic Challenges. 2nd ed. Baltimore, MD: Johns Hopkins University Press, 2005.
Lucas, C. American Higher Education: A History. 2nd ed. New York: Palgrave Macmillan, 2006.
Olivas, M. The Law and Higher Education: Cases and Materials on Colleges in Court. Durham, NC: Carolina Academic Press, 2006.


Russo, Charles J., ed. Encyclopedia of Law and Higher Education. Thousand Oaks, CA: Sage, 2010.
U.S. Department of Education. “Higher Education.” http://www.ed.gov/highereducation (Accessed June 2013).

Hite Report
The Hite Report: A Nationwide Study of Female Sexuality, often referred to simply as the Hite Report on Female Sexuality, was written by Shere Hite and published in 1976 by Seven Stories Press. The book was reprinted in 1981 and republished in 2004 with an additional introduction by Hite. The initial report was written in response to research on female sexuality published by men, which, according to the report, often resulted in men “telling women how they should feel rather than asking them how they feel.” In contrast, The Hite Report documents women’s experiences of sexuality and orgasm in their own words.

To compile her report, Hite sent questionnaires to thousands of women of varied ages and educational, socioeconomic, and cultural backgrounds. Over 3,000 women responded, providing insight into what brought pleasure and frustration to their sexual lives. The report revealed that women can reach orgasm easily and strongly when given the right stimulation. Moreover, it indicated that many women reach orgasm during private masturbation not through penetration but through clitoral stimulation by hand. These findings challenged the then-current beliefs that women have difficulty reaching orgasm and that they should strive to reach it during intercourse with vaginal penetration. As Hite noted, recommendations of the time suggested that orgasm through clitoral stimulation was “immature and lesser” compared to a vaginal orgasm achieved through penetration during sexual intercourse.

Although Hite recognized the valuable roles of the prominent sex researchers Alfred Kinsey, William Masters, and Virginia Johnson, she also criticized their work. With respect to Masters and Johnson, Hite highlighted their failure to incorporate cultural attitudes toward sexuality and their resulting conclusion that failure to achieve orgasm through thrusting during intercourse reflected female sexual dysfunction. She stated that the popularity of the Hite Report was primarily due to its message that there was nothing wrong with women. Hite proposed that research can be relevant to real-world sexual behavior only if it is conducted with an understanding of how individuals and cultures construct definitions of sexuality, and therefore of sexual behaviors and experiences.

Shere Hite was born Shirley Diana Gregory in 1942. She received her bachelor’s and master’s degrees from the University of Florida, and a Ph.D. from Nihon University in Japan. She is internationally recognized as a cultural historian for her work on gender relations and psychosexual behavior. She served as director of the National Organization for Women’s feminist sexuality project from 1972 to 1978. After publishing her initial report, she continued with a series of Hite reports on male sexuality (1981), on women and love (1987), and on the family (1994). She also taught as a visiting professor at Nihon University in Japan and at New York University, and has guest lectured around the world at universities including Harvard, Columbia, Cambridge, and Oxford.

Laura M. Frey
Jason D. Hans
University of Kentucky

See Also: Kinsey, Alfred (Kinsey Institute); Masters and Johnson; Sex Information and Education Council of the United States.

Further Readings
Hite, S. “Female Orgasm Today: The Hite Report’s Research Then and Now.” On the Issues Magazine (July 2008). http://www.ontheissuesmagazine.com/july08/july2008_6.php (Accessed April 2014).
Hite, S. The Hite Report: A Nationwide Study of Female Sexuality. New York: Seven Stories Press, 1976.
Hite, S. The Shere Hite Reader: New and Selected Writings on Sex, Globalism, and Private Life. New York: Seven Stories Press, 2006.
Jayson, S. “Decades Later, Hite Reports Back.” USA Today (May 15, 2006). http://usatoday30.usatoday.com/news/health/2006-05-15-hite-report_x.htm (Accessed April 2014).
Jong, E. “If Men Read It, Sex Will Improve.” New York Times (October 3, 1976). http://www.nytimes.com/books/97/03/23/reviews/bright-hite.html (Accessed April 2014).

HIV/AIDS
Perhaps no other public health topic in the last decades of the 20th century provoked as much conversation and controversy as the discovery of the human immunodeficiency virus (HIV) and acquired immunodeficiency syndrome (AIDS). Although HIV infection often leads to the onset of AIDS, the two are not synonymous: HIV is a virus, whereas AIDS is the syndrome it can cause. The emergence of HIV/AIDS in public discourse within the United States continues to provoke conversations related to family life, including issues of discrimination, family planning, and normative sexual practices. A great deal of misinformation continues to surround the means of transmission of HIV/AIDS, both within the United States and abroad. Because people who have contracted HIV/AIDS are stigmatized, correcting this false information is a helpful step toward a constructive public conversation about the disease. The remainder of this entry traces the origins of the terminology used to publicly discuss HIV/AIDS, provides a brief overview of its history, and suggests basic implications of HIV/AIDS for family relationships and daily life in the 21st century.

Origins of the Labels HIV and AIDS
The way that topics are discussed at a family dinner table is often reflective of the way those topics are reported by the news media, which in turn receive their nomenclature from press releases and research reports. In light of this fact, the actual origins of HIV and AIDS are perhaps less important than the way in which they have been discussed in the public realm. The history of the various names for HIV/AIDS provides insight into how labeling a disease helps frame the way the public at large thinks about an illness. The evolution of HIV/AIDS provided much for families to discuss throughout the mid-1980s and beyond. For example, in 1982, one of the first names for what later came to be called HIV/AIDS was gay-related immune deficiency (GRID). At the time, it


was widely believed that GRID was prevalent only in communities of gay men. This myth was partly propelled by a lack of information on the part of many conservative social and religious voices in the United States. Growing awareness of the reach of GRID led to a reevaluation of whether the label was appropriate, and by the end of 1982, researchers and physicians began using AIDS as the new designation. By 1984, it was understood that AIDS was caused by a virus, which was subsequently labeled HIV in 1986.

The names of several people have become well known in the public history of HIV/AIDS. Two examples of household names related to HIV/AIDS are Ryan White, whose story helped correct myths about AIDS transmission, and Magic Johnson, who helped introduce HIV into the public conversation. Ryan White was a teenage student in Indiana when he was diagnosed with AIDS, contracted from contaminated blood products used to treat his hemophilia. He was expelled from middle school in 1985 because of the misinformation that surrounded his diagnosis. His legal battle helped spur public conversation about the myth that AIDS was contracted only by gay men. In 1989, a made-for-TV movie titled The Ryan White Story helped make White’s name part of the public lexicon for AIDS. White’s case brought to light the discrimination that one could experience because of a lack of correct information, and it provided a human face that raised awareness that HIV/AIDS could also affect people who were not sexually active.

Unlike Ryan White, who became well known because of his AIDS diagnosis, Magic Johnson’s fame preceded his announcement that he was infected with HIV. When Magic Johnson announced his retirement from the National Basketball Association (NBA) on November 7, 1991, he was one of the most popular athletes in the United States, if not the world.
Johnson’s decision to play in the 1992 NBA All-Star Game, despite protests from other players who feared becoming infected, helped provoke further conversation that corrected various myths about how HIV was transmitted. As a heterosexual man whose masculinity was never publicly questioned, Johnson raised awareness that heterosexual intercourse could also transmit HIV.


The attention paid to Ryan White and the subsequent prime-time television movie, along with the notoriety of Magic Johnson, made the topic part of the conversation of many families. In addition to these public figures were other prominent cases of HIV/AIDS, such as tennis player Arthur Ashe, movie star Rock Hudson, academic Michel Foucault, and even a character in the Broadway musical Rent.

A growing awareness of the origins of the disease also contributed to a better-informed public conversation. By the 1990s, it was well documented that the earliest known human case of HIV was in the Belgian Congo (now the Democratic Republic of the Congo) in 1959. Other evidence suggests that the United States may have had a case as early as 1966, although most cases in the United States can be traced to one individual who brought the infection to the country, via Haiti, in 1969. It was not until 1981 that AIDS was first identified in a clinical setting as a distinct and previously unrecognized condition.

Implications for Family Life
A diagnosis of HIV or AIDS creates many issues for a family, including concerns related to how the infected person contracted the disease (and subsequent disclosure to an adult partner if the disease came from a relationship outside a monogamous arrangement), potential transmission to other immediate family members, and disclosure to the family’s circle of friends and relatives. There are multiple ways to contract HIV, including blood transfusions and sexual contact. HIV can also be spread through sharing intravenous needles and through various aspects of childbirth and nursing. Complicating the issue of disclosure to sexual partners are the legal requirements and obligations placed on people who have HIV/AIDS and engage in unprotected sexual activity.
For example, in the state of Alabama, any person who knowingly engages in activities likely to transmit a sexually transmitted disease (STD) is guilty of a misdemeanor. In the state of Maryland, the law is more blunt, simply stating that a person with HIV may not knowingly transfer or attempt to transfer HIV to another person. It is not just sexual relationships that are affected; the social life of the infected person (and family) is constrained as well. For example, in the state of

Arkansas, a person who is HIV positive is legally required to inform his or her physician or dentist of his or her HIV status.

While many myths about HIV/AIDS have been corrected over the past 30 years, much incorrect information still persists. Many of these myths can be debunked by family members who live in close quarters with an individual infected with HIV. For example, although none of the following spreads the disease, many people still believe that those who do any of them are at risk: touching a toilet seat or doorknob after an HIV-infected person; hugging, kissing, or shaking hands with someone who is HIV-infected; or sharing eating utensils with someone who is HIV-infected. These are all common practices for family members living under one roof, and none is linked to spreading the disease. The greatest risk comes from blood and fluid exchange during unprotected sexual contact.

An HIV-infected human T-cell. Misconceptions about the transmission of HIV still persist. The greatest risk comes from blood and fluid exchange during unprotected sexual contact.



Perhaps the most difficult aspect of dealing with HIV/AIDS in the life of a family is disclosing the information to the immediate family and the wider circle of friends and relatives. While no legal obligation requires a person to share this information, it can become a question of ethical obligation, depending on the frequency and depth of contact with these individuals. On one hand, sharing information about the disease can bring a greater level of care and support from those closest to the infected person. By sharing the information, a person can also raise awareness and help correct some of the false beliefs that are likely part of the circle’s life experience. On the other hand, because false information still abounds, strong stigmas remain for those who have acquired HIV/AIDS. Some states provide legal stipulations for sharing this information with certain health care providers.

The life of a family is greatly affected when a new case of HIV/AIDS enters the household. As awareness of correct information about the disease continues to grow, more and more families are willing to adopt children who are HIV-positive. Regardless of how the disease enters a family, realistic precautions should be exercised in light of the various ways in which it can be transmitted. Any decision about disclosing the disease should be made with consideration given to the entire family, because the potential stigma could affect each member in different ways.

Disclosing a Child’s HIV/AIDS Status: Ethical Considerations
Disclosing a positive HIV test to family and friends can be extremely complicated. While a great deal of research explores the disclosure of HIV/AIDS status to the infected child, there is much less academic conversation about disclosing information about the child’s health to those outside the immediate home environment.
Although many who contracted HIV/AIDS were infected through no action of their own, many others engaged in some level of risky behavior well documented as potentially spreading the disease. The issue becomes much more complicated when a child is HIV-positive and a family gives


consideration to disclosing this information to those beyond the immediate household. Despite the fact that HIV/AIDS has been part of the public lexicon for more than 30 years, stigmas remain for those who have contracted the virus. A practical concern for parents of a child (or children) with HIV or AIDS is whether to disclose the infection to neighbors, school districts, churches, or other community organizations. Unlike the legal obligations placed on adults and summarized above, no legal requirements obligate a parent or caregiver to reveal the HIV/AIDS status of a child to outside individuals or organizations. The question is therefore not one of legality but of ethics: What obligations are placed on those with knowledge of the illness? Within the context of this entry, this question is introduced to highlight the difficulty of navigating family issues related to HIV/AIDS; it will remain unanswered, since each family will arrive at a different conclusion. In lieu of an answer, a few additional considerations are provided to texture the conversation.

The ease with which a parent or guardian can disclose the HIV/AIDS status of a child depends partly on the preparations that outside organizations already have in place. For example, if a school has an established protocol for proper hygiene in treating any blood-related injury, then there is minimal risk that a child who is HIV-positive could spread the disease in the classroom. Unfortunately, even where such standards are in place, many are not followed on a regular basis, putting any person who does not wear gloves or take proper precautions at greater risk of contracting any disease carried by the injured child.
While school settings are subject to a high level of governmental supervision, places such as a local church are much less regulated, which often makes it much more difficult for parents to withhold the health status of a child. Again, this returns to the question of who holds the ethical obligation: Should schools and churches create environments in which proper hygiene is followed, thus allowing parents to choose whether to disclose the health status of a child? Or should these environments force parents to disclose that status because of a lack of proper preparation for


those who have HIV/AIDS or other communicable diseases? As above, this question will remain unanswered; it is offered simply to contribute to an ongoing conversation about HIV/AIDS for families who are sorting through the options available to them.

The discovery, subsequent labeling, and ethical considerations of HIV/AIDS have greatly affected family life over the past decades. As new treatments are released and as the stories of public figures continue to be explored, the nature of the conversation will continue to evolve.

Brent C. Sleasman
Gannon University

See Also: Gay and Lesbian Marriage Laws; Health of American Families; Home Health Care.

Further Readings
Crawford, Dorothy H. Virus Hunt: The Search for the Origin of HIV/AIDS. Oxford: Oxford University Press, 2013.
Lesbian and Gay Rights Project and AIDS Project. “State Criminal Statutes on HIV Transmission—2008.” http://www.aclu.org/files/images/asset_upload_file292_35655.pdf (Accessed July 2013).
Pepin, Jacques. The Origins of AIDS. Cambridge: Cambridge University Press, 2011.
U.S. Department of Health and Human Services. “About the Ryan White HIV/AIDS Program.” http://hab.hrsa.gov/abouthab/aboutprogram.html (Accessed July 2013).

Hochschild, Arlie
Academic discussions of the American family must include one of its most significant authors: Arlie Hochschild, professor emerita of sociology at the University of California, Berkeley. Her main topics are the sociology of the family; the sociology of emotion; gendered divisions of labor within families; and economic, or more broadly capitalist, influences on emotional management. Her academic work began by questioning the cultural dimension of feelings. In The Managed Heart (1983), she argues that there are “feeling rules” in

every society. One expects, for example, to feel grief or to laugh at certain events, and is astonished at anybody who does not fit this cultural feeling pattern. An innovation of The Managed Heart was the term emotional labor: in the labor market, especially the growing service sector, workers are increasingly required to display the right feeling as part of their jobs. Her study focused on female flight attendants, who manage the needs and feelings of passengers as well as their own feelings onboard.

In her book The Second Shift (1989), Hochschild describes the stress of combining work life with what she calls the “second shift”: family work after the workday, such as household chores. She shows that women do the lion’s share of this second shift and therefore more easily suffer burnout symptoms. Furthermore, Hochschild argues that even when the work is shared equally by men and women, the time available for the second shift is shrinking in favor of the labor market.

Another concept that Hochschild brings into the discussion is the economy of gratitude. She observed not only how work at home is divided between men and women, but also that gratitude is not equally distributed between them. If a man, for example, takes care of the children but leaves the housework to the working mom, the woman is likely to be grateful that “at least he does something.”

The essay “Love and Gold,” published in the collection Global Woman (2003), combines and develops these two aspects: emotional labor and managing the second shift. Hochschild not only combines these aspects but analyzes them within a global capitalist system, showing that women of the poor Global South are paid to do the care and household work of women of the rich Global North.
The argument goes even further: it is not only work that is paid for by the rich and done by the poor; emotion is transferred as well. The care a migrant worker gives to an employer’s child in a paid context may then be lacking for her own child. The question is whether love is a commodity, like gold, that can be transferred from one person to another in a capitalist context. Hochschild also shows that there is a Western ideology of how much time and love should be invested in a child, yet at the same time there is no time, and therefore time and love have to be imported within a migration context.




In The Time Bind (1997), Hochschild describes the irreconcilability of work and family in a place she calls Spotted Deer, where she interviewed workers at a Fortune 500 company she calls Amerco. This book is again about the challenge of balancing work and family. Amerco was seen as a particularly family-friendly company, offering job sharing and flexible working hours. The question Hochschild tried to answer is why so few Amerco employees used these family-friendly offers. The answers are varied: most employees want to become what she calls the “ideal employee,” a model highly valued in America that more and more people seek to follow. Hochschild also found that the people she interviewed were overstrained by the demands of family, whereas the workplace seemed a place of safety, positive emotion, and feedback.

The Outsourced Self: Intimate Life in Market Times (2012) takes the topic a step further: Has the outsourcing of private activities now reached the emotional self? The family, and private life in general, has been the place where one expects to experience unselfish love and emotion, the one place that does not operate according to economic and capitalist laws. Hochschild argues that this has now changed. Whether one names one’s desires by consulting a “wantologist,” hires a wedding planner who can plan the wedding better than the couple could, or engages a “nameologist,” who is better than the parents at finding a name for the child, it always comes down to professionals who seem to do a better job of managing one’s private life. The reasons are again varied: sometimes professionals really are better at the job; sometimes people simply lack the time. Hochschild also shows the costs of outsourcing, which she calls the “depersonalization of our bonds with others.” She does not judge this process morally, but shows that capitalism has reached people’s private lives.
Arlie Hochschild’s work shows that the American family is not just a private matter but is shaped by social and economic pressures. She is thus at once a family sociologist, a critic of the capitalist system, and a founder of the sociology of emotions. Methodologically, she is an ethnographer: whether inside the company Amerco, among the household workers of “Love and Gold,” or amid the outsourcing of private life in The Outsourced Self, Hochschild enters the lives of her respondents and


draws out the native’s point of view, in the best ethnographic tradition.

Katharina Miko
Vienna University of Economics and Business

See Also: Breadwinner-Homemaker Families; Suburban Families; Work and Family.

Further Readings
Hochschild, Arlie. The Managed Heart: The Commercialization of Human Feeling. Berkeley: University of California Press, 1983.
Hochschild, Arlie. The Outsourced Self: Intimate Life in Market Times. New York: Metropolitan Books, 2012.
Hochschild, Arlie. The Second Shift: Working Parents and the Revolution at Home. New York: Viking, 1989.
Hochschild, Arlie. The Time Bind: When Work Becomes Home and Home Becomes Work. New York: Metropolitan/Holt, 1997.
Hochschild, Arlie and Barbara Ehrenreich, eds. Global Woman: Nannies, Maids, and Sex Workers in the New Economy. New York: Metropolitan Press, 2003.

Holt, Luther
Luther Emmett Holt (1855–1924) was an American pediatrician and public health reformer. He published 170 papers and three books, most notably The Care and Feeding of Children in 1894. He was an authority on childcare as pediatrics became a formal medical field, and was among the founding members of the American Pediatric Society in 1888. His efforts later turned to public health reform and education, and he was involved in the Rockefeller Institute, the Child Health Organization, and the Junior Red Cross. His work led to the creation of milk commissions.

Holt was born on March 4, 1855, in Webster, New York, to Horace Holt and Sabrah Amelia Curtice. He received his primary education at Webster Academy and Marion Academy, then graduated seventh in his class from the University of Rochester in 1875. Upon graduation, he taught at the Riverside Institute at Wellsville, New York, for a year, at which time he decided to pursue medicine. He enrolled at the Medical College of Buffalo in 1876.


He soon began a student internship at the Hospital for the Relief of the Ruptured and Crippled in New York City under Virgil Pendleton Gibney. He transferred to the College of Physicians and Surgeons in New York, and graduated in 1880 in the top 10 of his class. Holt chose an internship at Bellevue Hospital, where he worked in William Henry Welch’s bacteriology laboratory. Holt opened a general practice with Charles M. Cauldwell in 1881. Shortly after opening this practice, he met Linda F. Mairs, whom he married in 1886 and with whom he had five children.

During the 1880s and 1890s, Holt gained recognition and worked at several hospitals, including the Northwestern Dispensary, the Infants’ Hospital on Randall Island, the Foundling Hospital, the Nursery Child’s Hospital, the Hospital for the Ruptured and Crippled, the New York Orthopedic Hospital, and the Lying-In Hospital. In particular, while Holt worked as a physician at the county branch of the New York Infant Asylum at Mount Vernon (1885–92) and at the Babies’ Hospital (1888–1924), he documented and researched the diseases of the children he encountered, leading to a series of publications on the subject.

The Babies’ Hospital was the first hospital in the United States that cared solely for infants under 3 years of age. Within a year of its opening, Holt was asked to be its physician in chief. Although the hospital was financially unstable when he accepted the position, Holt secured its future by organizing a prestigious board of directors and focusing on clinical research and the training of nurses. The Babies’ Hospital became the leading pediatric hospital of its time.

Publications and Teaching
In 1893, Holt wrote a short booklet on the care of children for the training of nurses.
He expanded this booklet the following year, and titled it The Care and Feeding of Children: A Catechism for the Use of Mothers and Children’s Nurses, which became the most widely used childcare manual for educated mothers until Benjamin Spock’s Baby and Child Care in 1946. It went through 75 printings, 12 revisions, and several translations. Most historians agree that it was the first book to bring the science of childcare to the public. In The Care and Feeding of Children, Holt detailed proper hygiene, expected growth and developmental milestones,

and nutritional advice for infancy through early childhood, among various other concerns. In the late 1800s, women were increasingly using cow’s milk to feed their children, but it was often contaminated. Holt advocated breastfeeding in his book, but also detailed the process of safely making formula from cow’s milk.

Holt continued to pursue the public-education aims of this first book, and from the 1900s until his death he was instrumental in public health education and reform. He sat on the founding committee of the Rockefeller Institute, which opened in 1906 as a medical research institution, and served on its board of directors as secretary. In 1909, he was involved in organizing the American Association for the Study and Prevention of Infant Mortality (later renamed the American Child Hygiene Association). Holt was influential in the Rockefeller Institute’s studies of the contamination of the city’s milk supply, which led to regulations for quality improvement. He also assisted in securing the city’s first certified milk dairy and infant-formula laboratory.

In 1918, Holt accepted the position of chair of the newly founded Child Health Organization (CHO), established to educate the public in child health, especially by marketing directly to children. The CHO also published articles on child health in newspapers and magazines. Building on this international recognition, in 1919 Holt proposed an international child welfare program at the Red Cross’s international conference in Cannes. This program, called the Junior Red Cross, focused on health education for schoolchildren and fund-raising for needy children. In 1923, Holt facilitated negotiations to merge the CHO with the American Child Hygiene Association to form the American Child Health Association, of which he was vice president.

Holt was also a prolific teacher. In 1891, he became a professor of diseases of infants and children at the New York Hospital and Polyclinic.
Holt authored a second book, The Diseases of Infancy and Childhood (1897), which was used as a pediatric textbook for several decades. In 1901, he became Carpentier Professor of the Diseases of Children at the College of Physicians and Surgeons, Columbia University, and held that post for 20 years. He was also a visiting Lane Lecturer at Stanford University. In 1922, he expanded his lecture series into his third book, Food, Health, and Growth: A Discussion of the




Nutrition of Children. Holt died of a heart attack just after completing a lecture series in Peking, China, on January 14, 1924.

Rachel T. Beldner
University of Wisconsin–Madison
Janice Elizabeth Jones
Cardinal Stritch University

See Also: Child-Rearing Experts; Child-Rearing Manuals; Families and Health; Family Medicine; Health of American Families; Spock, Benjamin.

Further Readings
Corner, George W. History of the Rockefeller Institute, 1901–1953: Origins and Growth. New York: Rockefeller Institute Press, 1964.
Holt, L. Emmett Jr. and R. L. Duffus. L. Emmett Holt: Pioneer of a Children’s Century. New York: D. Appleton-Century, 1940.
Jones, Kathleen W. “Sentiment and Science: The Late Nineteenth Century Pediatrician as Mother’s Advisor.” Journal of Social History, v.17 (1983).
Park, Edwards A. and Howard H. Mason. “Luther Emmett Holt.” In Pediatric Profiles, Borden S. Veeder, ed. New York: Mosby, 1957.

Home Economics
Home economics (now usually called family and consumer science, or human ecology) originally was the study of homemaking, the relationships that exist within the home, and the relationship of the home to the community. Initially, it was limited to the problems of food (nutrition and cooking), clothing and textiles (sewing and care), household equipment, housekeeping (cleaning and equipment), housing, hygiene, and household economics. It later came to include aspects of family relations, parental education, consumer education, and institutional management. Many programs today also include gerontology and child development.

Foundation
In the United States, the teaching of cooking and sewing in public schools was tied to the manual training of boys, beginning in the 1880s. State


institutions began introducing home economics courses at the college level in the 1870s. The move to include these courses in higher education was tied to the development of land-grant colleges under the Morrill Act, which devoted federal lands to support the development of colleges of agriculture and the mechanical arts. The development of home economics closely parallels the general development of education for women. This period began the formal training of women and acknowledged that the obligations of the home extended beyond its walls.

Home economics formed as a discipline with the application of scientific techniques. The original development was under the leadership of Ellen Swallow Richards and Catharine Beecher, who are considered the key pioneers of domestic science and home economics, building on pioneering work that had taken place in Europe. Wilbur Atwater, Edward Youmans, and Isabel Bevier were also involved in the field’s formation, with the goal of developing a profession that understood the obligations of, and opportunities for, women. The training of teachers of home economics began after 1895; by 1907, eight colleges offered courses for training teachers of domestic science.

Ellen Richards, the first woman graduate of the Massachusetts Institute of Technology (1873), brought her engineering background to the development of home economics. A series of Lake Placid conferences organized by Richards and others marked the beginnings of the field. The first conference was held in 1899, at which the term home economics was coined. Ten conferences were held, devoted to the development of the field. At the 1902 conference, a definition of the discipline was adopted: home economics was the study of the laws, conditions, principles, and ideals concerned with a person’s immediate physical environment, his or her nature as a social being, and the relationship between the two.
Following the final Lake Placid conference, the American Home Economics Association (AHEA, now called the American Association of Family and Consumer Sciences) was founded in 1909. The association agreed to publish a journal, whose first issue came out in February 1909; it contained the original constitution of the AHEA, along with research articles on dietitians and farming. Other publications were also a significant part of the development of home economics. Such
publications as Catharine Beecher's A Treatise on Domestic Economy (1841) laid the original foundation. Ellen Richards's influence on the field came through both literature and action: she published The Chemistry of Cooking and Cleaning: A Manual for Housekeepers in 1881. Other early literature included work on food, domestic economy and household science, food and feeding, and the kitchen garden, as well as sewing courses for training teachers published in the 1880s and 1890s.

The Smith-Lever Act of 1914 was the first attempt to educate adults for better home living. The act provided for cooperative home economics extension work, which strengthened the tie between the farm and the college and solidified the place of home economics education. The Bureau of Home Economics became an independent bureau under the Department of Agriculture in 1923 and was in operation until 1953. The work of this agency included efforts toward the establishment of the school lunch program, as well as government publications to support the homemaker. This help supported the changes following World War I, when demands called for more assistance and work in public health, community feeding, school lunch supervision, consumer protection, and advice on family management.

A Change of Direction
The 1930 report from the AHEA, along with expanding needs after World War I and the difficulties families faced because of the Depression, changed the direction of home economics education. The training kept its focus on physiological, psychological, economic, social, and political perspectives, and increased coursework in sociology, economics, and philosophy while decreasing courses in education, science, and home economics. The change was a move to address the issues that families, especially women, faced when they were away from the home and still had to manage family responsibilities. The emphasis also changed in higher education in home economics.
There was a movement to divide subjects into narrower specialty areas and to conduct more focused research. The Bureau of Home Economics expanded in the 1940s to include divisions of food and nutrition, textiles and clothing, and housing and household equipment. The national emphasis on nutrition led to the bureau's consolidation with the Division of Protein and Nutrition Research of the Bureau of Agricultural Chemistry and Engineering to become the Bureau of Human Nutrition and Home Economics in 1944. During World War II, these divisions furnished food facts for the nation's wartime nutrition programs, for the Lend-Lease Administration, and for the U.S. Army, Navy, Red Cross, and purchasing agents of U.S. allies. The Division of Textiles and Clothing was inaugurated to promote efficient use of cotton, wool, and other agricultural products for clothing and home furnishings; it explored the standardization of garment sizes, resulting in recommendations for basic sizes and patterns. The Division of Housing and Household Equipment focused on labor-saving devices, fuel economy, and improvement of kitchen equipment. The Division of Family Economics focused on such problems as family food plans for groups at different income levels and family budgets. The Information Division translated the bureau's scientific findings into news styled for press and radio, edited scientific reports, and prepared popular publications for group teaching to benefit the homemaker. The bureau's functions are today part of the U.S. Department of Agriculture, which continues research and education in these areas, especially dietary guidelines, nutrition, and household economics.

The American Home Economics Association worked before and after World War II toward strengthening family life. It worked to expand offerings and produced skill courses for the five largest areas of the professions that are part of home economics education: child development; family relations; textiles, clothing, and fashion merchandising; general home economics; and food, nutrition, and dietetics. In 1948, Katharine Alderman summarized the home economics philosophy as follows:

• Improvement of instruction
• Betterment of the status of consumers
• Fostering international understanding
• Importance of research

The ultimate goal of home economics was for families to achieve the highest quality of life and happiness in their homes and communities.

Continued Growth
Home economics continued to grow during the 1950s, though it was often considered a discipline to train women for the roles of housewife and mother. Growth continued in high school preparation and college courses. The research focus, however, remained on the improvement of the health and well-being of families. Building on the basic disciplines, home economics promoted research in areas related to nutrition, child development, consumer economics, and home management to increase the discipline's impact, and it remained active in promoting legislation to support the family and consumer.

In the 1960s, efforts were expended toward developing accreditation of undergraduate programs, which was achieved in 1967. By 1963, several conceptions of home economics had evolved:

• A single field with a broad general perspective and a number of subspecialties
• A unified field with subspecialties embedded in the home and family
• A collection of disciplines with no unifying theme or anchor

Analysis, dialogue, and generally at least partial agreement on the body of knowledge came out of the various meetings held between 1961 and 1993. A Lake Placid conference met again in 1973 to revitalize values and to develop future directions, broadening home and family life into an ecosystem conceptualization emphasizing the interdependence of people and a rapidly changing environment. In the 1980s, the organization focused on certification of professionals, which began in 1986.
Home economics organizations came together in 1993 in a professional summit of five related organizations: the AHEA, the home economics division of the American Vocational Association, the Association of Administrators of Home Economics, the National Association of Extension Home Economists, and the National Council of Administrators of Home Economics. These organizations agreed to change the name of the field from home economics to family and consumer sciences; the change was officially approved the following year. The new name sought to better position the profession within the academic community and to better reflect the actual majors within it. The AHEA changed its title to the American Association of Family and Consumer Sciences (AAFCS).

The focus in 1997 was national standards for middle school and high school: national standards for education were developed and adopted, focusing on content, process, and competencies, in order to better standardize coverage of the core areas. Home economics continues to provide a base of study in the five core areas.

Recent Trends
Home economics has faced many changes; with the restructuring of the field, many areas of study have moved to other disciplines. Household equipment and interior design courses have moved to architectural programs; similarly, programs in nutrition and dietetics have moved to health science programs. Many higher education programs have been eliminated due to these changes and the changing opportunities available to women. With reductions in funding for public education, high schools have also reduced many of their offerings in traditional home economics courses.

Today's family and consumer sciences professionals continue to practice in many venues, including secondary teaching, college and university teaching and research, and outreach through cooperative extension programs. Their practice also includes human service areas working with all ages and types of families. Nutritionists, consumer specialists, and housing and textile specialists continue to provide services in their areas to improve the quality of family life and that of the community.
The AAFCS continues to support the founding concepts of home economics in improving family life. Its vision statement today is that individuals, families, and communities achieve an optimal quality of life, assisted by confident, caring professionals whose expertise is continually updated through the AAFCS. The association states its core values and beliefs through the following AAFCS code of ethics:

• Believe in family as a fundamental unit of society
• Embrace diversity and the values of all people
• Support lifelong learning and diverse scholarship
• Exemplify integrity and ethical behavior
• Seek new ideas and initiatives and embrace change
• Promote an integrated and holistic approach, aligned with the FCS body of knowledge, to support professionals who work with individuals, families, and communities

Home economics in the 21st century remains focused on the life-skills needs of the individual and family. With the changing family, courses aim to provide skills that promote successful careers and strong families: financial management, nutrition education, and consumer education that help students manage resources, meet financial needs, and address and mediate problems in family, community, and work environments.

Janice Kay Purk
Mansfield University

See Also: American Home Economics Association; Gender Roles; Homemaker.

Further Readings
Baugher, Shirley, Carol Anderson, Kinsey Green, Jan Shane, Laura Jolley, Joyce Miles, and Sharon Nickols. "Body of Knowledge for Family and Consumer Sciences." Alexandria, VA: American Association of Family and Consumer Sciences, 2013.
Craig, Hazel. The History of Home Economics. New York: Practical Home Economics, 1949.
Donham, S. Agnes. The Eastern Massachusetts Home Economics Association: The First Forty-Three Years. Boston: Eastern Massachusetts Home Economics Association, 1954.
Stage, Sarah and Virginia Vincenti. Rethinking Home Economics: Women and the History of a Profession. Ithaca, NY: Cornell University Press, 1997.

Home Health Care

Most older Americans face chronic illness or disability in the final years of life. As the need for medical services grows, so do the challenges that health care services face in meeting the demand for in-home care. While the medical community and government find ways to address this issue, care for the aging population is becoming more expensive, and the available time, resources, and support of spousal and family caregivers are dwindling. Home health care, also called "home care" or "in-home care," has been one way to address the medical, psychosocial, and daily living needs of individuals needing care outside of a hospital or medical setting.

Home health care is a formal, regulated program of care delivered by a variety of health care professionals in the patient's home. It includes a range of medical, therapeutic, and other services delivered at patients' homes. These services help promote, maintain, and maximize a patient's level of independence while managing the effects of disability and illness. This care helps seniors live independently for as long as possible, within the limits of their medical condition; it covers a wide range of services and can delay the need for long-term nursing care. Additionally, home health care can enable elderly individuals with physical and medical limitations to remain in their normal living environment with family and continue their lives with a limited amount of disruption or restriction.

Background
The home health care movement has its beginnings in the late 19th and early 20th centuries, when trained nurses helped the sick and poor in their homes. Medical professionals provided services in the home while informing the family about the status of the patient. In 1909, Metropolitan Life Insurance Company was the first to write policies for home health care initiatives.
This group was credited with the first reimbursement schedule for home medical services. As the 20th century progressed, home care went through some major setbacks and became marginal because fewer patients wanted to be cared for at home. There was a precipitous drop after World War II, as house calls by physicians fell from 40 percent of all patient-physician encounters in 1930 to 10 percent by 1950.



As rising hospital costs continued to mount for Americans, campaigns by nursing organizations and Congress brought the home health care movement back to the forefront. In 1965, Medicare legislation was enacted to provide benefits to home care patients. Medicare's aim was to decrease the escalating costs of medical services and hospitalizations, eventually through the establishment of diagnosis-related groups. The program was created under the Social Security Act to provide health insurance to individuals 65 and older, regardless of income or previous medical history. At the same time, Medicaid coverage was also implemented, which included certain provisions for services provided in the home environment. The aim of Medicaid for homebound patients was to serve low-income, underserved individuals with complex medical issues.

In 1982, the National Association for Home Care was founded to support hospice and home care providers and their patients, with the aim of serving as the American voice on home care before Congress. Today, the association is the country's largest trade association representing the interests and concerns of home care agencies, hospices, and medical equipment services. Members have primarily been administrators, aides, social workers, and nurses who provide patient care in the home to recovering, disabled, or terminally ill patients.

More recently, the Balanced Budget Act (BBA) of 1997 mandated a major overhaul in Medicare payments for home health care. The BBA was initially enacted to constrain the explosive growth in home health costs. Under this reimbursement model, Medicare home health care was shifted from a cost-based method to a bundled prospective payment method. The act was successful at reining in the use of the home health benefit and shifting it toward skilled services. As a result, home health utilization rates dropped substantially and hospice utilization rates increased.
The BBA was successful at containing the costs of homebound patients and increased savings in overall costs for patients with certain diagnoses.

Statistics
The number of patients needing both in-home medical services and help from other family members has grown steadily. Approximately 12 million Americans require some form of home health care services. Today, more than
33,000 home health care providers exist. The average length of services received through this program is 315 days. Nearly two-thirds (64 percent) of home health care recipients are women. In 2009, more than one in three households had an informal caregiver (an estimated 48.9 million caregivers) taking care of a patient older than 18. Furthermore, 63 percent of caregivers are married and/or living with a partner, and two-thirds of these caregivers are women.

Conditions treated by home health care vary considerably. Home health care patients had an average of 4.2 diagnoses per patient at the time of the initial intake interview. The most common conditions requiring home health care included diabetes, heart failure, chronic ulcer of the skin, osteoarthritis, and hypertension. Endocrine, nutritional, metabolic, and immunity disorders; diseases of the musculoskeletal system and connective tissue; and symptoms and ill-defined conditions were also principal diagnoses reported. Eighty-four percent of home health care patients had at least one activity of daily living (ADL) limitation, and only 14.8 percent had no ADL limitations.

Home health care is a costly expenditure. Approximately $72 billion was spent on all home health care services in 2009. Medicare spending was 41 percent of total home health care and hospice expenditures. Other public funding sources included Medicaid, the Older Americans Act, Title XX Social Services Block Grants, the Veterans Administration, and the Civilian Health and Medical Program of the Uniformed Services. Even though Medicaid spending for home-based care is expected to slow in the coming years, it will still account for 11.4 percent of expenditures per year.

Home Health Care Professionals
The home health care team (HHCT) was first started in 1977 as an outreach program through the University of Rochester Medical Center's Ambulatory Care program to provide health care to homebound chronically ill and disabled patients.
All patients received visits and evaluations by the physician at intake and during any hospitalizations. The health care team encouraged informal care by family and friends by providing them with the necessary physical and psychological support. Since its establishment, the HHCT has consisted of a physician specializing in geriatric medicine, a nurse
practitioner, and a medical social worker. The team provided primary care to homebound elderly and chronically ill patients and was on call 24 hours per day, 7 days per week.

Home health care team members vary based on the services the patient requires while homebound. A physician is required to monitor the patient's medical needs, whether or not that physician visits the patient's home. Common professionals who attend the homebound patient include a skilled nurse, an occupational therapist, a speech therapist, or a specialized medical professional. Home health care staff members are well equipped to handle a variety of health conditions, along with giving patient education about the illness and a plan for recovery. A growing trend in home health care treatment is for hospitals and medical centers to provide collaborative teams of various medical and mental health professionals; the aim is to reduce patients' health care costs while they receive multiple services from their care team at one time.

The Current Landscape
Home health care has become the fastest-growing expense of the current Medicare system. With the rapid increase in chronic conditions in the aging population, combined with escalating hospital costs, this care has become an affordable option for many Americans. To receive services under Medicare, one must be under the care of a physician while homebound. Additionally, there must be a justifiable need for skilled nursing, occupational therapy, or speech therapy for the patient. The care must be part-time (28 hours or less per week) and must be recertified at least every 60 days, with the exception of special medical cases. Family involvement is critical for the coordination of services between the care team and the patient.
Because growing demands are placed on informal family caregivers to sacrifice time and money for their duties, home health care can help alleviate some of the burden on these individuals. Nevertheless, most health care agencies expect family members and others involved in the patient's care to be familiar with the patient's daily living activities, medications, symptoms, and medical needs. This is especially true for patients with more serious conditions, where family members are often responsible for administering medication and attending to hygiene issues. Specific tasks that family members or caregivers are asked to perform include personal care (e.g., bathing, washing one's hair, and getting dressed), homemaking (cleaning and laundry), cooking and delivering meals, and operating medical equipment to meet other health needs of the patient.

Home Health Care Services
A home health care medical device is any product or equipment used in the home environment by people who have a chronic illness or disability. These patients, along with any caregivers or attending providers of care, may need basic education, training, or additional resources to maintain these devices and learn how to use them safely. The Food and Drug Administration's Center for Devices and Radiological Health regulates the safety and approval of these devices. Examples include ventilators, nebulizers (to help with breathing), wheelchairs, blood glucose meters, and infusion pumps.

New advances have been introduced to help meet efficiency standards in the care of homebound patients. Telecommunication devices transmit a patient's vital signs over the phone, and remote monitoring devices help home care agencies keep a constant watch on patients. Remote monitoring allows patients to maintain their independence, prevents potential complications, and minimizes the costs of other medical equipment or devices. Patients and their families feel comfortable knowing that the patient will be continually monitored in case a crisis or emergency arises. Physiological data are collected, such as blood pressure readings or blood glucose levels in diabetes patients.

A revolutionary form of residential care, the health smart home, has been experimented with to address the increasing need for elderly care and to help caregivers meet increasing demands. Health smart homes represent one of the most promising new developments in the area of telemedicine.
The aim is to improve patient living conditions and avoid the large costs of hospitalization for those with disabilities or long-term chronic conditions. There are several reasons why this trend is growing: earlier detection and treatment of diseases, reduction of high costs, remote monitoring of the status of the patient living in the home, and collection of medical data. The use of this system addresses sociomedical, economic, technical, and ethical issues in the everyday life of the patient.

Visiting nurses from the Navy-Marine Corps Relief Society provide at-home health care to service members and their families. The home health care movement began in the late 19th and early 20th centuries with trained nurses helping the sick and poor.

Future Trends
The costs of caring for a sick patient will always remain a concern in the health care system. As patients are discharged from hospitals with more conditions, rising expenses, and less attention to care, greater responsibility will be placed on the family to provide a higher level of care. The U.S. population of those 65 and older is expected to increase to more than 70 million by 2030, and by 2050, 27 million people will need some type of long-term care. Because of these rising numbers in the aging
population, many medical professionals say that home care for certain chronic health issues must expand in the years to come.

With the Patient Protection and Affordable Care Act recently implemented in the current health care system, changes in coverage and requirements for patients needing home care will soon follow. Substantial cuts in Medicare payments and reimbursements for home care will be enforced that may indirectly affect payment options for patients and their families. The act will also make changes to the services provided under hospice care. The Affordable Care Act is expected to reduce Medicare spending on certain home health care services by $4.2 billion from 2010 to 2014, with an estimated reduction of more than $39.5 billion through 2019. Under this new legislation, home health agencies will need to become more efficient to save costs and provide quality care for Medicare beneficiaries.

For home health care to thrive in future decades, adjustments in many areas (especially financial and medical) will be needed. If home care is to avoid operating in "silos" like many sectors of the health care system, it will need to work in partnership with other care models to provide services for chronically ill individuals. Agencies that have both a strong infrastructure and a strong clinical workforce will be well positioned to address the challenges and services needed for home health care to thrive in the future. Embracing the home care model may be a key part of the success and well-being of caring for the chronically ill for years to come.

James M. Zubatsky
University of Minnesota

See Also: Caregiver Burden; Caring for the Elderly; Later-Life Families; Medicaid; Medicare; Nursing Homes.

Further Readings
Ellenbecker, C. H., L. Samia, M. J. Cushman, and K. Alster. "Patient Safety and Quality in Home Health Care." In Patient Safety and Quality: An Evidence-Based Handbook for Nurses, R. Hughes, ed. Rockville, MD: AHRQ, 2008.
Murtaugh, C. M., N. McCall, S. Moore, and A. Meadow. "Trends in Medicare Home Health Care Use: 1997–2001." Health Affairs, v.22/5 (2003).


U.S. Department of Health and Human Services. “Home Health Care and Discharged Hospice Care Patients: United States, 2000 and 2007.” National Health Statistics Report, v.38 (2011).

Home Mortgage Deduction

The home mortgage deduction is a federal income tax exemption that allows families to deduct what they pay in mortgage interest from their taxable income. The deduction dates to the Revenue Act of 1913, which tried to provide a more equitable tax plan for everyone, with the intention of providing relief to those heavily burdened by indebtedness. The act's initial tax break allowed the interest paid on credit debt to be excluded from the calculation of taxable income. Interest from a home mortgage was included in this exclusion as an unintended benefit, because few Americans carried mortgages on their homes when the act was first implemented.

Home mortgages became popular after Congress created the Federal Housing Administration (FHA) in 1934. Up until that time, home mortgages were difficult to secure, and even more difficult to pay off. With the FHA in place, many more Americans were able to buy homes using FHA mortgages. These new mortgages allowed consumers to purchase a home with a down payment of less than 20 percent of the selling price while taking 10, 20, and now 30 to 40 years to pay. In exchange for the loan used to purchase the house, lenders assessed interest that had to be paid along with a monthly repayment of the initial loan. The accrued interest from these loans benefited from the home mortgage deduction, which allowed consumers to deduct what they paid in interest each year from the income used to determine federal taxes.

Home ownership is considered an essential part of the American Dream: everyone should have the right to own a home if they are willing to work for it. The ability to purchase a home using a mortgage is considered the most influential factor in making homeownership
attainable for middle- and lower-income families. The home mortgage deduction provides additional support to working-class families by lowering the amount of federal income tax they pay. The deduction is calculated from the amount of money paid in interest on the loan: essentially, the money paid in mortgage interest is exempt from the calculation of taxes to be paid. The exact deduction is based on the family's total income in relation to the amount of money paid in interest. Home mortgage deductions are allowable for first and second homes, as well as for home equity loans taken for the improvement of the dwelling.

The benefit to American families of deducting home mortgage interest can be seen in the following example: a $150,000 mortgage at 6 percent interest costs the homeowner about $7,000 in interest the first year. This taxpayer would likely be eligible for savings of about 25 percent of what has been paid in mortgage interest, or about $1,750. Because this is calculated against income taxes owed, it benefits homeowners by reducing the amount of taxes paid to the federal government.

The actual benefit of the home mortgage deduction has been questioned by tax specialists for decades. While real estate lobbyists supporting home ownership tout the benefits, economists and some politicians provide evidence that contradicts this claim. Ideally, the home mortgage tax deduction encourages consumers to purchase homes, knowing that the money paid in interest on the loan is deducted from their taxable income. The National Association of Realtors claims that American families purchase homes knowing that they will receive this tax break. Americans cite the tax exemption as a factor in decisions regarding the purchase of a home; however, there is a concern that Americans "over invest" and purchase more expensive homes, believing that they will benefit from a larger tax break.
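The arithmetic behind this example can be sketched in Python. This is an illustrative calculation, not part of the original entry: the functions below assume a standard 30-year fully amortized loan (which yields somewhat more first-year interest than the example's rounded $7,000 figure), and the 25 percent marginal tax rate is the example's assumption.

```python
def first_year_interest(principal, annual_rate, years=30):
    """Interest paid in the first 12 months of a fully amortized loan."""
    r = annual_rate / 12                       # monthly interest rate
    n = years * 12                             # total number of monthly payments
    payment = principal * r / (1 - (1 + r) ** -n)  # fixed monthly payment
    interest, balance = 0.0, principal
    for _ in range(12):
        monthly_interest = balance * r         # interest portion of this payment
        interest += monthly_interest
        balance -= payment - monthly_interest  # remainder reduces the principal
    return interest

def tax_savings(interest_paid, marginal_rate):
    """A deduction saves roughly interest_paid * marginal_rate in tax."""
    return interest_paid * marginal_rate

interest = first_year_interest(150_000, 0.06)  # the example's principal and rate
print(round(interest))                         # about 8,950 of interest in year one
print(tax_savings(7_000, 0.25))                # the example's rounded figures: 1750.0
```

Note that the savings scale with the taxpayer's marginal rate, which is why, as discussed below, higher-bracket households capture most of the benefit.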
The reality is that very few families actually benefit from this deduction under current tax law, and it has been argued that only those in the upper-income tax brackets actually benefit. The economic risk to most families lies in buying expensive or larger homes, believing that they are investing for future capital gains or will benefit from an income tax break. As real estate markets fluctuate, many homes increase in value over time. However, economists
consistently demonstrate that, with few exceptions, home values do not increase enough to be profitable investments relative to the cost of owning, maintaining, and repaying mortgage expenses. There are exceptions, such as housing market bubbles (economic periods when housing values increase by 2 to 5 percent in short periods of time), when a long-term investment in purchasing a house with a mortgage can be more profitable.

Bruce Covey
Central Michigan University

See Also: Budgeting; Credit Cards; New Deal; Standard of Living.

Further Readings
Internal Revenue Service. "Publication 936, Home Mortgage Interest Deduction." http://www.irs.gov/uac/Publication-936 (Accessed September 2013).
Kennedy, Mark. How to Buy a House the Right Way: The Complete Home Buying Guide for First-Time Home Buyers and Seasoned Pros (Smart Living). Wyoming, DE: Back to School Press, 2012.
Ventry, Dennis J., Jr. "The Accidental Deduction: A History and Critique of the Tax Subsidy for Mortgage Interest." Duke Law Scholarship Repository Journal, v.73/1 (2010).

Homelessness According to 2012 estimates, on a national level, 633,782 people, or 20 per 10,000 people, experience homelessness on any given night. Under federal law, a homeless individual is a person who lacks a stable and adequate nighttime residence. This definition includes people who live in places that are not suitable for long-term human habitation, such as temporary shelters or transitional housing, and who are losing their residence within 14 days and/ or reside in motels or live with relatives or friends. Families with children or unaccompanied youth who lack stable housing, that is, they have not had a lease agreement for housing in the last 60 days, have experienced two or more moves in the last 60

days, and will continue to lack stable housing due to disability or unemployment, are also considered homeless. Finally, people who are escaping or attempting to escape life-threatening and dangerous situations related to violence such as domestic violence, sexual assault, and stalking are also counted as homeless. There is considerable heterogeneity in the nature of the homeless experience, and substantial variability in the homeless population. Whereas historically homelessness was a temporary experience, today homelessness may be either short- or longterm. Thus, assessment of the current homeless population must take into account that homelessness may be temporarily or chronically experienced. Although historically, homelessness was thought of as an urban problem, today it occurs in urban and rural areas. Hence, stereotypic ideas of the urban homeless “bum” do not accurately describe the complex pathways and life experiences of homeless people in contemporary U.S. society. The homeless population may include individuals, families, youth, war veterans, the physically or mentally disabled, and victims of domestic and sexual violence. Homelessness: Then and Now Historically, homelessness was a temporary experience related to a sudden catastrophic event or a shift in the economy. For example, from the Great Depression emerged a “hobo” population comprised mostly of men who wandered the country in search of work. When the Great Depression ended and the economy improved, these men found employment and homes. Homelessness in the past also existed in skid row–type “bum” areas, for instance in neighborhoods such as the Bowery in New York, West Madison in Chicago, and the Tenderloin in San Francisco. The poor, transient, sick, and those who suffered from substance abuse sought refuge in these confined urban enclaves. 
In contrast, modern-day homelessness is not a temporary experience limited to a hobo or bum population; nor is it an out-of-sight issue that solely exists in undesirable neighborhoods. Instead, it exists in urban and rural communities across the United States, and affects the lives of families, individuals who suffer from physical and mental disabilities, veterans, and many others. For some people, homelessness is a temporary experience brought on by disruption in employment, change in marital/


relationship status, medical bills, or other traumatic and stressful life events. For some individuals, being without a home is a chronic condition that is a near-permanent way of life. Measuring Homelessness There are two principal methods to assess the size of the homeless population: point-in-time counts and period prevalence counts. Point-in-time counts involve counting the number of individuals and families who are homeless in one night; period prevalence counts are gathered by surveying the number of people who are homeless during a given period of time. Each year in late January, communities across the United States conduct point-in-time counts of individuals and families experiencing homelessness. In one night, communities collect information on sheltered and unsheltered homeless individuals. The annual point-in-time counts represent the only method that captures information on both populations, albeit with some limitations in relation to the size of the unsheltered homeless population. The advantage of assessing homelessness via the period prevalence count method is that it allows researchers to identify a broader range of individuals who experience homelessness, such as abused women and those who have experienced layoffs from work. Thus, period prevalence counts provide an in-depth look at the social problem of homelessness by yielding information on the nature and length of the homeless experience, while point-in-time counts only provide a snapshot of the homeless population on any one night. Chronic Homelessness Chronic homelessness is a long-term experience that is usually coupled with either a physical or mental disability. The Department of Housing and Urban Development defines a chronically homeless individual as a disabled person who has experienced homelessness for more than one year, or has experienced four episodes of homelessness over a three-year period.
A family with one adult member who meets this classification would be considered chronically homeless. Chronically homeless individuals often reside in shelters and are highly reliant on emergency public services. Over the last decade, significant progress has been made to address the chronically homeless population. Since

2007, chronic homelessness has declined by 19.3 percent (or 23,939 people). According to the report 2012 Point-in-Time Estimates of Homelessness, 15.8 percent (99,894) of the homeless population was chronically homeless. Individuals who experience chronic homelessness need access to permanent supportive housing. Housing, along with rehabilitation and therapeutic services, can provide a foundation for successful recovery. Further, permanent supportive housing is a cost-effective intervention that reduces the chronically homeless individual's reliance on public services, which also reduces federal, state, and local costs. Permanent supportive housing can be used as a preventative measure to keep people who are exiting prison or psychiatric facilities from becoming chronically homeless. Homeless Subpopulations Certain subpopulations are vulnerable to homelessness: families, veterans, youth, those living in rural areas, and survivors of domestic violence. Families. Homeless families are similar to other families living in poverty. Many are headed by young single mothers with minimal education, and they have increased rates of domestic violence and mental illness. Some families living in poverty become homeless because of a financial crisis, such as a job loss, medical emergency, or death in the family, that leaves them unable to maintain housing. According to the January 2012 point-in-time counts, 239,043 people in families were homeless. Typically, homelessness among families is a short-term experience, and families are able to reacquire housing with temporary public services, such as rent assistance, housing placement services, and job assistance. One of the most important resources for homeless families is rapid rehousing. When families are able to quickly reacquire permanent housing, they can return to relative stability.
Additionally, families who can obtain preventative resources such as cash assistance, housing subsidies, and other services may never have to experience homelessness. Veterans. The homeless veteran population comprises those who have served in wars ranging from World War II to Afghanistan and Iraq. Upon
returning home from war, veterans can experience difficulties with adjusting to civilian life. Some veterans return home with war-related disabilities: physical disabilities, post-traumatic stress disorder, or mental anguish. Due to these major physical and psychosocial changes, disabled veterans may exhibit dangerous behaviors (e.g., addiction, abuse, and violence). These behaviors, along with readjustment difficulties, can lead to homelessness. The 2012 study Point-in-Time Estimates of Homelessness identified 62,619 homeless veterans in a single night, which translates to 13 percent of homeless adults. In 2010, Secretary of Veterans Affairs Eric Shinseki pledged to reduce the number of veterans experiencing homelessness. Since 2011, there has been a 7.2 percent (4,876) decline in homelessness among veterans, and a 17.2 percent (12,990) decline from 2009 estimates. Solutions to homelessness among veterans are similar to those for other homeless populations: preventative measures and rapid rehousing are critical resources for homeless veterans and those on the verge of becoming homeless. Veterans who suffer from severe war-related physical and mental disabilities require permanent housing and support services. Youth. To date, reliable data on the number of unaccompanied homeless youth in a single night are not available; however, the 2013 point-in-time counts will begin to present data on this homeless subpopulation. According to the National Alliance to End Homelessness, approximately 380,000 unaccompanied youth (ages 18 and younger) experience an episode of homelessness in one year. Currently, only 50,000 homeless youth per year receive services from homeless youth programs. Based on these numbers, additional resources are needed to effectively respond to this population. Lesbian, gay, bisexual, transgender, and questioning (LGBTQ) youth make up 20 percent of homeless youth.
Many LGBTQ youth experience family conflict and are shunned due to their sexual orientation or gender identity. In comparison to heterosexual homeless youth, LGBTQ homeless youth experience higher rates of physical and sexual assault, mental health problems, and unsafe sexual practices. Additionally, LGBTQ homeless youth are two times as likely


to attempt suicide (62 percent) as heterosexual homeless youth (29 percent). The majority of LGBTQ youth fail to receive support services or housing resources. LGBTQ youth report discriminatory behavior from peers and staff in youth shelters and drop-in centers. Further, only a small number of nonprofit organizations across the United States offer services focused on LGBTQ youth, and most of these organizations are located on the west and east coasts of the country. Support services that encourage family reunification can reduce the homeless youth population. Rural citizens. Many people consider homelessness to be solely an urban problem, but homelessness is also prevalent in rural communities. The same societal issues that lead to urban homelessness, a lack of affordable housing and low incomes, are also contributing factors in rural homelessness. Based on data from the National Alliance to End Homelessness's "Geography of Homelessness" report, there are 14 homeless people for every 10,000 people in rural areas, compared with 29 homeless people for every 10,000 people in urban areas. Rural areas tend to have higher rates of poverty than urban areas, which contributes to individuals becoming and remaining homeless. Unlike in urban areas, homeless individuals in rural areas lack access to many public services. Rural homeless programs tend to have less federal funding, as federal funds are allocated more toward urban areas. As a result, rural programs lack the appropriate infrastructure to provide quick and comprehensive services to the rural homeless population. Geographic and transportation challenges also limit homeless individuals' access to services. Due to these challenges, one of the most crucial solutions to ending rural homelessness is prevention. Domestic violence survivors. About 12 percent of the homeless population is made up of domestic violence survivors.
After fleeing an abusive relationship, survivors are often isolated from family and friends and lack financial assets. The absence of a support system and the lack of access to financial resources put domestic violence survivors at risk of becoming homeless. Once homeless, many women experience difficulties trying to secure new housing because they lack a steady income,


employment history, credit history, and rental history. Additionally, after enduring ongoing abuse, many women suffer from mental health and substance abuse disorders. Domestic violence survivors have two sets of housing needs: immediate housing to remain safe from their abuser, and long-term housing to establish a secure and stable home. Thus, initiatives toward creating affordable housing options are crucial to this population, so that the family or woman will not return to the abuser and can transition out of the shelter system into permanent housing. Solutions The overall homeless population comprises many subpopulations, each with its own defining characteristics; however, when solutions to end homelessness are proposed, there are commonalities across subpopulations. Rapid rehousing, support services, and prevention are key strategies in ending homelessness. Helping households quickly reacquire permanent housing limits the length of homelessness; aids individuals and families in avoiding the additional stressors of living in homeless shelters; and frees up additional space in shelters. For those individuals who suffer from physical and mental disabilities, support services are crucial resources. Providing specialized and effective support services, coupled with rapid rehousing, to disabled homeless individuals increases their chances of regaining stability in their lives and decreases the likelihood of a repeat episode of homelessness. Preventative strategies across subpopulations can help communities effectively reduce episodes of homelessness; they can help households keep their current housing and reduce the number of people accessing the homeless assistance and shelter system. A decline in the number of individuals experiencing homelessness saves the homeless assistance system considerable money and decreases the personal and public costs of homelessness.
Monica Miller-Smith
Annamaria Csizmadia
University of Connecticut

See Also: Living Wage; Mental Disorders; National Affordable Housing Act; Poverty and Poor Families; Poverty Line.

Further Readings
National Alliance to End Homelessness. http://www.endhomelessness.org (Accessed July 2013).
National Alliance to End Homelessness. The State of Homelessness in America 2013. Washington, DC: National Alliance to End Homelessness, 2013.
Rothenberg, Paula. "It Didn't Happen and Besides, They Deserved It." In What's the Problem? A Brief Guide to Critical Thinking, E. Gilg and D. Kasowitz, eds. New York: Worth, 2011.
U.S. Department of Housing and Urban Development, Office of Community Planning and Development. "The 2012 Point-in-Time Estimates of Homelessness: Volume 1 of the 2012 Homeless Assessment Report." Cambridge, MA: Abt Associates, 2012.

Homemaker

Homemaker is a gender-neutral term that encompasses both housewives and househusbands, terms that refer to women and men who are responsible for the management of their households and have no other employment. Homemaking includes childcare, meals, and housecleaning, and may include the management of the household finances. Traditionally, when women (sometimes more than one generation of women in a given household) were housewives by default, in all but the wealthiest families homemaking would also include the making of clothing for the family. Tasks such as tending to horses or other beasts of burden and repairing vehicles and equipment fell to the men of the house. The traditional expectation that most women would be homemakers, and, later, the modified assumption that even women with jobs would still be principally responsible for the management of their household, affected the formulation of public education curricula. In many school districts, home economics was long a required course for girls. In the 21st century, in most places these classes have either been discontinued or made optional. Home economics classes were first
offered in the 19th century, sometimes under the name "domestic science." An early formulator of the approach was Catharine Beecher, a schoolteacher and advocate of women's education who was also the sister of abolitionist author Harriet Beecher Stowe. At 23, Beecher opened a private girls' school, preparing textbooks in arithmetic and theology. She incorporated dietary advice into her teaching, and in 1841, she published her first treatise on "domestic economy." While home economics is today associated with the forced relegation of women into a homemaking role, for Beecher, the formalization of the science was key to underscoring the importance of a woman's role in society. She believed that home economics should be taught not because women were trivial, but to prove that they were not. Like some of the antisuffragists of her time, Beecher opposed voting rights for women out of the belief that politics were a corrupt pursuit that would compromise women's capacity to influence the world in their role as mothers. Ellen Richards was an early feminist, the first female student at the Massachusetts Institute of Technology, from which she graduated in 1873, and later its first female instructor. She became the first female chemist in the United States and was involved in social activism, helping to institute the first public school lunch program, which launched in Boston in 1894. Richards did not oppose suffrage as Beecher did, but she also saw great value in women as homemakers, and believed that by being expert homemakers, and by turning homemaking into a modern science, women could do the most good in the world. She wrote several books on food, sanitation, home economics, and ecology, and was asked to be the first president of the American Home Economics Association (now the American Association of Family and Consumer Sciences) in 1908.
These early home economics classes occupied a somewhat moderate political position: they sought to improve the skills of women and acknowledge the importance of their contributions, without being forward-thinking enough to foresee a time, just around the corner, when men would be expected to contribute to homemaking. Much of their support came from progressive social reformers who wanted to improve the diet and habits of Americans, to provide instruction for the daughters of the many immigrants arriving in the United States


who may have been removed from the traditional environment in which domestic skills would have been taught to them. It has also been argued that the home economics movement was a middle-class phenomenon responding to the rise of the working class (both immigrant and native-born), imposing on that working class a normative model of single-income nuclear families that they may not have been able to afford. Home economics is one of the earliest interdisciplinary fields of study incorporated into public education, and it includes not just cooking, but also nutrition and health sciences, human development and family studies, interior design, family resource management, textiles and sewing, and family and consumer science, which incorporates an understanding of family economics and household finance, early childhood education, and theories of parenting. Home economics was not limited to primary education, and many land-grant colleges (state universities funded with grants provided by federal legislation) followed the Mount Holyoke Plan, which mandated a minimum two hours of instruction in food preparation every week for female students. In Why Study Home Economics, a 1950s educational film produced by Centron, the lead character explains, "If I'm going to be a homemaker for the rest of my life, I want to know what I'm doing." The objection raised against home economics classes was that their message, either implicit (in enrolling girls in the classes by default or mandate) or explicit (in classroom discussions), was that a girl's destiny was as a homemaker, even more than 30 years after women had been granted political equality. That home economics classes and texts were heteronormative goes without saying; they also assumed that all home economics students would one day marry, have children, and be responsible for raising those children and caring for them outside school hours.
The idea that a woman’s place is in the home and nowhere else is what Betty Friedan called the “feminine mystique” in her 1963 book by the same name. It was inspired by her experiences surveying her Smith College classmates in preparation for their 15th anniversary reunion in 1957. In the course of that survey, she discovered how many of the girls she had gone to school with had unhappily settled down as housewives. She embarked on


a wider survey, interviewing suburban housewives about how they felt about their domestic work and lives, and whether they were fulfilled, much less felt divinely assigned to their roles. Friedan called this widespread unhappiness and ennui, in a brilliant turn of phrase, "the problem that has no name." Friedan's book was instrumental in second-wave feminism, letting millions of women know that they were not alone in their unhappiness, nor were they at fault for it. Though the world has not changed as much as some may have predicted at the end of the 1960s, today few women are stigmatized for choosing to work, though they may feel the related pressure to be "the woman who has it all," that is, a woman with a healthy career and a bustling family. Men who become househusbands are not stigmatized as strongly as women who chose not to be housewives once were, but neither has the practice been fully mainstreamed, and the supposed helplessness and ridiculousness of a man attempting to raise children and care for a household continues to be used as the premise of sitcoms and films.

Bill Kte'pi
Independent Scholar

See Also: Cult of Domesticity; Domestic Ideology; Domestic Masculinity; Home Economics.

Further Readings
Craig, Hazel. The History of Home Economics. New York: Practical Home Economics, 1949.
Elias, Megan J. Stir It Up: Home Economics in American Culture. Philadelphia: University of Pennsylvania Press, 2010.
Stage, Sarah and Virginia B. Vincenti, eds. Rethinking Home Economics: Women and the History of a Profession. Ithaca, NY: Cornell University Press, 1997.

Homestead Act

The Homestead Act, signed by President Abraham Lincoln on May 20, 1862, was an effort to affordably distribute public lands and encourage western migration and settlement. The act stipulated

that current or future American citizens who were heads of household or over age 21, and who had not taken up arms against the United States, could claim 160 acres of land in exchange for an $18 filing fee. Those who lived on and made improvements to the land received a deed of title after five years. The homesteading population was primarily comprised of families, including experienced farmers, immigrants, and former slaves. Between 1862 and 1986, nearly 4 million Americans participated in the largest public land distribution program in U.S. history, and in the process settled more than 270 million acres in 30 states. Throughout the life of the program, 1.6 million deeds were granted to homesteaders, who settled a total of 10 percent of all land in the United States. The families who acquired land through the Homestead Act helped shape social and political life on the Great Plains and the west, and were likewise affected by their homesteading experiences. Homesteading: Labor, Marriage, and Childhood The majority of homesteaders were experienced farmers who lived in close proximity to available land; the move west was expensive, and the establishment of a functional farm required expertise and costly tools and equipment. The Homestead Act also drew European immigrants, primarily from Germany and Scandinavia, who were attracted by the prospect of cheap land and the potential of economic prosperity. Building a successful homestead was a difficult endeavor, however, one that entailed constant work and repeated setbacks. Homesteading was thus a family enterprise for most who acquired land under the act, and the labor of all family members, regardless of sex or age, was a necessity. The family life of homesteaders revolved around improving their land, especially in the immediate aftermath of its purchase. 
Labor roles for homesteading men and women were modeled after traditional 19th-century white middle-class gender roles: women were responsible for the majority of domestic work and were the primary caretakers of children, and men engaged in physical labor outside of the home. However, the wide range of work required to establish a successful homestead necessitated the blurring of traditionally male and female roles. Women thus engaged in physical
work normally considered the purview of men, and in the process altered and expanded the definition of womanhood in a manner that deviated from 19th-century notions of female domesticity. Family members labored together to clear their land and plant crops, ensure the health of their animals, and build homes and other structures. Men also participated in domestic life and assisted with childcare and cleaning, especially when their wives were ill or working outside the home to earn additional income. Homesteading families’ reliance on female labor elevated women’s status and affected women’s roles within the family. Within homesteading marriages, husbands and wives managed their property together, and women often allocated resources and directed family expenditures alongside their husbands. Some women entered their marriages with property of their own, and retained any resulting profits and spent or invested them as they wished.


Wives were also responsible for maintaining the homestead, sometimes for months at a time, in their husbands’ absences. Women’s success in managing the family’s property during such times further emphasized their individual abilities and importance within the family. The interdependent nature of homesteading marriages suggests that they were less patriarchal than other 19th-century marriages, which may have also affected homesteading children’s perspectives on the nature of marriage and family life. Children were a vital source of labor on homesteads, where they herded animals, harvested crops, and assisted with plowing, hunting, cooking, and childcare. They also hunted and foraged for food, collected water, and supplemented family income by working for neighboring farms, often from an early age. Children’s prominent role in meeting the needs of their families instilled in them a sense of independence and self-confidence,

A homesteader works on his Tennessee property in 1933. Between 1862 and 1986, nearly 4 million Americans participated in the largest public land distribution program in U.S. history, and in the process settled more than 270 million acres in 30 states. This represented a total of 10 percent of all land in the country.


which sometimes led to clashes with their parents, many of whom were raised in more traditional environments. Unlike their parents, homesteading children experienced exploration and autonomy as part of daily life, and these experiences highlighted the differences between their generation and that of their parents, and likely affected their personalities and expectations about family life. Despite these differences, children remained close to their parents and took pride in their contributions to the family's success, while also enjoying the autonomy and self-sufficiency that they experienced. Like other 19th-century parents, homesteading mothers and fathers were concerned with their children's moral instruction, which would normally be seen to by their mothers and reinforced in local schools. However, because mothers were active participants in building their homesteads, and schools were a relative rarity along the frontier, parents worried that their children were not receiving an appropriate moral education. These concerns were compounded by working parents' inability to closely supervise the daily activities of their children, especially those working away from their families, who faced exposure to frontier violence, drinking, gambling, and profanity. The necessity of child labor outweighed parents' desires regarding their children's educations, however, simply because it was vital to the success of the homestead. While improving the land was the primary focus of life on a homestead, parents also sought to establish safe and loving environments for their children, and spent as much time as possible caring for and nurturing them. African American Families For thousands of formerly enslaved African Americans, the Homestead Act was a means through which they could purchase land and settle their families in communities free of the racism and violence of the south.
After the end of the Civil War, African Americans throughout the south worked diligently to reunite with family members from whom they were separated during slavery, and construct legally recognized nuclear families based on white middle-class notions of domesticity. African Americans further sought economic security through the purchase of land and educational opportunities for their communities. The unchecked racial violence and oppression that emerged after the withdrawal

of federal troops from the south in 1877, as well as prohibitions against African American land ownership, threatened the safety and stability of African American families and prompted many to consider emigration. It was in this context that the large-scale movement of African Americans to Kansas, referred to as the Great Exodus, began in 1879 and eventually led to the movement of thousands of African American "Exodusters" into the state. African American homesteaders purchased land throughout the Great Plains and the west, and established communities with churches, hotels, schools, and newspapers. Homesteading was viewed as a way to protect and stabilize black families and ensure the safety and material success of future generations of African Americans. Native American Displacement While the Homestead Act provided land and opportunities for white Americans and European immigrants to establish new settlements along the American frontier, it also contributed to the ongoing displacement of Native American families. As settlements expanded, Native Americans, the vast majority of whom were not U.S. citizens and were therefore ineligible for the Homestead Act, were removed from their lands and forced onto reservations and then individual allotments to make room for white settlers. These lands were often small and difficult to farm, which resulted in high rates of disease, hunger, and death. American Indians were also expected to adopt sedentary farming, abandon their religious beliefs and languages, and alter their cultural norms regarding marriage, divorce, and sexuality to conform to those of white Americans. To hasten this process, Native American children were forcibly removed from their families and placed in boarding schools or under foster care, where they were forced to Anglicize their names, speak only English, and practice Christianity.
Though American Indian families did not passively accept these circumstances and clandestinely maintained their traditions and identities, the practice of removing native populations in favor of white settlers had a devastating impact on American Indian families.

Samantha Williams
Bret Carroll
California State University, Stanislaus

See Also: African American Families; Family Farms; Frontier Families; Gender Roles; Marital Division of Labor; Native American Families; Rural Families.

Further Readings
Myres, Sandra L. Westering Women and the Frontier Experience, 1800–1915. Albuquerque: University of New Mexico Press, 1982.
Painter, Nell Irvin. Exodusters: Black Migration to Kansas After Reconstruction. New York: Norton, 1986.
West, Elliot. "Children on the Plains Frontier." In Small Worlds: Children and Adolescents in America, 1850–1950, Elliot West and Paula Petrik, eds. Lawrence: University Press of Kansas, 1992.

Hooking Up

Hook-up is a slang term referring to a fleeting and uncommitted sexual encounter. Behaviors may range from kissing to intercourse. Hook-ups typically occur between people who are at least acquaintances. Since the 1960s, this "no strings attached" sexual practice has become increasingly normative for relationship initiation, especially among white heterosexual college students. In North America, 70 to 85 percent of college students have hooked up; the majority have about one hook-up per semester. Those least likely to hook up include the profoundly religious, racial minorities, and those in exclusive and committed romantic relationships. Among adolescent samples, about 25 percent have ever hooked up, but no race differences in prevalence have been observed. The rising prevalence and acceptance of hooking up as a cultural norm may partly derive from the ambiguity surrounding hook-up behaviors. Some studies estimate that about 20 percent of college seniors have never had vaginal intercourse, and only about one-third of hook-ups include intercourse. Nevertheless, the term's vague reference to sexual behavior may facilitate common beliefs among young adults that their peers are more comfortable with hooking up, that they hook up more often, and that those hook-ups involve more advanced sexual behavior. These misconceptions create social pressure to participate in uncommitted sexual behavior, and ultimately may have contributed to it becoming normative.

Historical Emergence of Hooking Up

Several historical changes have facilitated the emergence of hook-up culture. In the 1920s, the automobile permitted increased independence among adolescents and young adults while reducing parentally supervised courtship. The introduction of the birth control pill in 1960, along with feminism and the sexual revolution of the 1960s and 1970s, also promoted liberal attitudes toward premarital sexuality, contraceptives, and women’s sexual behavior. Moreover, these decades saw an increase in coeducation and college enrollment, especially for women. Collectively, these factors facilitated the rise of college party culture, a prominent venue for hooking up. Many young adults also began delaying marriage, sometimes avoiding more traditional dating scripts. These historical trends have made uncommitted sexual encounters less socially and biologically costly, especially for women. For some adolescents and young adults, hook-ups have become a desirable alternative to traditional dating.

Substance Use and Risky Sexual Behavior

Among young adults, hooking up is considered an acceptable avenue for exploring sexuality while limiting emotional and physical risks. For example, kissing, petting, and oral sex are the most common hook-up practices, partly because they appear to present little hazard to physical health. Unfortunately, recent studies have estimated that about half of American college students engaging in uncommitted sex were unconcerned about health risks and reported not using a condom. These findings raise concerns about the implications of hook-up culture for unsafe sex practices, unwanted pregnancy, and the transmission of sexually transmitted diseases. Moreover, many hook-ups are unintentional, resulting from the use of alcohol or other substances as a “social lubricant” at bars or college parties, which increases the likelihood of risky sexual behavior. Substance use and abuse have also led to sexual exploitation among college students.
Gender and Emotional Reactions

Research on physical and emotional satisfaction after hooking up does not provide a clear picture of whether people are generally happy with hooking up. Whereas most people report positive feelings during a typical hook-up, and one-quarter report feeling happy or good about the experience
afterward, about one-third report regret or disappointment. Women are consistently more likely than men to have negative reactions after a hook-up. College-student studies have shown that between 70 and 80 percent of people who reported ever hooking up experienced at least some feelings of regret, with women more than twice as likely as men to experience regret. Men tend to have more permissive attitudes toward casual sex, prefer having more sexual partners, and hook up more often than women. However, women also participate in hook-up culture at high rates. Although less often than men, about two-thirds of women look for a short-term mate, and about one-quarter report feeling good after a typical hook-up. Many first-year college women appear to enjoy hooking up but begin to look for more committed, exclusive relationships as they progress through school. However, men typically maintain control over whether a hook-up develops into a more serious romantic relationship.

“No Strings Attached”

Despite widespread participation in hook-ups, the majority of young adults, particularly women, prefer traditional dating to a series of uncommitted sexual relationships. In fact, 69 percent of heterosexual college seniors reported being in a romantic relationship of at least six months’ duration. Most women and about half of men hope that hook-ups will transform into romantic relationships. Although popularly portrayed as “no strings attached” encounters, most hook-ups are motivated by at least some level of romantic interest. Nevertheless, most do not evolve into romantic relationships, even among partners who repeatedly hook up.

Gay and Lesbian Casual Sex Research

Research about hooking up in gay and lesbian populations is limited. Men who have sex with men tend to have open relationships that contribute to higher rates of casual sex. Casual sex is often initiated in venues similar to heterosexual hook-ups (e.g., bars).
However, casual sex can be identified as a single behavior under the umbrella of hooking up, a term that does not necessarily imply sexual intercourse. Therefore, more research is necessary to determine the prevalence of broader hook-up behaviors among men who have sex with men. Research on hook-ups among lesbians and women who have sex with women is even more lacking, and rates of hook-ups for this group are unknown.

C. Rebecca Oldham
Sylvia Niehuis
Texas Tech University

See Also: Contraception and the Sexual Revolution; Courtship; Dating; Emerging Adulthood; Gender Roles; Teen Alcohol and Drug Abuse.

Further Readings
Armstrong, Elizabeth A., Laura Hamilton, and Paula England. “Is Hooking Up Bad for Young Women?” Contexts (Summer 2010).
Bogle, Kathleen A. Hooking Up: Sex, Dating, and Relationships on Campus. New York: New York University Press, 2008.
Garcia, Justin R., Chris Reiber, Sean G. Massey, and Ann M. Merriweather. “Sexual Hookup Culture: A Review.” Review of General Psychology, v.16/2 (2012).

Household Appliances

The average American home of today is filled with a large number of appliances designed to assist in the performance of household tasks. Some of these appliances are considered essential for daily life (e.g., ovens, ranges or stoves, microwaves, refrigerators, dishwashers, and washing machines and dryers), while others are more discretionary (e.g., electric coffee grinders, waffle irons, and bread makers). Ownership and use of household appliances have been tied to the availability of and access to sources of energy that can power them. The diffusion of electricity across America in the first half of the 20th century resulted in the widespread use of a number of household appliances. In 1902, less than 10 percent of American homes had electricity generated by power stations; by 1948, more than three-quarters did. Urban homes were the first to be hooked up to the power grid; rural homes were electrified later.

Many of the appliances in U.S. homes were first used in commercial enterprises. As technological advances allowed the size and price of appliances to decrease, it became more feasible to use them in private homes. The use of household appliances is strongly tied to gender. Women have been—and continue to be—responsible for the majority of domestic labor, and have been the family members most likely to use appliances for cooking, cleaning, and laundry. While appliances are often referred to as “laborsaving devices,” some social science research has suggested that changes in American lifestyles (e.g., increases in the size of homes, in the size of the typical American’s wardrobe, and in standards of living) have increased the demand for household labor, more than offsetting the amount of time saved by the use of appliances. In recent decades, however, as more American women have joined the labor force, the amount of time that they spend in household labor has substantially declined, and more domestic work is outsourced to commercial enterprises, performed by hired help, or simply not done, regardless of the presence of appliances in the home.

Cooking

Meals are an enduring symbol of healthy American families. Before modern appliances were introduced into American kitchens, however, meal preparation and cleanup were time-consuming and arduous tasks. In colonial American homes, meals were cooked over an open fire, typically in a fireplace that occupied a large portion of the house. The primary source of fuel was firewood, which had to be cut and carried into the house. Open fires were dangerous, dirty, and difficult to control. The first wood stove was available in the early 1800s but was not accessible to rural families until the 1850s. Few plates or eating utensils were used; household members typically shared them. Washing dishes was thus not a primary task associated with meals.
As new sources of fuel became available, appliances associated with meal preparation changed. Feminist leaders of the 1800s argued that the first step in liberating women from household burdens would be to reduce the demands emanating from the kitchen. In the early 1900s, attempts to apply the principles of production in factories failed to eliminate kitchen burdens, but they did result in a redesign of kitchen layouts that made them more
efficient. Central to this new efficiency was the modernization of kitchen appliances. The typical stove of the late 1800s was made of cast iron or steel and fueled by wood or coal. This type of stove was large and required frequent refueling and manual removal of ashes. A solution to the problems posed by wood and coal stoves was the gas-heated stove, which provided fuel continuously and left no residue that had to be carried out. Gas stoves appeared in scientific expositions by the middle of the 19th century and had limited use in commercial kitchens, but it was not until the 1930s that they became popular appliances in American homes. One appeal of gas stoves was that they were considerably smaller than the older stoves, which required fuel boxes for wood or coal. Their aesthetic design also changed to one that is familiar today. Electric stoves were introduced in the 1930s and were quickly adopted by American families. Their appearance was quite similar to that of the modern gas stoves, but they did not have open flames. Gas stoves continue to be used alongside electric stoves.

The scientific discovery that made microwave cooking possible occurred shortly after World War II. Microwave ovens for home use were developed in the early 1950s; however, they were quite large and costly, so they did not become viable options for many American households at that time. The first countertop microwave oven was introduced in the late 1960s. As prices decreased in the 1970s, microwaves became one of the most common household appliances. Today, nearly all American homes use microwave ovens, which have continued to become smaller and less expensive over the past few decades. Microwavable food has proliferated, profoundly changing the nature of American meals. Cooking is not the only set of tasks affected by the availability of alternative sources of power and the invention of household technology.
Before iceboxes were adopted to keep perishable food cold enough to prevent spoilage, urban families shopped for food on a frequent, often daily, basis. Prior to the development of the processes associated with refrigeration, ice was harvested from frozen bodies of water and delivered by truck to urban neighborhoods; in cold weather, it was also possible to store some perishable items outside. Refrigeration was developed in the 1880s and was first used in commercial enterprises. However, the equipment
needed for the process of refrigeration was so large that it was impractical for home use. It was not until the 1920s that refrigerators appropriate for home use became available. Like the newly modernized stoves, they were covered in white enamel.

Housecleaning

As the size and decorating style of American homes changed over the years, the need for and use of appliances changed as well. Colonial homes were small, often consisting of just one room where all family activities were carried out. By the late 1880s, middle-class homes in urban areas were larger, heavily carpeted, and filled with furnishings that required dusting. Rugs were typically cleaned twice a year, when they were carried outside, hung over a line, and beaten to shake out the dirt. Carpet sweepers, manual devices that cleaned rugs by pushing the dirt across the floor, were among the first devices that assisted in the cleaning of floors. Electrically powered vacuum cleaners as they are known today were not invented until the early 1900s. Like other household appliances, vacuum cleaners went through various stages of development and had to be adapted for use in private homes after first being used in commercial enterprises. One of the most recently introduced appliances is a robotic device that can be programmed to vacuum and/or wash floors with little or no human effort or supervision.

A World War II–era promotional poster for refrigerator care. Refrigeration was developed in the 1880s, but refrigerators appropriate for home use were not available until the 1920s.

Laundry

In the colonial period, clothing was made of materials such as felt, leather, wool, linen, and alpaca, which could not be laundered; garments were instead brushed or shaken to remove dirt. When cotton replaced linen and wool as the fabric of choice, laundry became a major component of women’s labor in the home and soon became one of the most difficult and dreaded of all household tasks. Before electricity and indoor plumbing were introduced to American homes, laundry was an arduous, multistep process that involved building a fire (which meant firewood had to be cut and carried to the laundry site), heating a large kettle of water over the fire, making soap from fat and lye, stirring the boiling clothes with a large stick, rinsing them in another kettle of clean water, wringing water out of the clothes after they were removed from the kettle, and hanging clothes outside to dry. Women frequently experienced

skin damage from the lye involved in the production of soap and from hanging clothes outside in cold weather. It was not until later in the 1800s that laundry became a weekly task in most homes, one to which at least a full day of labor was devoted. It was still largely performed without the assistance of mechanical devices. More than 2,000 patents were filed for domestic washing machines in the 1800s, addressing various aspects of the clothes-washing process (such as tools to approximate the action of human knuckles in rubbing dirt from clothes, and mechanical devices to wring water from wet clothes). In 1869, a vertical-axis gyrator type of washing machine—which served as the basis for the washing machines developed for home use in the 1930s—was invented for commercial use. In the 1890s, commercial laundries began to be used
for “family washes.” The invention of small electric motors made it possible to manufacture and market domestic washing machines. Laundry was one of the household tasks outsourced whenever a family had enough extra money to do so; by 1900, most families had at least some laundry done by commercial services or hired “washer women.” Aggressive advertising of washing machines to individual households began in the 1920s, and by 1927 the Maytag Corporation had sold 1 million washing machines. The early electrically powered machines consisted of tubs equipped with revolving agitators that circulated soapy water through the fabric. When the agitation cycle was complete, the clothes had to be passed by hand through an attached wringer, and machines that were not permanently plumbed had to be filled and drained manually. Fully automated washing machines—which filled and drained water to and from the tub and spun clothes to reduce the amount of water left in them after rinsing—did not become available until the late 1930s. Households that purchased washing machines typically used them to replace hired laundresses. As a result, housewives took over the entire responsibility for the family’s laundry; although the physical labor of laundry was somewhat reduced, the actual time that housewives allocated to it increased. After World War II, the manufacture and marketing of domestic washing machines and other durable goods substantially increased, as the federal government subsidized the construction of highways and facilitated the growth of suburbs. Sales of domestic laundry appliances dramatically increased in the 1950s, an increase that has continued to the present.
Parallel developments in other steps of the laundry process (e.g., drying and pressing clothes), in the types of cleaning agents that were available (e.g., homemade soap composed of lye and animal fats versus the detergents developed during World War I), and in the types of fabrics used for clothing and household linens (e.g., the introduction of permanent press in 1964) were just as important in changing the amount and type of labor and technology associated with the family laundry. The first patent for an electric iron was granted in 1882.

Conclusion

Many household appliances still require a significant amount of auxiliary labor. Dishes must be
loaded into and removed from the dishwasher and returned to their storage place. Laundry must be sorted, carried to the laundry room, loaded into the machine, and then transferred from the washing machine to the dryer (or, in some cases, hung somewhere to dry); the clean clothes must then be folded or put on hangers and returned to the place where they are stored. Broken appliances must be fixed or replaced, which involves scheduling an appointment for a repair person to come to the home and waiting until that person arrives. Replacement of appliances requires shopping, delivery, and disposal of the units that are replaced. All of these machine-tending tasks involve the owners’ time, energy, and money. The “push button” home of the future—as envisioned in the mid-20th century—has not yet been realized.

Constance L. Shehan
University of Florida

See Also: Breadwinner-Homemaker Families; Family Consumption; Family Farms; Homemaker; Marital Division of Labor.

Further Readings
Cohen, Daniel. The Last Hundred Years: Household Technology. New York: M. Evans, 1982.
Cowan, Ruth Schwartz. More Work for Mother: The Ironies of Household Technology From the Open Hearth to the Microwave. New York: Basic Books, 1983.
Hardyment, Christina. From Mangle to Microwave: The Mechanization of Household Labor. Cambridge: Polity Press, 1988.
Lupton, Ellen. Mechanical Brides: Women and Machines From Home to Office. New York: Smithsonian Institute, 1993.

Housing Crisis

Since colonial times, America has faced several crises as a nation, including two great depressions, a long-lasting period of serious inflation, global economic crises, and a crisis of housing. While the United States has faced issues
related to housing over the years, the country’s housing crisis is commonly understood to refer to the economic failure of the housing market beginning in 2006. The housing crisis affected, and continues to affect, residents who experience the traumatic loss of a home, usually through foreclosure, as well as drastically reduced property values. This article presents the history of the crisis, its effects, and steps to assist those affected by it.

Congress’s Housing Act of 1949 specified a goal of decent housing and a suitable living environment for every American family. While this goal has not been met, owning a home remains a fundamental dream for many families in the nation, and a number of local and federal policies have been implemented to make this dream a reality. In 1994, President Bill Clinton attempted to aid in this effort by proposing a large alteration of federal housing programs: 60 programs under the Department of Housing and Urban Development were condensed in his rewriting of the Community Reinvestment Act. This created more lax housing rules and placed pressure on banks to increase lending to low-income applicants. Less than a decade later, the prices of houses on the market began to increase exponentially; between 2002 and 2006, home prices increased by 64 percent. From 2006 to 2009, home sales dropped 36 percent and the construction of new homes fell by 75 percent. The effects of this crisis continue today.

In response to the housing crisis, the administration of President Barack Obama sought proposals from potential homebuyers in the hope of encouraging families who rented their living space to purchase homes. The strategy rested on the idea that millions of families were paying more in rent than the mortgage payment on a comparable property.
For example, in 2009, 3 million families with children had an annual household income of $30,000 and paid $800 each month in rent; an $800 monthly rent is roughly equivalent to the mortgage payment on a $115,000 home. The Obama administration also implemented valuable home refinance and loan modification programs.

Many families have been affected by the housing crisis in America. They not only lose their homes but also struggle to secure new housing. In addition, these families often experience disenfranchised grief. Disenfranchised grief

denotes feelings of grief that accompany the loss of a home. Such grief has tended to be socially unrecognized, as most individuals do not anticipate grieving for a home; nevertheless, it has been a common experience among homeowners losing their homes in the housing crisis. For this reason, it is important for families who have lost their homes to manage all the emotions that the loss presents, including grief and the temptation to blame themselves or family members.

The housing crisis has had a particularly negative impact on single-mother families, who are already at greater risk for unstable employment, financial instability, and living in poverty. Currently, these families represent approximately 84 percent of homeless families in the United States. When they lose their homes, single-mother families experience discrimination from apartment owners, making renting difficult. Single mothers who have been victims of domestic violence find securing housing especially challenging. The Violence Against Women and Department of Justice Reauthorization Act of 2005 has attempted to assist by protecting women, men, and family members who are qualified tenants from injustices in obtaining housing.

Several approaches aimed at resolving the housing crisis in America have been suggested. These include homeowner vouchers to cover the sustainable costs of homes, firms or nonprofit organizations buying homes in their communities so that low-income families can purchase them, and open access to classes on financial literacy. Families affected by the housing crisis can benefit from seeking free counseling at organizations that help in the process of foreclosure recovery. It can also be useful to develop a budget to help with financial planning, along with a detailed strategy to repair damaged credit scores. Such a strategy should include specific amounts and timetables for paying down all debts.
Of utmost importance, however, is a commitment within the family to work as a team and rebuild.

Amber N. Hearn
Winetta Oloo
Loma Linda University

See Also: Homelessness; Housing Policy; Poverty and Poor Families; Poverty Line.

Further Readings
Board of Governors of the Federal Reserve System. “Community Reinvestment Act.” http://www.federalreserve.gov/communitydev/cra_about.htm (Accessed July 2013).
Chernick, Howard, Adam Langley, and Andrew Reschovsky. “The Impact of the Great Recession and the Housing Crisis on the Financing of America’s Largest Cities.” Regional Science and Urban Economics, v.41 (2011).
Cherry, Robert and Robert I. Lerman. “How the Government Can Solve Housing Crisis.” Urban Institute. http://www.urban.org/publications/901462.html (Accessed July 2013).

Housing Policy

Housing policy in the United States is made up of an amalgam of federal, state, and local policies that shape housing. Housing can be described as single-family homes, condominiums, apartment buildings, duplexes, four-family units, or literally anywhere a person or family resides. There is substantial variation among policies, both direct and indirect. An example of direct housing policy is zoning regulation that designates areas as residential, commercial, or industrial. Normally, housing for families would be considered residential; however, some homes are located in areas where businesses have grown up around them. An example of indirect housing policy is the construction of transportation networks, which expands the range of places where people can live and still travel to their jobs. Another example of indirect housing policy is mortgage lending law, which affects who can own houses by determining who can obtain a mortgage loan.

What is and is not housing policy in the United States is open to some debate: should housing policy be interpreted narrowly or broadly? Should the millions of incarcerated Americans be considered? Jail is where they live, but most people do not think of it as housing. Is the foster care system a part of housing policy? For many youth, where they live is determined by intervening institutions. The narrow interpretation of housing policy is used here, but many settings affect housing, from mental
institutions and military barracks to the ghettos and the mansions that make up housing in the United States.

Property Taxes

Property taxes directly and indirectly affect housing. The direct impact comes from creating a cost of ownership, a cost passed on to renters. Property taxes also affect housing to the extent that certain things are not taxed, such as buildings that house nonprofit organizations and churches, which thus do not compete with housing on a level playing field. Conversely, industrial and commercial properties can be taxed at a higher rate, effectively subsidizing housing. The balance between these taxation rates is an important housing policy control. In addition, the services funded by property taxes, such as police, fire, schools, and waste management, help to define a home’s worth: the higher the taxes, the better the services, which in turn leads to a higher-valued home.

In many states, such as Wisconsin, property taxes pay for public schools, which affects housing by creating demand to live in certain districts over others. This creates conditions in which house values can jump substantially simply for being located across a street, and thus in another district. The strength of this effect depends partly on the rate of taxation, as well as on the quality of the schools and any busing or integration efforts. A curious side effect of this part of housing policy is that, because there are multiple districts, the tax per house does not usually scale linearly with the cost of the house. Within each district the tax rate is usually consistent; however, because the cost per student usually ranges between $7,500 and $15,000, the percentage tax, and sometimes the actual tax, for owners of high-end homes can be lower than in less wealthy neighborhoods. This is because high-end homes can cost 10 times as much as more modest housing, while per-student costs will normally only double or triple.
Impacts on Housing Policy

Education policy has indirectly affected housing policy in multiple ways. One way is through desegregation efforts following the U.S. Supreme Court decision Brown v. Board of Education (1954), which convinced many states of the need to advance educational equality in the face of federal pressure. Conversely, because educational attainment is a primary driver of lifetime income and wealth, educational
inequality perpetuates housing inequality. Higher education institutions often purchase the land around their campuses to create a buffer zone; state schools can sometimes use eminent domain to seize surrounding land and buildings for this purpose. The schools sometimes rent out the properties to students, becoming their landlords. This gives the schools more power than they otherwise would have, both as schools and as landlords. This is not necessarily a bad thing, as it allows the schools to police their environment and support their local communities.

Law enforcement policy indirectly affects housing policy in many ways. A lack of good police can deter people from living in an area, and a presence of bad police can similarly deter them. Law enforcement policy is similar to education in that it supports demand to live in an area but also creates taxation that can drive people away. Private property rights are essential to the American legal system and are an important part of housing policy. Some states legally enshrine the home as a special place; this practice, called the Castle Doctrine or Stand Your Ground law, allows people more leeway in how they choose to defend themselves while in their homes.

Demand for housing in an area can drive up its cost. Governments in the United States often try to control the cost of housing through various housing policy mechanisms; New York City’s rent-controlled apartments are a well-known example. Many cities practice a form of subsidized housing, sometimes targeted at specific populations through means-tested programs. Zoning is another form of housing policy. It can be used to restrict housing to certain areas. Good zoning protects people from dangers such as pollution; bad zoning can restrict legitimate housing and thus drive up costs. Zoning can also be used to control the size and density of housing.
Homelessness can also be understood as an outcome of housing policy. Functionally, it is the policy of some parts of the country to allow people to be homeless. There are government interventions and policies that can eliminate homelessness, but whether they are pursued is a funding and political issue. Some housing policy efforts to address the issue do not depend on state funds, such as leaving the issue to nonprofit shelters. This is commonly seen in the United States as state and local governments face budget

deficits. The political reality is that homelessness is not often a top priority of those making housing policy. One cause of homelessness is foreclosure on homes by banks; others are joblessness, poverty, and mental illness. All of these are serious societal concerns.

An important indirect policy is the legal minimum on parking spots. Some cities are experimenting with lowering or eliminating these requirements, because in most cases parking spots take up space that could be used for housing. Eliminating them can create denser housing, allowing more people to live in a given area.

Housing policies compete with other priorities. There are right-of-way laws, for example, for airplanes around airports, and there are competing uses for land. A commonly seen confrontation of priorities is in downtown waterfront renovations, where competing interests want different outcomes. Similarly, efforts at historic preservation can conflict with efforts to expand housing through new construction, and parks and nature preserves restrict where housing can be built. Restrictions on building heights are common in capitals, such as Madison, Wisconsin, and Washington, D.C.

Housing policy is the product of competing priorities. Locally, there is tension among homeowners, renters, landlords, and the construction industry. These tensions scale up to the state and federal levels through associations and lobbying groups. Governmental agencies have similar tensions, such as budgetary constraints, and tensions exist between housing and nonhousing goals for these agencies. Even among housing goals, tensions extend beyond limited budgets: efforts to promote homeownership naturally conflict with the interests of landlords and with efforts to promote denser housing. This can create policies that operate at cross-purposes, but that is also a result of distributed policymaking across varied jurisdictions.
Controls
One partial solution to these competing priorities is dual-use zoning, such as storefronts on the first level and housing on the remaining levels. This was more common in the 1800s and early 1900s, but it is being revived by movements such as new urbanism. Another option is simply to relax zoning restrictions entirely, favoring a more free-market approach. Another solution to these competing priorities is



through national nonprofits that seek to establish better policies throughout the nation. Though nonprofits can be just another entity seeking to influence policy, they can also foster knowledge sharing and the spread of best practices in housing policy nationwide. They are among the few forces smoothing out housing policy across the country. In the United States, one measure of a person's wealth has historically been the size and location of his or her home. Traditionally, the bigger the home, the wealthier the person. There was a clear divide between the wealthy and the poor based on the location of homes, with expressions such as “the wrong side of the tracks” indicating that someone came from the wrong part of town. One practice that has been outlawed is “redlining,” in which lenders marked distinct areas on a map and restricted access to loans there, creating racial segregation. Housing policies in the United States impose a number of controls to maintain a basic level of quality in housing. These standards are enforced both passively, through policies such as building codes, and actively, through inspectors who enforce those codes. Most municipalities require building permits for remodeling, new construction, and additions to existing structures. These policies help to ensure the safety of residents and their neighbors. Homeownership is a subset of housing policy, and not all people are or aspire to be homeowners. The early 2000s saw a push to increase homeownership, but this created a bubble that burst with the crash of the late 2000s. Although it may not immediately strike someone as a part of housing policy, the Federal Reserve controls the interest rate, a strong determinant of how many people can afford a loan for a house. Time lags are a major concern in housing policy. Major construction can take years to finish, and might not be paid off for years after that.
If the new construction fails to attract demand, it might end up vacant. This issue can affect both private and public finances, and governments can end up having built roads into subdivisions that remain unused. Balancing expansion is thus a tricky matter involving imperfect predictions of the future. What to do with these constructions is a further concern, whether they are merely budgetary holes, represent excess capacity, or lead to blighted communities. Detroit is a good example of changing

demand on areas: although the metropolitan area's population has stayed relatively steady since 1970, the distribution within that area has changed drastically. Federal housing policy has gone through major changes in recent decades. Throughout the 1960s and 1970s, federal housing policy made major advances toward racial and socioeconomic equality in housing, backed by similar state and local policies. The Americans with Disabilities Act (ADA) is one such example of federal housing policy. The ADA requires that people with disabilities be afforded the same opportunities as people without disabilities. The ADA also requires that accommodations be made to homes to make living easier for people with disabilities. Housing policy helps to maintain the integrity of what a society likes to think of as a home: a safe place for people to reside in and to enjoy life. Like moms and apple pie, homes are important to one's way of life, and homes are part of the “dream.” Housing policies help to ensure safety and equity for everyone who is fortunate enough to have a home.

Janice Elizabeth Jones
Cardinal Stritch University
Evan Emmett Diehnelt
University of Wisconsin–Madison

See Also: Family Housing; Home Mortgage Deduction; Homelessness; Housing Crisis.

Further Readings
Erickson, David James. The Housing Policy Revolution: Networks and Neighborhoods. Washington, DC: Urban Institute Press, 2009.
Husock, Howard. America's Trillion-Dollar Housing Mistake: The Failure of American Housing Policy. Chicago: Ivan R. Dee, 2003.
Schwartz, Alex F. Housing Policy in the United States. New York: Routledge, 2006.

Human Genome Project

The discovery of the nature of DNA revealed to scientists that it would be possible, albeit complex and time-consuming, to identify the different types


of DNA within the genes of a living organism. The genes are the chemical substances that give structure and organization to living matter. They reside in the nucleus of each cell and determine how proteins and the cells develop. Genes are made unique by the distribution of the four chemical bases (abbreviated as A, C, G, and T) that are repeated numerous times within each gene. There are thought to be around 20,500 genes in the human body (although this number varies as new research results are published), and the genome, which is the total of all genetic material, contains some 3 billion pairs of the chemical bases. Since many medical conditions are caused by issues within specific genes, the ability to treat these conditions will begin with understanding genes and their specific functions. The Human Genome Project (HGP) was an attempt to map all the different base pairs of the genes within the human body. That goal has been achieved, after 13 of the scheduled 15 years’ effort, involving the average expenditure of some $200 million per year, and new research goals have subsequently been set. Although research into the human genome has been international in scope and involved collaboration between large numbers of universities and research agencies, a central part of the HGP took place in the United States as a partnership between the Department of Energy (DOE) and the National Institutes of Health (NIH). It began in 1990, and technological advances enabled it to be completed in 2003, ahead of the scheduled completion date of 2005. 
The partnership had operational objectives in the following areas: (1) identifying all the genes in human DNA; (2) identifying and sequencing the 3 billion chemical base pairs; (3) storing the information obtained in suitable formats; (4) improving the quality of research tools to improve analytical ability; (5) creating means by which the fruits of the research could be transferred to the private sector; and (6) addressing the various ethical, legal, and social issues (ELSI) created by the previous activities. Conceived during the Reagan administration (1981–89), the HGP was, in addition to its scientific goals, part of a neoliberal ideology that sought to capture the economic benefits available by converting something that previously belonged to everybody (i.e., the mysteries of the human body) and codifying it in a form that could be considered private

property, and hence, belonging to the few. This process has become a central element of the ELSI, alongside the interaction between the ability, or potential ability, to alter the human body through medical procedures and the right to do so. The involvement of the DOE in the original partnership resulted from the role of its predecessor in the development of nuclear weapons and the consequent need to understand the impact of various types of radiation on the human body, and what might be done to mitigate that impact.

Applications of Human Genome Technology
An undertaking such as the HGP, with its numerous contributors and stakeholders aimed at generating and recording knowledge over the course of many years, is an indication of how much better the public sector works in such cases than the private sector, which ultimately benefits. Similar work that has taken place internationally has done so under a variety of relationships among universities and the public and private sectors, some of which have been more successful in returning tax revenue and other benefits to the government. However, HGP applications are more commonly measured in the field of medical science. Assessments of the changes that have become possible, and will become possible in the future, are usually made on the basis that no insuperable technical or engineering issues prevent the envisaged benefits from being realized. This can lead to some overdramatic predictions of large-scale change. In reality, changes tend to take longer to materialize because of operational issues that may not be easily visible to outsiders. Nevertheless, the benefits of the HGP appear startling in size and scope. In the first place, the diseases and conditions resulting from genetic issues, which previously could not be directly treated, can now be tackled on a systematic basis.
As it becomes possible to identify the specific molecular pathways that are disrupted by a particular disease, it will be possible to create designer drugs that target these issues. It is assumed that as knowledge of how to create and implement these medical innovations deepens in the pharmaceutical industry and its research partners, the number of conditions that can be treated will increase, as will the severity of the conditions that can be addressed. A second area involving the ability to understand the proclivity of certain individuals to suffer from




particular nongenetic medical conditions will also be enhanced. It is likely, for example, that some genetic traits predispose some individuals to obesity, which might then result in higher incidences of heart disease and cancers. Uncovering the links between specific genetic configurations and predisposition to conditions will be a complex and long-term process that may offer only partial information in particular cases, but it may revolutionize diagnosis, surveillance, and therapy for a wide range of issues. Using this information would raise a range of significant ethical issues. The third area of change may in time become the most significant of all: the development of extensive, conveniently accessible databases will enable local and community practitioners to be linked with a wealth of useful data that was simply not available before. Such practitioners have the ability to become specialists in genetic medicine to an extent that would not otherwise have been conceivable.

ELSI: Ethical, Legal, and Social Issues
Genome sequencing technology has developed to the extent that mapping the DNA of a specific individual can now be completed for around $5,000, compared to the $2.7 billion it cost at the birth of the technique. This has brought having one's genome sequenced within the realistic ambitions of many millions of people. If the reduction in costs continues at such a rate, it is not difficult to imagine a situation in which it becomes standard practice to sequence people's genomes, perhaps at birth, in developed countries, and even for this to become a mandatory procedure for application to certain forms of employment. At the moment, sequencing is mostly restricted to those who believe that they may be susceptible to a particular form of genetic disorder and wish to be forewarned about any risks.
The international film actress Angelina Jolie voluntarily underwent a preventive double mastectomy upon learning that she had an elevated risk of breast and ovarian cancer. This high-profile incident inspired a great deal of public debate about whether people should have access to such information and the extent to which they should use it to make decisions about medical care for themselves or their dependents, and more pointedly, the degree to which external organizations should be able to obtain access to the data and use it to determine health insurance premiums, mortgage payments,

The Wellcome Collection in London houses an impressive printed version of the human genome, for which an entire bookcase was needed. To the right of the bookcase, some fur from the cloned sheep Dolly is displayed.

employment, or scholarship results. In the United States, this debate has played out in the context of ongoing revelations about the extent to which the state has been collecting and monitoring the private communications of individuals. Because large data-holding companies such as Google, Facebook, and Yahoo have given the government access to data on private individuals on an unprecedented scale (although the companies continue to issue denials of impropriety), it appears that people must recognize that the confidentiality of any information, including medical records, has been significantly compromised. One of the central ethical issues in this case has been based on the supposition that it will one day be possible to predict and manipulate the genetic configuration of children. There are two principal elements to the arguments, which are far from resolved. The first concerns the situation in which the child faces a potentially low quality of life as a result of a genetic disorder; if this is the case, should


the parent or parents consider an abortion to prevent future suffering? Emotions run high when it comes to abortion and related human rights issues, and people divide into antagonistic camps between which little constructive discourse is likely to take place. If the procedure is considered acceptable at all, it opens questions about other grounds on which abortions may be permitted. This debate goes ahead in the knowledge that, in China and India in particular, abortion of unwanted daughters has led to tens of millions fewer girls being born. The second element of the argument concerns the ability to change the genetic configuration of children for what is considered the better. This ability, insofar as it will ever exist, will at first be restricted to the wealthy, and is likely to be linked to the science of eugenics and the many distasteful episodes of its past. The children of the rich would therefore not only be healthier than children born into other circumstances, but would also be likely to have additional benefits such as higher IQ, greater capability for physical activities, and more physical beauty. It might also be possible for successful rich people with the physical characteristics of a group that faces discrimination in society to engineer their children so that the distinctive characteristics are removed. A host of controversies could derive from such possibilities. The implications would spread much further than the individuals directly concerned, because such practices might lead to the codification of the superiority and inferiority of certain types of people in common practice, and thereby entrench discrimination.
IPR in the case of the human body converts people into things that can be owned and controlled, which is another way of describing slavery. Because the HGP has revealed an unexpectedly large amount of diversity in humanity's genetic material (and few people have so far been

investigated), it is quite possible for some people to have valuable variations naturally occurring in their bodies. If such a variation were captured and marketed, the issue arises of whether the affected part of the body in other people now belongs to the corporation, or what level of rights the corporation would hold.

John Walsh
Shinawatra University

See Also: Evolutionary Theories; Genealogy and Family Trees; Genetics and Heredity; Health of American Families.

Further Readings
Cadwalladr, Carole. “What Happened When I Had My Genome Sequenced.” The Guardian (June 13, 2013). http://www.theguardian.com/science/2013/jun/08/genome-sequenced (Accessed December 2013).
Collins, Francis S. and Victor A. McKusick. “Implications of the Human Genome Project for Medical Science.” Journal of the American Medical Association, v.285/5 (February 7, 2001).
Dickenson, Donna L. Property in the Body: Feminist Perspectives. Cambridge: Cambridge University Press, 2007.
Greenwald, Glenn and Ewen MacAskill. “NSA Prism Program Taps in to User Data of Apple, Google, and Others.” The Guardian (June 7, 2013). http://www.theguardian.com/world/2013/jun/06/us-tech-giants-nsa-data (Accessed December 2013).
Happe, Kelly E. The Material Gene: Gender, Race, and Heredity After the Human Genome Project. New York: New York University Press, 2013.
Human Genome Information Archive, 1990–2003. http://web.ornl.gov/sci/techresources/Human_Genome/hg5yp/index.shtml (Accessed December 2013).
Jolie, Angelina. “My Medical Choice.” New York Times (May 14, 2013). http://www.nytimes.com/2013/05/14/opinion/my-medical-choice.html (Accessed December 2013).
Sawicki, Mark P., Ghassan Samara, Michael Hurwitz, and Edward Passaru Jr. “Human Genome Project.” American Journal of Surgery, v.165/2 (February 1993).

I

Immigrant Families

Immigrant families make up a considerable proportion of the U.S. population. There is enormous variation among immigrant families in terms of country of origin; family members' nativity status (such as U.S. versus foreign-born); immigrant generational status (first, second, or third generation); legal immigrant status (such as citizen, legal permanent resident, temporary migrant, or undocumented immigrant); and acculturation level. According to the Census Bureau, the U.S. foreign-born population has undergone dramatic changes in the last 50 years. Whereas about 50 years ago only one in 20 individuals was born outside the country, now one out of every eight people is foreign-born. Furthermore, in contrast to earlier waves of immigrants who came from Europe, more recent immigrants hail from Latin and South America and Asia. Immigrant families may also differ as a function of family members' pre- and postmigration experiences and reasons for immigration. For example, families consisting of highly educated and wealthy parents and their children who leave their home countries behind because of political or religious persecution will have different life experiences and life chances in the United States than families made up of poor parents with limited education who enter the United States in search of economic opportunities. Additionally, immigrant families

vary with regard to residential and geographic location. Although many immigrant families settle in ethnic enclaves (i.e., neighborhoods with high concentrations of immigrants), many families choose to live outside predominantly immigrant communities. Family experiences also differ depending on whether an immigrant family resides in an urban, suburban, or rural area. Historically, immigrants gravitated toward immigrant gateway cities such as New York, Los Angeles, or Miami. Recently, however, more and more immigrant families opt to settle outside these traditional gateway communities.

Country of Origin
Early immigrant families hailed primarily from European countries such as Italy, Ireland, Hungary, Russia, or Greece. Recent decades, however, have seen a shift in immigrants' region of origin. According to recent census estimates, the majority of the foreign-born population, which makes up 13 percent of the U.S. population, are immigrants from Latin or South America, Asia, and the Caribbean. Over half of immigrants hail from Latin America and the Caribbean (9 percent); almost one-third of immigrants come from Asian countries such as India and China, and another 12 percent are from Europe. Only 4 percent of immigrants were born in Africa. More than 50 percent of immigrants from Latin America are of Mexican origin. This variation in immigrants' region of origin manifests in


substantial linguistic, cultural, and ethnic diversity across immigrant families.

Nativity Status
Immigrant families also vary greatly in terms of family members' nativity status, that is, whether family members were born in (native-born) or outside the United States (foreign-born). In some families, every family member (e.g., parents and all children) is foreign-born; in others, some family members were born abroad, while others are U.S.-born. Family members' nativity status matters because it has implications for individuals' legal immigrant status, which in turn shapes the life chances (i.e., the opportunities individuals have to live a good life) of each family member separately and of the family as a whole. Nativity status also signifies families' exposure to multiple sets of cultural value systems. In a family where the parents were born and raised outside the United States, children will be powerfully shaped by the values and traditions that their parents internalized in their culture of origin, even if these children are born in the United States.

Immigrant Generational Status
Immigrant generational status is determined based on one's nativity status and age at immigration to the United States. Thus, a foreign-born individual who immigrates to the United States as an adult is considered to be a first-generation immigrant. U.S.-born individuals with at least one foreign-born parent are called second-generation immigrants. U.S.-born individuals of U.S.-born parents who have foreign-born grandparents represent third-generation immigrants. Immigrant generational status has implications for family life and children's development because it contributes to variation in family members' legal status, language proficiency and preference, cultural values and traditions, socialization experiences and goals, and gender and family role expectations.
Furthermore, it has long-term consequences for family members’ economic success, and in turn physical and cognitive development, as well as social-emotional well-being. Recent census estimates indicate that approximately two-thirds of foreign-born immigrants moved to this country in the last two decades (1990 or later). More than one-third of individuals born outside the United

States arrived in America in the last decade. Furthermore, about 7 million, or 17 percent, of foreign-born immigrants entered the United States after 2005. The census refers to this subgroup of the foreign-born immigrant population as “newly arrived immigrants.” Finally, another significant subgroup of the immigrant population is made up of what social scientists refer to as the “one-and-a-half generation” immigrants. These include foreign-born children who moved to the United States with their parents at a young age.

Legal Immigrant Status
There is great variation within and across immigrant families with regard to immigrant family members' legal status. In some families, every member is a U.S. citizen, whereas in others every member is undocumented (i.e., does not hold a legal status). A rising number of immigrant families are considered mixed-status families. Mixed-status immigrant families consist of family members of varying legal status. For example, a mixed-status family may be made up of one legal permanent resident mother, an undocumented father, a U.S.-born citizen child, and an undocumented foreign-born child. The life experiences of mixed-status families vary depending on whether undocumented members of the family have the option to obtain legal status. In families where undocumented family members cannot attain legal status, parents and children experience considerable stress as they live in fear of deportation and without access to resources available to legal immigrants and citizens (e.g., driver's licenses, college loans, employment protection, and social security benefits).

Pre- and Postmigration Experiences
Immigrant families also differ in terms of family members' experiences prior to and after immigration. Families whose members are highly educated, speak English well, have financial means, and have been exposed to Western cultural systems will have an easier time adjusting to life in the United States and succeeding economically.
Families in which parents are poor, uneducated, and low-skilled, lack English language skills, and have not had exposure to Western values and practices will have a more difficult time adapting to the American way of life. In addition, the type of reception immigrant families receive on their arrival in the




A family demonstrates together during a rally for immigration reform held in San Francisco in October 2013. For families in which undocumented members cannot attain legal status, parents and children experience considerable stress as they live in fear of deportation. They also suffer the consequences of a life without access to resources available to legal immigrants and citizens, such as college loans, employment protection, and social security benefits.

United States represents an important source of advantage or disadvantage for their long-term adjustment and success. Families who arrive in the United States in an era of anti-immigrant sentiment or economic downturn face more challenges than families who enter the United States at a time of economic prosperity, when employment opportunities abound and when social acceptance of immigrants is widespread.

Reasons for Immigration
In understanding immigrant families, including their success or hardships, one must take into account the forces that propelled them to leave their home countries (also known as push factors) and the forces that attracted them to the United States (also known as pull factors). Push factors of immigration include poverty, warfare, political instability, widespread crime, natural disasters, religious or political persecution, and lack of educational opportunities. Pull factors include economic and educational opportunities, political stability, freedom of religion, freedom of speech, and family already in the United States. Families who leave behind a war-torn country or who suffered because of their religious beliefs will embrace the U.S. culture in a different way than those who arrive in search of economic opportunities only to find out that the economic downturn and institutional discrimination undermine their life chances.

Residential and Geographic Location
Unlike earlier immigrant families, who settled in the Midwest and Northeast upon their arrival in the United States, recent families of immigrant background have increasingly moved to the southern and western regions of the country in the last two decades. Additionally, there has been a considerable shift away from traditional gateway cities such as New York or Miami to communities that had not previously seen a large influx of immigrants. Traditional gateway cities typically have large


ethnic enclaves that make adjustment to the new country easier for newly arrived immigrant families. As such, ethnic enclaves represent a considerable source of social, cultural, and even economic capital for immigrant families. In ethnic enclaves, newly arriving immigrants are not confronted with a language or cultural barrier and often find employment without having to speak English or understand the U.S. culture. However, ethnic enclaves may be somewhat constraining in the long run, in that opportunities for economic advancement and professional development may be limited. Because immigrants may not have the opportunity to develop their English skills or to be exposed to the mainstream culture, they may face significant challenges when they try to relocate and succeed outside the ethnic enclaves. On the other hand, communities outside traditional immigrant gateway cities may not be well prepared to welcome immigrant families. Living outside traditional immigrant gateway communities may present immigrant families with several challenges. For instance, they may not be easily accepted into the community; they may have very limited social capital; and it may be difficult for them to build a strong social support system. Finally, they may not find social support programs that cater to their needs or that are culturally appropriate.

Acculturation
Acculturation is the process by which immigrants adapt socially and culturally to the host country. This adaptation involves negotiating the norms of the host culture and the country of origin. Acculturation occurs over time, during which immigrants can experience acculturative stress, that is, psychological distress associated with the process of reconciling and integrating two or more social and cultural systems.
Part of the acculturative process involves developing or adapting one’s ethnic identity in consideration of the host culture and one’s culture of origin; understanding culture-specific meanings; meeting behavioral expectations in culturally appropriate ways; and coping with prejudice and discrimination. For immigrant families who come from countries with a different social stratification system, the process of acculturation also involves understanding the U.S. racial hierarchy, finding their place in it, and developing coping strategies for themselves and their children to survive and thrive in an environment that often

disadvantages ethnic/racial minorities. Acculturation may challenge family life because family members may not acculturate simultaneously or at the same rate. Youth and children often acculturate at a faster pace than their foreign-born parents because they learn English more easily and rapidly and because they interact with the mainstream cultural system more frequently than their parents. School attendance in that regard facilitates immigrant children's acculturation. Differential rates of acculturation can also lead to intergenerational conflict and culture clash between parents and children. Children may resist parental attempts at socialization because they may perceive the values of their parents' culture of origin to be in stark contrast to the cultural value system of the U.S. mainstream. Consequently, children may not see the utility in behaving according to their parents' cultural norms.

Children in Immigrant Families
Children who have at least one foreign-born parent are considered to be of immigrant origin. The Annie E. Casey Foundation estimated that in 2011, 24 percent of U.S. children under 18 were either born outside the country or resided with at least one foreign-born parent. According to the U.S. Census Bureau, young children from Latin America, the Caribbean, and Asia make up over 80 percent of immigrant children. Immigrant children face several challenges associated with their immigrant background. They must develop an ethnic identity that allows them to thrive in the U.S. mainstream culture. At the same time, their ethnic identity needs to acknowledge their immigrant background and facilitate their relationships with parents, extended family, and people who share their background. Research has shown that a positive ethnic identity can protect immigrant children from the deleterious effects of prejudice, discrimination, and negative stereotypes.
Because of their parents' limited English skills, immigrant children often function as interpreters, frequently in official matters, a practice known as language brokering. When children take on the language broker role, a power imbalance in the parent-child relationship may ensue, leading to parent-child conflicts. Despite these challenges, immigrant children, particularly first- and second-generation children, tend to do better than subsequent immigrant generations (a pattern known as the immigrant paradox). Immigrant children who develop the

ability to function effectively in both the U.S. mainstream culture and their parents’ culture of origin are said to have bicultural competence. Bicultural competence has been found to benefit immigrant children’s development. Annamaria Csizmadia University of Connecticut See Also: Acculturation; Ethnic Enclaves; Immigration Policy; Language Brokers. Further Readings Lofquist, Daphne, Terry Lugalia, Martin O’Connell, and Sarah Fellz. “Households and Families: 2010.” 2010 Census Briefs (April 2012). Mather, M. “Children in Immigrant Families Chart New Path.” Reports on America (2009). http://www.aecf .org/KnowledgeCenter/Publications.aspx?pubguid _{11F178AD-66BF-474E%E2%80%9384B2-2B7E93 A8877F (Accessed December 2013). Sirin, Selcuk R., Patricia Ryce, Taveeshi Gupta, and Lauren Rogers-Sirin. “The Role of Acculturative Stress on Mental Health Symptoms for Immigrant Adolescents: A Longitudinal Investigation.” Developmental Psychology, v.49 (2013). Walters, Nathan P. and Edward N. Trevelyan. “The Newly Arrived Foreign-Born Population of the United States: 2010.” American Community Survey Briefs (November 2011).

Immigration Policy

The first federal law specifically regulating immigration, the Page Law, was enacted in 1875; however, before 1875, the states and the federal government created policies and practices that both induced and dissuaded immigration. These policies and practices affected families, or prospective families, differently based on race, class, gender, age, ability, and the caprices of immigration inspectors.

The 1808 Act Prohibiting Importation of Slaves sharpened the greatest divide between American families and represented a notable exception to the states’ dominance before 1875. Although illegal importation continued into the 1850s, the 1808 act ended official U.S. participation in the international slave trade and shifted the slaveowners’ focus from young men in Africa to the women they already owned in America as the source of future slaves. Though technically not an immigration policy because slaves, as property, were migrated or imported, the 1808 act widened the divide between families whose children could be citizens and families whose children could be property.

Between 1808 and 1875, Congress complemented state regulation of immigration. During these years, federal policies aimed mainly at attracting a particular class of immigrants by setting limits on the number of passengers a ship could carry—indirectly raising the cost of tickets—and by attractive land policies promising a plot suitable for a family farm. Although federal policies, at least indirectly, attracted family immigration, state regulations and practices placed qualitative limits on immigration that attempted to keep paupers and morally dangerous immigrants from threatening the well-being of American families. The state-based regulations discriminated against blacks, convicts, and those immigrants “likely to become public charges” (LPC). Perhaps more than any other designation, the classification of immigrants as LPC that began with state laws has shaped families’ encounters with immigration policy.

State regulations against paupers targeted women and children, both at the border and after their entry. Many prospective immigrant women, particularly Jewish and Irish women, had marketable skills that helped support themselves and their families. Yet, women without guaranteed jobs or established family connections—to pay bonds and influence inspectors—could easily be turned away as a threat to state poor rolls. To protect these funds, unwed immigrant mothers and their children also faced the threat of deportation, as an 1855 Massachusetts case illustrated, even when the mother had worked to support herself and the child was a jus soli American citizen. Such regulations meant that women and children especially benefited from extended family household formations that could lobby for their entry and ensure they remained in the country.

Following the Civil War, the federal government expanded and exerted its power over the states. The 1875 Page Law, the first federal restriction against immigration, barred convicts, contract laborers, and prostitutes. While the first two categories went unenforced, immigration inspectors in California used the Page Law to investigate every female Chinese immigrant. This application proved so successful that it helped to create a bachelor-dominated Chinese American community, though one shaped by class, since merchants retained treaty rights to migrate their wives and children. After a 1902 court decision, jus soli Chinese American citizens could also migrate their wives and children. Following the 1906 San Francisco earthquake and fire, this practice helped create “paper families,” in which Chinese American men claimed both their own citizenship and to have fathered jus sanguinis citizen children in China.

General federal immigration regulations also expanded. The 1882 Immigration Act barred convicts, lunatics, idiots, and LPC migrants. In 1885, Congress created a paradox for immigrants by barring all contract labor, meaning that migrants had to demonstrate they were not LPC without indicating they had a job waiting for them. This paradox increased the preference for able-bodied young men or breadwinners, making women and children more dependent on nuclear family ties. Boards of Special Inquiry further barred and deported self-sufficient men because of “feminism” and also excluded men and women because of a “lack of sexual development” and hermaphroditism. Turn-of-the-century immigration policy attempted to create a eugenic nation by selecting immigrants already in or capable of forming economically, physically, and morally sustainable American families.

Policy in the Twentieth Century
The Expatriation Act of 1907 and inspectors’ application of the literacy test requirement in the Immigration Act of 1917 demonstrated the evolving patriarchal character of immigration policy.
Although immigrant women’s citizenship had been tied to their husbands’ citizenship since 1855, the 1907 Expatriation Act extended that logic to the families of immigrant men and to American-born women, who lost their citizenship if they married noncitizens; and, though the 1922 Cable Act ameliorated the Expatriation Act’s extremes, the patriarchal logic remained. Furthermore, the literacy test, introduced in 1917 as part of a move from regulation to restriction, applied only to men—or to women and children migrating outside family groups. If a wife and child—or even elderly parents—migrated with or to a literate man, then they were exempt from the test. If an immigrant wife could read and her husband could not, however, the entire family was excludable. As in Chinese exclusion, general immigration restriction policies reinforced patriarchal power and family formation.

From 1908 through 1920, Japanese immigrants used the patriarchal notions at the heart of immigration laws to circumvent attempts at restriction. Although the Japanese were not subject to statutory exclusion until 1924, an informal agreement led to Japan restricting the emigration of laborers and others barred by U.S. policies. Because Japanese were not racially barred by statute, they were able to form families in the United States by sending for “picture brides” who married their husbands on the dock when they arrived. More than 10,000 Japanese and an additional 1,000 Korean young women married often much older husbands they had never seen. The “Ladies Agreement” in 1920 informally ended the picture bride practice, and the Immigration Act of 1924 definitively closed this loophole.

Extending the screening that began with Asians to almost all immigrants, the Immigration Act of 1924 established rigid quotas for sending countries based on the “national origins” of the 1890 U.S. population and also explicitly redefined the legitimacy of family relationships for immigration law. The act did not grant adoptive families the same rights as biological families. It also eliminated proxy marriage practices for all groups. Before 1924, family relations and immigration claims had been important, primarily, to groups that received extra scrutiny because of class, race, or disability. The Immigration Act of 1924 expanded the importance of family to all groups, though not equally. The act allowed unmarried children under 18 and wives of citizens to enter outside the quota, so long as those wives and children could become citizens.
Paradoxically, under this provision Chinese merchants could migrate their families because of treaty rights, but American citizens of Chinese descent could not bring their wives who were, as Asians, racially ineligible for citizenship. Enacted during the height of eugenic fervor, the Immigration Act of 1924 established the United States as a near-universal gatekeeper nation with families as both keys to the gate and a key intended product of the gate.



The one notable exception to the Immigration Act of 1924’s restrictions, and most of the previous restrictions as well, remained Western Hemisphere immigrants. The ease with which these migrants, primarily Mexican, could cross and recross the border created mobile family units that migrated together. As a loophole around Asian exclusion, these families sometimes contained South Asian or Chinese men who “passed” as Mexican and assimilated into Mexican American communities.

Even though Western Hemisphere migrants were not subject to the same quotas as those from Europe and Asia, the 1924 Immigration Act contributed to heightened concerns about formal documentation. Thus, by the 1920s, the once fluid border between the United States and Mexico became much more rigid. Along with this increased border scrutiny emerged a new category—the “illegal alien”—used to describe predominantly Mexican migrant laborers without official paperwork. Further reinforcing this two-tiered status was the bracero program. Launched in 1942 as a way to combat labor shortages during World War II, the program established a long-term contract labor force from Mexico to support western and southern agribusiness. Before the program ended in 1964, approximately 200,000 laborers came annually to supplement the U.S. supply of farmworkers, enduring discrimination, poor wages, and harsh conditions for the opportunity to better support their families in Mexico.

During World War II, popular propaganda lauded the United States as a nation committed to racial, ethnic, and religious diversity. To underscore American commitment to ideals of cultural pluralism, the U.S. government initiated a wave of immigration reforms beginning with the repeal of the Chinese Exclusion Acts in 1943. Three years later, Congress lifted restrictions against Filipino and South Indian immigration, and it amended the War Brides Act in 1947, permitting all Asian American servicemen to bring their Asian-born wives to the United States. These changes culminated in the 1952 McCarran-Walter Act, which for the first time allowed Asian immigrants to become U.S. citizens, established immigrant quotas for Asians, and included a provision for family reunification for persons applying for entry from Asia.

More expansive policies toward Asian immigrants dovetailed with increasingly race-blind refugee laws and policies that promoted adoptive families. Beginning with the Displaced Persons Act of 1948 (DPA), legislators incorporated orphan provisions into refugee legislation, even though the creators of the bill specified from the outset that orphans were not technically considered refugees. The DPA’s primary focus was to quell European unrest created by an excess of 1 million displaced persons, primarily from Germany, Austria, and Italy. Offering 205,000 visas for displaced persons, this legislation was the first to provide relief because of persecution. Under its provisions, from 1948 to 1953 U.S. citizens adopted 4,052 European children and 466 Asian children (mostly Japanese American). Yet legislators intended the DPA to relocate only European orphans, upholding the restrictive quota system already in place. Few Japanese American orphans qualified for the existing visas, and those who did were adopted by military families. Then, in 1953, Congress passed the Refugee Relief Act, which offered 4,000 nonquota visas for overseas orphans regardless of origin country, making it possible for Americans to adopt thousands of children from Korea throughout the 1950s and 1960s.

The most sweeping immigration reform since the 1920s came in 1965 with the passage of the Hart-Celler Immigration Act. This law eliminated the national origins quota system and capped annual totals at 290,000 migrants. While the act appeared to make immigration policy race neutral, some scholars have interpreted Hart-Celler as fairly conservative, primarily because it restructured and capped immigration from the Western Hemisphere at 120,000 annually, a 40 percent reduction from the previous decades. In so doing, the act set an unrealistic ceiling, given U.S. farm labor needs, and ensured that few Mexican workers would be able to migrate legally.
By emphasizing family reunification, the act further allowed immigrants to use the measure to bring extended family members to the United States, greatly increasing immigration from Asian countries in the 1970s and 1980s.

Contemporary Policy
In the last several decades, immigration policy has focused on reinforcing borders, particularly the U.S.–Mexico border, to keep undocumented migrants out while also instituting a series of laws to minimize the integration of such immigrants into U.S. society. Starting with the 1986 Immigration Reform and Control Act (IRCA), Congress increased border policing and stipulated penalties for employers who knowingly employed undocumented migrants, although this provision was seldom enforced. In 1994, the Immigration and Naturalization Service (INS) initiated Operation Gatekeeper, policing the San Diego–Tijuana border so that migrants were forced to cross in rural—and more dangerous—areas. Since the events of September 11, 2001, further efforts to strengthen the border against perceived threats to national defense, such as the 2001 USA PATRIOT Act and the 2006 Secure Fence Act, have fallen under the auspices of the Department of Homeland Security.

Migrants living illegally in the United States have also become eligible for fewer state and federal resources, policies that especially harm women with dependent children. In 1996, Congress passed two laws—the Illegal Immigration Reform and Immigrant Responsibility Act and the Personal Responsibility and Work Opportunity Reconciliation Act—that made undocumented migrants ineligible for welfare benefits and heightened scrutiny of them in higher education. The 2005 Real ID Act denied unauthorized migrants the opportunity to obtain state-issued driver’s licenses.

Still, there have been periods of reprieve. The IRCA extended amnesty to 2.7 million unauthorized immigrants in 1986 as an acknowledgment that many U.S. households relied on the farm, service, and domestic labor Mexican migrants provided. Recently, the secretary of Homeland Security announced that some undocumented migrants who had come to the United States as children would be eligible for a two-year period of deferred action, during which they could apply for work authorization. As such policies show, notions of racial difference, family norms, and class continue to inform U.S. immigration policies.
Rachel Winslow
Westmont College
Jason Stohler
University of California, Santa Barbara

See Also: Asian American Families; Central and South American Immigrant Families; Chinese Immigrant Families; DREAM Act; German Immigrant Families; Immigrant Families; Irish Immigrant Families; Italian Immigrant Families; Japanese Immigrant Families; Korean Immigrant Families; Latino Families; Mexican Immigrant Families; Middle East Immigrant Families; Polish Immigrant Families; Poverty and Poor Families; Slave Families; Vietnamese Immigrant Families; Welfare; Welfare Reform.

Further Readings
Lee, Erika and Judy Yung. Angel Island: Immigrant Gateway to America. New York: Oxford University Press, 2010.
Neuman, Gerald L. Strangers to the Constitution: Immigrants, Borders, and Fundamental Law. Princeton, NJ: Princeton University Press, 1996.
Ngai, Mae. Impossible Subjects: Illegal Aliens and the Making of Modern America. Princeton, NJ: Princeton University Press, 2004.
Winslow, Rachel. “Immigration Law and Improvised Policy in the Making of International Adoption, 1948–1963.” Journal of Policy History, v.24/2 (April 2012).

Incest

Incest has been a part of the human family from the beginning of its history. The prevalence of incest within the American family is not known because of the silence associated with it as well as the problem of underreporting. However, incestuous relations continue to exist within dysfunctional family environments. Male and female victims from infancy to adulthood have endured unwanted sexual encounters for years and suffered deep and lasting psychological repercussions as a result.

Definition
Incest may be defined as sexual relations or marriage between persons who are related by blood. Such relations are forbidden by law and are not generally sanctioned by religion. The relations may include various sexual forms and expressions such as touching, lewd disclosures, fondling, intercourse, rape, and sodomy. The sexual contact may or may not be by consent, and may include explicit or implicit force or disparity in authority. Sexual relations are also considered incestuous where kinship is acquired or established through marriage, family roles, and functions.



Colonial America
Early American families tended to be patriarchal in orientation and style. The influence of the colonizing nation, enshrined in the legal code, prescribed stringent penalties for sexual indiscretions such as incest. However, issues related to women’s sexuality, rights, and privileges were not given much prominence or attention. Strong emphases were placed on moral purity and the domestic proficiency of women. Societal norms, including habits, governance, law, and even matters of morality, were skewed in favor of the dominant gender.

It was difficult to prove incest in a society that favored males. Often women were chastised and found guilty for the sexual crimes perpetrated against them. The reporting of incest scarcely entered the public discourse in a society where women were exposed to scrutiny, shame, and embarrassment for sexual impropriety. For instance, women were financially responsible for pregnancy that occurred out of wedlock. No responsibility, guilt, or reprimand was attributed to males. Children who were victims of incest were charged with enticement and seduction. There was a tendency to affix blame on victims, making them responsible for some incestuous relations because adolescent girls were said to be flirtatious. During the 1800s and 1900s it was not uncommon for men in certain parts of the country to father children with their own relatives and the relatives of their wives.

Sexuality and sexual expression occupy a unique place in the history of the American family. Sexual abuse in general and incest in particular are elements of sexuality that were not part of the public discourse, because human sexuality was a subject guarded by strict boundaries and taboos. Sexual deviance therefore was a reprehensible deed that engendered public scrutiny, shame, and condemnation. These defining particulars allowed incest to exist in secrecy.

The sexual revolution of the mid-1970s brought the subject of incest out of obscurity and introduced a sense of sexual freedom and liberty of sexual expression. The feminist movement of the 1960s and 1970s, through agencies and organizations, worked feverishly to empower women. That drive gave voice to women’s issues that were otherwise silenced. Women who had suffered various types of abuse (battering, rape, and incest) in silence were emboldened to talk openly about their plight and pain.

Sexual violence was one of the primary issues given attention. Consequently, the reporting of incest and other forms of sexual violence against women became somewhat easier. The feminist movement, mingled with the spirit of the sexual revolution, gave rise to greater social consciousness and empowered women to report sexual violence in families. That freedom continues to encourage the timely and more frequent reporting of incest and other forms of sexual violations against women.

The Family
The available data reveal that intrafamilial sexual abuse takes place in normal families without respect to setting, ethnicity, socioeconomic status, or structure. However, there are some predisposing factors that are inherent and descriptive of such family environments. The factors that typically render the family environment unstable or dysfunctional include substance abuse, mental illness, family violence, marital discord, and lack of emotional support. Usually, father/daughter and sibling incestuous relationships are not one-time events. They are patterns of recurring behaviors that are reflective of chaotic family environments.

Types of Incest
There are various combinations of heterosexual and same-sex relations within the family that are considered incestuous—father/daughter, brother/sister, cousin/cousin, mother/son, stepfather/daughter, grandfather/granddaughter, etc. All sexual relations between adults and children are exploitative, even if the minor is a willful participant. The context and quality of the relations among consenting adults determine whether they are exploitative. Thus, sexual contacts and advances among family members that are exploitative in nature constitute abusive incestuous relations. All intrafamilial sexual intercourse between adults is considered incestuous, even in cases where the parties engage in it as an expression of their sexual freedom.
There is a power differential in relationships involving an adult and a minor that places the minor at a disadvantage. In an incestuous relationship, the older person necessarily switches roles, which is a betrayal of trust and an abuse of power. The normative parental role of father implies caregiving, nurture, and protection. Any father/child incestuous relationship constitutes a violation of that relationship and a deviation from the standard roles.

Father/Daughter Incest
Father/daughter incest is one of the most prevalent forms of incestuous relations. The dynamics within the family environment are significant contributing factors to father/daughter incest. The quality of the marital union or father/mother relationship may be the primary variable. Other contributing factors include divorce, extended illness, and other conditions that render the wife unavailable to satisfy the sexual needs and desires of the husband.

Sibling Incest
Some incidents of sibling incest may have developed out of sexual curiosity and exploration. There is a measure of normalcy to individual and even sibling exploration of the body in the early years. That innocent and natural, developmentally appropriate curiosity falls outside the boundaries of incest. There is no uniform or standard practice that characterizes the beginning of incestuous sibling relations. However, behaviors that are intentionally sexual may be determined incestuous.

There are two predisposing or motivating factors that give rise to sibling incest. First, older siblings are often forced by circumstances to assume the role of caregiver in disruptive family environments. In some situations, they extend that role to the point where it results in inappropriate sexual relations with their siblings. The other is a blatant abuse of power derived from sibling position, role in the family, or physical advantage. In most cases brothers have used force, bribery, threats, and various forms of coercion to get their siblings to comply with their wishes.

There is an assumed mutuality in sibling sexual relations that distinguishes it from adult/child incest. That mutuality may be due, in part, to a natural inclination to explore and experiment.
Sexual relations that result from the innate tendency or desire to investigate one’s environment may be less traumatic than other forms of intrafamilial intercourse. This variant of sibling sexual contact does not distinguish the participants as offender and victim. This conceptualization might be one factor that places restraint on the disclosure of sibling sexual behaviors.

Maternal Incest
Over the years, there have been growing numbers of reported cases of maternal incest. Women who were victims of incest and sexual abuse are more likely to be perpetrators of incest than women who were not victims. Some women perpetrators fail to relinquish personal hygiene chores for the child as a means of satisfying an emotional void. The behavior and bond afford them the opportunity to groom the child and orchestrate the incestuous relations. Mother/son incest can have a long-term negative impact on the sexual orientation of the child. These impacts are worse when the sexual acts are done in collusion with another perpetrator. Maternal incest with preadolescent males exposes these children to a host of long-term psychological problems.

Consequences of Incest
Children who are victims of incest sustain various types of social and psychological injuries. These injuries may continue to have negative influences on the victims’ lives. The severity of the symptoms that children manifest is influenced by the quality of the relationship between the child and the perpetrator, the duration of the abuse, the type of sexual relations, and the violent nature of the encounters. Children have to contend with the aftermath of the atrocities along with their developmental challenges. As adults, female victims are three to five times more likely to suffer from depression than women who were never victimized. They have difficulty trusting people and building relationships. In addition, self-concepts are damaged and some women can only equate self-worth in the context of sexual encounters.

Age and developmental stage of the victim are variables that tend to influence the nature and intensity of the psychological problems that result from incest. Preschoolers may exhibit a range of internalizing and externalizing behaviors, such as nightmares, anxiety, depression, and post-traumatic stress disorder.
The symptoms that have been identified with school-age children are fear, neurotic and general mental illness, aggression, defiance and conduct problems at school, nightmares, hyperactivity, and regressive behaviors. The patterns of symptomatic behaviors among adolescent victims are social withdrawal, depression, suicidal acts, self-injurious behaviors, somatic complaints, eating disorders, and some delinquent behaviors that may include sexual promiscuity, running away, and substance abuse.

Risk
There are several family environmental factors that may expose females to the risk of incestuous relations. These include the presence of a stepfather in the home, the socioeconomic status of the family, the emotional availability of the mother, frequent maternal absence from home, spousal abuse in the family, the number and quality of close friends the victim has outside the home, and lack of appropriate paternal affection. Similarly, there are several characteristics that increase the proclivity of an assailant for adult/child incest offenses. These include a history of substance abuse, spousal abuse, lack of communal involvement, and a history of other forms of child abuse.

Adjustments
Not all victims of incest experience severe repercussions. Some are able to adjust and live relatively normal lives. The two primary factors that determine the nature of such outcomes are the descriptive qualities of the incestuous relation itself and the quality of the family environment within which the incest took place. First, the inherent nature of the incestuous relations includes the age when the abuse started, the regularity of the sexual episodes, the brutality of the sexual episodes, and how long the incestuous relations lasted. The second factor includes the biological tie and the quality of the relationship between the assailant and the victim, the familial role of the assailant, and the capacity of the family environment to promote safety and facilitate healing.

Reporting
It is widely accepted that there are more incestuous relationships than are reported to authorities. The actual prevalence of incest in the American family is not known because of underreporting. There seems to be a culture of silence that prevents disclosure of incest. The conspiracy of silence runs deep in families. Women and children feel obligated to maintain family secrets.
Among the factors that encourage the secrecy are the attitude and response of the non-offending parent. The failure to report abuse incidents, however, allows the sexual abuse to be repeated for years.

Some societal and cultural factors may contribute to the culture of silence that is associated with incest. This is particularly true for male victims. The societal expectations and variables that conspire against reporting and promote secrecy include the socialization of men to be independent and self-reliant, societal sanction for boys to be sexually active earlier in life than girls, the stigmatization and shame that were associated with homosexuality in cases where the perpetrator was male, and the notion that masculinity must not be associated with weakness and vulnerability.

Contemporary Legal Position
Laws prohibiting incest and child abuse in general have been enacted throughout the nation. The drive to enforce the law and protect children requires mental health and other service providers to report abusive acts against children. Large numbers of children are removed from homes and families to avoid victimization and abuse.

To some extent, the public discourse on incest is influenced by the new approach to human sexuality. Challenges have been mounted against the legality of labeling incest as a criminal offense where it occurs on the basis of consent. This position posits that autonomy and freedom of choice are fundamental human rights that give legitimacy to acts of volition. It is argued that no criminal prohibition should be attached to consanguineous sexual relations when there is consent among adults. Thus, legal prohibition should only be applied to protect the vulnerability of people who are below the age of consent.

St. Clair P. Alexander
Loma Linda University

See Also: Child Abuse; National Center on Child Abuse and Neglect; Rape.

Further Readings
Finkelhor, David. Sexually Victimized Children. New York: Free Press, 1981.
Patton, Michael, ed. Family Sexual Abuse: Frontline Research and Evaluation. Newbury Park, CA: Sage, 1991.
Sacco, Lynn. Unspeakable: Father–Daughter Incest in American History. Baltimore, MD: Johns Hopkins University Press, 2009.

Indian (Asian) Immigrant Families

Although only about 2,500 Americans who had emigrated from subcontinental India lived in the United States in 1900, this number had swelled to more than 3 million by the 2010 census. The third-largest group in the United States with Asian ancestry (after Chinese Americans and Filipino Americans), Indian immigrant families tend to be very successful economically, and their educational attainment levels are among the highest in the nation. This economic success has affected other groups, as many Americans work for Indian immigrant–owned firms and businesses. Quite diverse in terms of ethnicity and religion, Indian immigrant families have made and will continue to make significant contributions to the nation.

Background
The Republic of India is a nation located in subcontinental Asia. With a population exceeding 1.2 billion citizens, India is the second most populous nation in the world, trailing only China. In terms of area, India is the seventh-largest nation, bordered by the Indian Ocean to the south, the Arabian Sea to the west, and the Bay of Bengal to the east, with shared borders with Bangladesh, Bhutan, Burma, China, Nepal, and Pakistan. India is also adjacent to the island nations of the Maldives and Sri Lanka.

The world’s largest democracy, India boasts the globe’s 10th-largest economy as measured by nominal gross domestic product (GDP) and is considered a newly industrialized economy. The service sector accounts for over half of India’s GDP, with the industrial sector accounting for over 26 percent and the agricultural sector for over 18 percent. Although it is one of the world’s fastest-growing economies, India struggles with poverty, health care, infrastructure, and environmental degradation. Under the influence or control of the United Kingdom for more than two centuries, India became independent of British control in 1947.
Organized as a parliamentary democracy, India is composed of 28 states and seven union territories. Twenty-six languages are recognized as widely spoken by Indian citizens. Hindi and English are the most widely spoken languages in India, however, and account for much of the language of commercial transactions. English is especially important in education, and many institutions of higher education conduct classes in that language. Historically defined by a rigid social hierarchy, India has made efforts to become more egalitarian over the past 60 years. Its rapidly expanding economy has allowed India to double its minimum wage since 2000, which has contributed to an increase in the size of the middle class. India's historical links with English have permitted it to become a popular outsourcing destination for many American call centers, technology support hubs, and other service-oriented concerns.

Immigration Patterns to the United States
At the turn of the 20th century, approximately 2,500 Americans of Indian descent lived in the United States. This number remained little changed through 1946 as a result of severe restrictions on immigrants from Asia. In 1946, however, the Luce-Celler Act was passed by the U.S. Congress, removing the restrictions that had barred immigration of Indians to the United States. Even so, it permitted only 100 Indians to come to America each year. In 1952, President Harry Truman signed the Immigration and Nationality Act, which removed racial restrictions on immigration to the United States. A quota system for nationalities and regions was retained, however, establishing preferences for immigrants from certain nations over others. The act gave a preference to the relatives of citizens of the United States or immigrants who were already there—as Indians had been barred from immigrating before 1946, this preference did them little good. Although immigration policy was modified over the years, it was not until the tech boom of the 1980s that large numbers of Indian immigrants began entering the United States on an annual basis.
By 1980, there were more than 350,000 Americans of Indian descent, a number that increased to more than 800,000 by 1990, more than 1.6 million by 2000, and more than 3 million by 2013. With more than 50,000 immigrants coming to the United States from India annually, emigration from that nation is at its highest rate ever. This growth has made Indian Americans the third-largest group of Asian Americans, trailing only the Chinese and the Filipinos. India's strengths in the science, technology, engineering, and mathematics (STEM) fields have contributed to this influx, as
workers with skills in these areas are highly coveted by employers. Studies have indicated that more than one-third of the engineers working in California's Silicon Valley are of Indian descent, and that Indian immigrants have founded more technology companies than have immigrants from China, Japan, Taiwan, and the United Kingdom combined. As might be expected from a culture with such incredible class, economic, ethnic, linguistic, regional, and religious diversity, it is difficult to generalize about Indian American immigrant families. Urban/rural differences and immense gender distinctions also make sweeping statements about family life for this group complicated. That being said, certain themes do often appear in Indian American immigrant families. Many Indian families are quite hierarchical in nature, with the sense of hierarchy stemming from a variety of factors, including caste, wealth and power, gender, and family connections. A tremendous sense of social interdependence also pervades Indian immigrant families, with many individuals feeling a deep sense of connection with their families, clans, castes and subcastes, and religious communities. These ties serve to give members of Indian American immigrant families a source of moral and practical support in many areas of their lives. Family authority and harmony are highly valued, and most family members are socialized to accept the authority of those above them in the family hierarchy. Ties between spouses and parents and their natural children are often de-emphasized to enhance a wider sense of family harmony, which results in a strong kinship circle. Traditionally, business matters and property were controlled by males, which affected how family resources were utilized. Certain life passages—such as birth, marriage, and death—are highly significant and often the focus of celebrations or ceremonies. Explicit rules often control diet, dress, occupations, and other aspects of family life.

A woman wearing a sari, a traditional Indian form of dress. Although most Indian Americans in the United States have adopted Western apparel, traditional Indian clothing is sometimes worn at festivals, weddings, and other celebrations.

Educational and Economic Attainments
Immigrants to the United States from India often fare very well, as their educational backgrounds would suggest. While approximately 28 percent of the total U.S. population holds a bachelor's degree or higher, over 70 percent of Indian Americans have reached this level of educational attainment. The next-closest group, Chinese Americans, has an educational attainment rate of 52 percent. Over 40 percent of Indian Americans hold a master's degree, doctorate, or other professional degree, a rate five times the national average. Because so many Indian American adults have a college degree, their children often perform well in school, as parents' educational attainment is a key indicator of a child's academic success. Data from the 2010 census indicate that Indian Americans also do well economically, having the highest household income of all ethnic groups in the United States. Over 72 percent of Indian Americans are members of the workforce, with nearly 60 percent of those employed in managerial or professional roles. There are more than 35,000 Indian American physicians, and many immigrant families own businesses in the United States. More than 250,000 Asian Indian–owned businesses exist in the United States, generating more than $100 billion in annual revenue and employing more than 600,000 workers, many of whom are not of Indian descent. The median household income of Indian Americans was nearly $90,000 per year in 2010, far higher than the U.S. average of approximately $50,200 per household.

Religious and Cultural Practices
While popular perception often assumes that all Indian Americans are Hindus, in reality the community shows great diversity in religious beliefs. Approximately 50 percent of Indian Americans identify as Hindus, while 18 percent consider themselves Christians, with 66 percent of these identifying as Protestant, 20 percent as Roman Catholic, and the rest as members of other Christian denominations. About 10 percent of Indian Americans identify as Muslim, 5 percent as Sikh, 2 percent as Jain, and 10 percent as not affiliated with any religion. As Indian Americans live in all 50 states and the District of Columbia, Sikh gurudwaras and Buddhist, Hindu, and Jain temples exist in all of these jurisdictions. With a culture that spans nearly 5,000 years, Indian Americans have brought many contributions from their homeland to the United States. Indian cuisine, often considered one of the four great cuisines in the world (with Chinese, French, and Italian), varies greatly by the region of India from which it originates. The large number of vegetarians in India has resulted in a variety of cooking techniques and spice combinations that are unique to that nation, and Indian food is growing in popularity in the United States. Many communities now have Indian restaurants, and larger metropolitan areas often have Indian grocery stores. Traditional Indian clothing includes the sari for women and the dhoti for men, both of which consist of loose-fitting fabric that is draped over the body in the case of women and worn as a skirt in the case of men. Although most Indian Americans have adopted Western apparel, traditional Indian clothing is sometimes worn in the United States, especially at festivals, weddings, and other celebrations. Indian literature traditionally revolved around epics that explained the rationale for the Hindu way of life.
Modern Indian literature is sometimes written in English and sometimes in other languages. Themes explored in Indian American literature include the diaspora and the challenges faced by immigrant families adjusting to a new way of life. India's burgeoning film industry, sometimes referred to as Bollywood, has become a global industry, with expatriates in the United States and Europe opening up access to these cinematic offerings to non-Indian patrons. By number of productions, India is now the world's largest producer of films, and the
Indian government has done much to support its film industry in the United States, sponsoring film festivals and promoting Indian actors, directors, and other filmmakers for awards and other recognition. The success of Indian cinema has spread to popular music, with Indipop (popular music performed by Indian musicians and singers) accounting for over 70 percent of the sales of recorded music in India. Many Indian Americans listen to Indipop, and its presence is becoming known to a wider audience in the United States. Traditional forms of Indian music and dance are also common in communities of Indian Americans in the United States. Indian dance is a traditional form of expression of inner beauty and the divine. Carefully choreographed, traditional Indian dance contains various classical forms, many of which have mythological significance. Traditional Indian music, which traces the origins of some of the songs performed back more than 3,000 years, often accompanies Indian dance performances. Indian music and dance are popular with immigrant communities and have been growing in popularity with non-Indian populations as well. To meet the needs of Hindi-speaking Indian Americans, radio stations broadcasting in that language have appeared in American metropolitan areas with large Indian American communities. Chicago, Dallas, Houston, New York, and San Francisco all have Hindi radio stations, and some communities also have stations broadcasting in Tamil and Telugu. Some cable television providers have begun to offer viewers Indian channels, which often play Bollywood movies and other programming from India. Larger metropolitan areas sometimes also have specialized movie theaters that play Indian movies.

Other Issues
Although the term Indian American is commonly used to describe immigrants from subcontinental Asia, this is in some ways a misnomer. India comprises a variety of cultures, values, languages, viewpoints, and appearances.
While English has generally served to diminish barriers between the different groups that live in India, this is truer among the educated than among the public at large. Specific organizations in the United States exist to unite Indian Americans based on their language affiliation, with specialized groups existing for speakers of Bengali, Oriya, Tamil, and other languages.



Certain Indian Americans have complained of discrimination by other Americans. Some of this discrimination has been overt, including harassment and verbal confrontations. In certain inner-city neighborhoods, where a variety of stores and other businesses are owned and operated by Indian American families, residents have targeted the proprietors of these businesses for hate crimes. Amid media coverage of terrorism following the September 11, 2001, attacks, members of a variety of Sikh communities were targeted by white supremacists who believed them to be of Arab descent; several innocent persons were beaten or murdered. To address some of these issues, the India Anti-Defamation Committee was formed to address violations of Indian Americans' civil and human rights. Some children of Indian American immigrants complain about treatment as members of a "model minority," a perception caused in part by the financial success and educational attainments of many Indian American families. Although the stereotypes associated with being members of a model minority are largely positive (i.e., hardworking, intelligent, and successful), they are actually detrimental to members of the Indian American community. Model minority stereotypes are harmful to Indian American individuals and families because they are sometimes used to justify and excuse discriminatory behavior directed at them. Indian Americans are harmed, for example, when they are denied admission to a graduate program or denied public assistance because of perceptions that they do not "need" it. Model minority stereotyping also encourages discord between members of different minority groups, as it implies that nonmodel groups are to blame for failing to assimilate and achieve financially and educationally. As the immigration of Indian nationals to the United States continues, Indian Americans have become one of the nation's fastest-growing ethnic groups.
Emigration from India to the United States swelled during the 1990s in response to the dot-com boom, and the process has continued unabated. This has resulted in large numbers of Indian immigrants seeking green cards, and the wait list for Indians to receive a visa tops 350,000 persons, trailing only Mexico and the Philippines. Because of this continued desire of Indians to come
to the United States, their influence will certainly grow in the years to come.

Stephen T. Schroth
Knox College

See Also: Acculturation; Asian American Families; Assimilation; Child-Rearing Practices; Chinese Immigrant Families; Education/Play Balance; Ethnic Enclaves; Family Businesses; Immigrant Families; Multigenerational Households; Parenting Styles.

Further Readings
Daniels, R. Coming to America: A History of Immigration and Ethnicity in American Life, 2nd ed. New York: Perennial, 2002.
Foner, N. From Ellis Island to JFK: New York's Two Great Waves of Immigration. New York: Russell Sage Foundation, 2000.
Foner, N. In a New Land: A Comparative View of Immigration. New York: New York University Press, 2005.
Kasinitz, P., J. H. Mollenkopf, M. C. Waters, and J. Holdaway. Inheriting the City: The Children of Immigrants Come of Age. New York: Russell Sage Foundation, 2010.
Waters, M. C. Ethnic Options: Choosing Identities in America. Berkeley: University of California Press, 1990.

Individualism

Individualism is a cultural value that favors individual interests over group, institutional, societal, or higher social interests, resulting in greater individual freedom and self-expression. Individualism stems from satisfaction of basic survival needs (e.g., food, shelter, and safety), which reduces family size and stimulates migration to high-growth areas. Together, these changes reduce family ties and foster individual decision making, thereby facilitating people's pursuit of individual interests. Individualistic societies reproduce individualism through family interactions and institutional practices. A person with individualistic values favors self-reliance when setting goals, making decisions, acting on them, and evaluating the outcomes. These attitudes
and thinking processes enable people in individualistic societies to join and leave groups relatively easily; as a result, they can form large organizations composed of strangers and join multiple groups to fulfill different needs. To codify individualistic values, these societies often create and enforce laws to guarantee specific rights and freedoms.

Sources of Individualism
When people face major external threats (e.g., war, famine, and poverty), they often accede to group needs to enhance their mutual survival. Without these external threats, economies often grow faster, enhancing economic, political, and social security. After their concerns about survival are assuaged, people look beyond security to satisfy their individual interests (e.g., surfing, poetry, and dance). Developing economies also increase service jobs (e.g., salespeople, nurses, and lawyers) and knowledge jobs (e.g., scientists, professors, and journalists), reduce family size, and encourage migration away from family members, all of which foster individualism. Lastly, individualistic societies reproduce individualism through economic, educational, and cultural institutions. Economic growth brings greater demand for labor and human capital. To satisfy this demand, nations build schools to enhance their citizens' education and autonomous decision making at work. For example, advanced economies require skilled professionals such as doctors and bankers, who make many daily decisions individually. Furthermore, people with more education and greater job responsibilities are typically more productive and can demand greater incomes. When nations become wealthy enough, they may choose to ensure that all of their citizens have sufficient food, shelter, and health care to survive, regardless of their ability to pay for them (e.g., welfare states such as Sweden and Norway). Economic growth also enhances political and social security.
Societies with extra economic resources can spend money on armed forces (e.g., 2013 U.S. military budget of $673 billion), use trade to create economic alliances (e.g., Association of Southeast Asian Nations [ASEAN]), provide foreign aid to build political alliances (e.g., $51 billion in U.S. foreign aid), and use diplomacy to foster peaceful international relationships (e.g., United Nations), all of which reduce the likelihood
of war. With greater economic and political security, citizens face fewer risks when they trust one another, and thus are more likely to do so. Greater trust fosters greater participation in politics (e.g., organizing community coalitions) to advocate specific policies, which reduces their deference to authorities. Having satisfied their economic, political, and social security needs, people pursue their subjective well-being, seeking better quality of life, greater self-expression, and novel experiences. For example, rather than viewing food and housing as basic necessities, people seek delicious dishes and beautiful décor. As incomes grow, people can spend more time on their individual interests and express themselves in their arts, hobbies, and crafts. Eventually, some feel sufficiently secure to pursue careers in their areas of interest. Moreover, individuals begin seeking new experiences, traveling to see other cultures and ways of living, and embracing cultural diversity rather than fearing it. All of these support individualism. As economies mature, more people work in jobs in the service or knowledge sectors. In these sectors, they work with people (such as counselors) and ideas (such as economists), which requires freedom of judgment and innovation to be successful. For example, a counselor must evaluate clients' states of mind and decide the best course of action to help them. Likewise, economists analyze changing economic conditions and try to create mathematical models to account for different situations. As more people become accustomed to individual decisions and creativity at work, they often seek out similar autonomy and self-expression during their leisure. Greater economic security also reduces birthrates (demographic transition), resulting in smaller families, more time spent alone, and more individual decision making. As economies grow, improved health care raises children's survival rates and life expectancies.
As a result, parents have fewer children, expecting at least one of them to survive and support them when they retire. As incomes increase further, individuals can save enough money to support themselves during their retirement, thereby reducing their reliance on their children to care for them. As family size decreases, children have fewer siblings. As a result, they spend less time attending to others; they spend more time pursuing their own interests
and making their own decisions, habits that they continue as adults. As economic development is often geographically uneven, extensive labor migration to better jobs weakens family ties and increases individualism. Economic growth often occurs at higher rates in some areas than others; typically, cities attract and accumulate skilled laborers and entrepreneurs, resulting in much higher growth than in small towns. As a result, many people move away from their families and religious institutions in slow-growing towns to better jobs in fast-growing cities. As people live farther away from family members and their congregation, they tend to meet less often and talk less often, which weakens family ties and religious ties. As people have less contact with their family and their congregation, they tend to consider family and congregation interests less often, pursue individual interests more often, and make more decisions individually. Individualistic nations also reproduce individualistic values through family interactions and institutional practices. Families with individualistic values give their children (even at early ages) responsibilities and corresponding autonomy to fulfill them. For example, in exchange for completing regular household chores, these children often receive money (an allowance) that they can spend or save as they wish. Furthermore, they observe their siblings and relatives make major decisions (which university to attend, whom to marry) without asking for elder relatives' permission (though they may consult them). As a result, these children learn to act independently without adult permission and are less likely to defer to parents or other authorities. Practices at work, at school, and at cultural institutions also reproduce individualistic values. In individualistic societies, each person typically signs an individual contract, is assessed individually, and is rewarded for his or her own achievement.
When a company is losing money, for example, it typically lays off one or more individuals rather than reducing all team members’ salaries. Similarly, schools in these societies reinforce individualist values through their daily practices. For example, U.S. high school students have individual schedules and often have different classmates in each of their classes. Lastly, cultural institutions, entertainment, and media in these nations often reflect
individualistic values. For example, many museum exhibits, films, and newspapers focus on individuals (a presidential biography, a lone Western hero, a murderer) rather than on groups or institutions, as in less individualistic countries (government politics, multigenerational family intrigue, or conglomerate exploitation).

Individualistic Processes
A person with individualistic values primarily relies on oneself (to identify goals, make decisions, act on them, or evaluate outcomes) and often treats others as separate individuals rather than as members of a group. Such a person prioritizes individual interests over group interests (such as pursuing one's hobby of sculpting rather than attending a distant cousin's wedding). While considering others' interests may benefit oneself, one primarily pursues one's short-term and long-term interests, with little consideration of whether it is beneficial, neutral, or detrimental to others. (For example, of two actions with identical benefits to oneself, the first with no effect on others is preferred over a second action that harms another person, which might tarnish one's reputation or invite retribution.) While this person may consult with others, he or she decides how to pursue individual interests, placing less value on tradition or others' opinions. Hence, appeals to a tradition of all family members attending a wedding, or appeals to listen to clan elders, are less likely to sway a person with individualistic values than they would others. Furthermore, this person relies on his or her internal reference standard for evaluating his or her own behavior (guilt society), not an external reference standard (shame society). Hence, appealing to a person's sense of morality and guilt ("she flew all the way from Africa to attend your wedding") is more likely to succeed than appeals to tradition or authority.
When interacting with others, a person with individualistic values primarily views them as separate individuals with different interests. For example, such a person is more likely to attend to a new acquaintance’s idiosyncratic attributes than his or her family background. When such individuals have different views, they prefer separate choices rather than a consensus compromise (e.g., separate favorite dishes rather than shared compromise dishes). In addition to tolerating different views, they are willing to disagree openly—even arguing
in public—without feeling compelled to conform or reach consensus. Viewing others as individuals, instead of as members of groups likely to help (insiders) or harm (outsiders), facilitates the development of working relationships and trust with acquaintances and strangers (otherwise typical outsiders).

Individualistic Structures
In individualistic societies, people can join large organizations and multiple groups that do not rely on family ties. With low expectations of commitments or ties to nonfamily groups, these individuals can join a new group easily. For example, they can join a basketball league even if they do not know any of the current members. Furthermore, such people can build trust and working relationships with diverse acquaintances and strangers. As a result, strangers in individualistic societies can work together effectively in large, diverse organizations (e.g., corporations and governments). As people in such organizations typically have weaker ties than family ties, they can also leave more easily for better opportunities elsewhere, in contrast to family-based organizations. In individualistic societies, people are less dependent on their immediate families than they would be otherwise, so they are often members of multiple groups outside their family. Such people often live far away from their immediate or extended family, which reduces their access to family resources. By joining other groups (e.g., companies, circles of friends, and professional associations), they can fulfill limited responsibilities in exchange for access to resources that meet specific needs. Access to resources in other groups further reduces dependence on family support. Furthermore, their memberships in other groups may compete with family relationships for time and individual resources, which can further weaken family ties.
Individualism and Society
In individualistic societies, people have fewer obligations, greater autonomy, and more state-sanctioned rights and freedoms, which tend to foster individualistic political views. These societies are less likely to impose religious views or compulsory military service (or work in any specific industry or company). Instead, these people typically have greater autonomy to associate with others (including
intimate relationships) and to experiment with nontraditional practices (e.g., smoking marijuana). Many individualistic nations codify rights and freedoms into their laws. For example, the U.S. Bill of Rights includes the right to trial by jury, the right to counsel, and the right to bear arms. Likewise, its enumerated freedoms include freedom of speech, freedom of the press, and freedom of religion.

Ming Ming Chiu
State University of New York, Buffalo
Gaowei Chen
University of Hong Kong

See Also: Collectivism; Me Decade; Me Generation.

Further Readings
Chiu, Ming Ming and Bonnie W.-Y. Chow. "Culture, Motivation, and Reading Achievement: High School Students in 41 Countries." Learning and Individual Differences, v.20 (2010).
Chiu, Ming Ming, Bonnie W.-Y. Chow, and Catherine McBride-Chang. "Universals and Specifics in Learning Strategies: Explaining Adolescent Mathematics, Science, and Reading Achievement Across 34 Countries." Learning and Individual Differences, v.17 (2007).
Hofstede, Geert. Culture's Consequences: Comparing Values, Behaviors, Institutions and Organizations Across Nations. Thousand Oaks, CA: Sage, 2001.
House, Robert J., Paul J. Hanges, Mansour Javidan, Peter W. Dorfman, and Vipin Gupta. Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, CA: Sage, 2004.
Inglehart, Ronald and Wayne E. Baker. "Modernization, Cultural Change, and the Persistence of Traditional Values." American Sociological Review, v.65 (2000).

Industrial Revolution Families

The Industrial Revolution began in England, transforming the economy from one based on handicrafts to one dominated by industry and machine manufacturing. The Industrial Revolution shifted the honing of personal skills and labor away from the family structure and placed them instead within the manufacturing process. The necessity for the United States to enter into, and become a large part of, the Industrial Revolution began with the War of 1812 with Great Britain; this event is often referred to as America's second war of independence. The origins of the war date back to the passage of the Embargo Act of 1807 (issued by the U.S. Congress against the British and French), the issue of impressment, and the British firing on the Chesapeake when they were not permitted to search the ship. The British also seized four Americans from the ship, hanging one for desertion. The passage of the Embargo Act stopped the export of American goods and the importation of goods because of the blockade the United States imposed. The policy thus punished Americans more than its intended targets, the warring powers of Europe. To become self-reliant, the United States then entered into an Industrial Revolution of its own. While the War of 1812 is often considered the catalyst for this event, other developments were also moving the Industrial Revolution forward, ultimately having an impact on the American family. As America progressed into the Industrial Revolution, it became necessary to seek out alternative power sources to fuel the assembly lines and factories. In 1831, Michael Faraday discovered electromagnetic induction: moving a magnet relative to a coil of wire produces an electric current in the coil. Faraday recognized that this method could be applied in several ways; for example, the coil could be moved while the magnet stayed in place, or an electromagnet could be controlled by a switch as simple as on or off.
To accomplish this industrialization, however, the United States first had to expand its ability to transport goods within its own borders; second, put electricity to effective use in factories and plants; and third, find efficient ways to increase the production of goods. Thus, the three main features of the Industrial Revolution were technological, socioeconomic, and cultural.

Technological, Socioeconomic, and Cultural Changes
The word technological today conjures ideas of computers, cellular telephones, and computer-driven vehicles. This, however, was not the case
in the 19th century. New materials, including iron and steel, were coming into use, as were new energy sources such as coal, the steam engine, electricity, and petroleum. These advances were quickly followed by other inventions, including the power loom, the steam locomotive and steamboats, the cotton gin, and the telegraph. All of this new technology gave the United States the ability to produce larger quantities of necessary goods with less human power. With the ease of transporting goods throughout the countryside, the wealth of the individual shifted from land ownership to the ability to own and produce goods. Cities developed with nonagricultural economies, as food and other necessities could be transported into their marketplaces through various means of more affordable mass transportation. Political power also shifted to those in control of the industries, while a working class developed within society, resulting in new structures of authority and control. Each worker developed his or her own skill and, rather than working with hand tools, became a machine operator. More often than not, machine operators worked assembly lines, each becoming a small part in the production of every object being mass-produced. This cultural shift was significant: for the first time, workers were mass-producing goods not for their own use but for export to market.

Impact on Family Life
Just as the original settlers altered their families by leaving their homeland and establishing a new colony an ocean away, the changes of the Industrial Revolution reshaped the American family. Pre–Industrial Revolution families were interdependent and close-knit. They not only relied on one another for the goods each produced but also had a close social connection within the family and the local community. These pre–Industrial Revolution families were also significantly larger, as they comprised what would be considered the extended family unit, not just children and parents. Before the revolution, the children worked the fields and learned the trade of their fathers. The women traditionally kept the home along with the
These pre–Industrial Revolution families were also significantly larger, as they comprised what would be considered the extended family unit, not just parents and children. Before the revolution, children worked the fields and learned the trades of their fathers. Women traditionally kept the home along with the


Workers outside the Bibb textile mill in Macon, Georgia, in 1909. The Industrial Revolution created opportunities for women to work outside the home, changing the structure of the American family. However, the majority of the jobs created paid very low wages, leaving workers and their families in undesirable conditions, such as shabby housing.

traditional duties of cooking and cleaning, teaching their daughters to do the same. The revolution uprooted families and altered the number of hours they spent together. No longer involved in a family-based enterprise, family members became part of larger-scale industry. Socially, the family often spent long hours apart, with more hours devoted to a newer, larger world of production. Families were no longer required to rely on one another or to reside in the same home or town. Children left the farms to work in factories, often in another geographical area, thus dissolving the extended families that had resided together. The younger generation was free to explore other trades rather than follow in a parent's footsteps. Rather than work on the homestead at only domestic tasks, women were now given the

same opportunity as men to earn a living, although they usually earned considerably less than their male counterparts. One of the earliest systematic uses of women in the workplace is attributed to the group of businessmen known as the Boston Associates, who recruited New England farm girls to operate machinery on factory assembly lines. The Boston Associates preferred these female workers, known as the Lowell Girls, because they could be paid less than men. Although gender-related pay differences are viewed mainly as a 20th-century fight for equality, the first strike over wage differences occurred in the textile mills in 1824, and the Lowell mills experienced larger strikes in the 1830s. However condemnable gendered pay scales were, this entry into wage work paved the way for a new independence, as American women began to shake off male dominance in the working world as well as within the family itself.



Economic Changes to the American Family
While the Industrial Revolution provided jobs to thousands of citizens and thereby helped establish the middle class, the reality is that the majority of those jobs paid very low wages. These low wages, combined with the rising cost of the goods necessary to sustain a household, left workers and their families in shabby, undesirable conditions. Tenements sprang up in many places, replacing homesteads that had been clean and free of debris and rodents. Those who prospered from the Industrial Revolution were the investors, bankers, and owners of the manufacturing companies rather than the workers they employed. Out of these economic hardships came a collective understanding among underpaid workers that appalling conditions and low pay could be overcome through unions, even though effective organized labor would not emerge for several years. Through unions, workers gained a louder voice and a healthier means of resolving the problems created by poor conditions and low pay, a form of family in itself. The companies and factories did play a positive role in employing new immigrants who came to the United States fleeing oppression, famine, and other deplorable conditions in their homelands. Low pay was not a deterrent for these workers, as most had left countries where no income was available to them at all. Some of the new immigrants, however, were viewed with hostility for a multitude of reasons. Protestants, for instance, often despised the arriving Catholics on religious grounds and accused them of taking jobs from citizens and driving down wages.
Although this hostility led to the formation of elitist clubs and organizations, it also fostered stronger familial bonds among immigrant neighbors and communities.

The Cotton Gin's Effect on the Slave Family
Although the Industrial Revolution was primarily concentrated in New England, Eli Whitney's invention and 1794 patent of the cotton gin (gin being short for engine) allowed it to take hold in other parts of the country. This invention was extremely useful


in removing seeds from cotton fiber and greatly increased the amount of work that could be completed. This, in turn, gave southern plantation owners an additional incentive to expand slavery, the predominant form of labor in the southern United States. Notably, because the tobacco industry in Virginia and the surrounding Chesapeake areas was waning, some owners had even proposed eliminating slavery. With the new invention making speedier production of cotton both necessary and possible, the slave market expanded to supply the cotton needed by the textile industries already flourishing in the New England states. Through both the technological advances and the use of slave labor, the south became the largest producer and exporter of cotton in the 19th century. This, however, came at great cost to the slaves and their families. By 1820, all of the northern states had outlawed slavery, but the south had greatly expanded it as the need for cotton increased. The demand for slaves grew so large that the southern states imported 250,000 new slaves between 1787 and 1808, separating African families by an ocean rather than by choice. In 1810, by federal count, there were 1.2 million slaves, who performed the majority of all work in the south. By 1820, 95 percent of the African American population in the south was enslaved. As the cotton industry grew, tobacco plantation owners made their profits by selling their slaves to the cotton plantations. For the plantation owners, this was a win-win situation; for the slaves and their families, it was devastating.
To survive conditions that were horrendous physically, socially, and psychologically, African American slaves created communities that established a sense of family. Family and religion were the two nuclei of these communities, offering the comfort and support that were nonexistent from the plantation and slave owners. Younger adults were often sold, separating parents and children; over a third of all marriages were broken by a sale. Children were removed from their parents' home and sold or traded as soon as they could be


put to work. The severity of the slaves' treatment encouraged the development of broader kin relations. The Fugitive Slave Act of 1793 also endangered free African Americans living in the north: they, too, could be declared fugitives under the act, kidnapped and torn from their families and homes, then enslaved in the south. Although torn from families and living in unhealthy and deplorable conditions, the slaves stood firm in their social commitment to survive as a form of family, natural or created.

The New Industrial Revolution
In terms of basic materials, new technologies, and humankind's ability to travel beyond the Earth, has the United States advanced into a second Industrial Revolution, or has the country simply never moved beyond the first? In the two centuries since the Industrial Revolution began, technological advances have changed many aspects of American life. Americans now travel more and faster, automobiles talk to drivers, and people no longer wait on the Pony Express or even the mail carrier to deliver messages. Communication is nearly instantaneous through computers and telephones. People no longer need to entertain themselves, as computers, tablets, and high-definition televisions provide entertainment. Parents no longer rely on their children to act as a remote control to change the television channel. But how have all of these advancements affected the social atmosphere of the family? Just as family life was important to the colonists who settled the country and to the families who endured the Industrial Revolution, the family must also survive in modern society. People often look back on the "good old days" and reflect on how much better life must have been, but they fail to see the suffering their ancestors bore and what those ancestors did to survive harsh realities and maintain the family.
Christopher J. Kline
Westmoreland County Community College

See Also: Breadwinner-Homemaker Families; Child Labor; Child Safety; Childhood in America; Demographic Changes: Aging of America; Family Values; Frontier Families; Slave Families.

Further Readings
Fischer, Claude S. Made in America: A Social History of American Culture and Character. Chicago: University of Chicago Press, 2010.
Kozmetsky, George and Piyu Yue. The Economic Transformation of the United States, 1950–2000: Focusing on the Technological Revolution, the Service Sector Expansion, and the Cultural, Ideological, and Demographic Changes. West Lafayette, IN: Purdue University Press, 2005.
Schmiedeler, Edgar. The Industrial Revolution and the Home: A Comparative Study of Family Life in Country, Town, and City. Whitefish, MT: Literary Licensing, 2013.

Infertility

It is estimated that 10 to 15 percent of couples in the United States experience infertility, and studies indicate that rates of infertility in the industrialized world are rising. Remarkable advances have been made in the medical understanding of infertility and in the technologies available to treat it, offering hope—and a wide range of options—for many infertile couples. However, the understanding of the emotional and psychological impact of infertility and the long, costly, and often physically invasive treatment procedures has lagged behind. Since 2005, some psychologists have come to label infertility and its treatment as "reproductive traumas" (a term that has also been applied to a variety of adverse reproductive events, such as miscarriage and premature birth), with the potential for broad psychological impact on individuals, couples, existing children, and other members of the extended family. Individuals dealing with infertility and its treatment sometimes report symptoms of posttraumatic stress disorder, including flashbacks (to the moment of the initial diagnosis, or to specific medical procedures), numbing, emotional flooding, depression, and anxiety reactions. Couples and families facing infertility often experience significant stress and conflict.

Definitions and Causes of Infertility
Infertility is defined as the inability to become pregnant after at least one year of regular, unprotected



sexual intercourse. Because female fertility begins to decline rapidly after age 35, physicians usually make the diagnosis and recommend seeking help after only six months in women beyond this age. Primary infertility refers to the inability to become pregnant in a couple who have never had a previous pregnancy, while secondary infertility refers to the inability to become pregnant in a couple who have previously conceived at least once. Historically, infertility was explained by a variety of superstitions and myths, but by the middle of the 20th century, up to 20 percent of cases could be explained medically. Many of the unexplained cases were attributed to “psychogenic infertility,” usually thought to result from unconscious conflict in women about becoming mothers. Advances in technology have now made it possible to determine the medical causes in 85–90 percent of cases, and as a result, older psychogenic explanations are now largely regarded as implausible. Recently, research has explored possible links between psychological stress and infertility, with some experts noting that stress may disrupt female ovulation and menstrual cycles. While high levels of chronic stress have been shown to be associated with a number of medical conditions, and stress reduction is always useful for maintaining good health, research on stress and infertility remains at an early stage and cannot be considered conclusive. On the other hand, it is clear that the experience of infertility and its treatment causes a great deal of stress. For many infertile couples, more than one factor interferes with the ability to conceive. An estimated 30 to 40 percent of cases are due primarily to female factors, 30 to 40 percent to male factors, and 20 to 25 percent to a combination of male and female factors. In 10 to 15 percent of cases, the medical causes cannot be determined. 
It is widely known that age is one major factor affecting female fertility, but sensationalized cases of women becoming pregnant in their 50s and even 60s have led to common misunderstandings. Females are generally most fertile in their early 20s, with a gradual decline beginning in the late 20s to early 30s. Fertility declines much more rapidly after about age 35, and after age 40, a woman’s chances of conceiving with her own eggs are quite small. On average, women having regular, unprotected sex have a 20–25 percent chance of conceiving during any given menstrual cycle in their 20s, but only 5 percent after age 42.
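The per-cycle figures above can be connected to the one-year threshold used to define infertility with a simple probability sketch. This is a rough illustration only: it assumes each cycle is an independent trial with a constant success rate, which real menstrual cycles are not, and the function name is introduced here for illustration.

```python
def chance_within_cycles(per_cycle_probability, cycles=12):
    """Probability of at least one conception over a number of
    menstrual cycles, treating each cycle as an independent trial
    with the same per-cycle probability of success."""
    return 1 - (1 - per_cycle_probability) ** cycles

# At a 20 percent per-cycle chance (typical of women in their 20s),
# roughly 93 percent of couples would conceive within 12 cycles:
print(round(chance_within_cycles(0.20), 2))  # 0.93

# At a 5 percent per-cycle chance (after age 42), under half would:
print(round(chance_within_cycles(0.05), 2))  # 0.46
```

Under these simplified assumptions, the contrast suggests why a year without conception is treated as clinically meaningful: at the per-cycle rates typical of younger women, the large majority of fertile couples would be expected to conceive within that window.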


The onset of menopause signals the end of female fertility, but as much as five to 10 years before menopause the chances of a woman's conceiving are greatly reduced. As women age, the risks of genetic anomalies such as Down syndrome also increase. Male fertility undergoes a gradual decline after age 40, although many men remain fertile until quite late in life. Recent research has suggested, however, that there may be increased risks of certain developmental problems, such as autism or schizophrenia, in children of older fathers.

The physiology of human reproduction is complex. Problems of many different types can cause or contribute to infertility, and several risk factors increase its likelihood. For women, common problems and risk factors include the following:

• Difficulties with ovulation. No egg may be released, ovulation may be irregular, or the egg may not be healthy enough to be fertilized and to grow. Problems with ovulation are involved in approximately 25 percent of infertility cases. Age is a factor in ovulatory problems, because the number and quality of a woman's remaining eggs decline as she ages. Genetic abnormalities, which are also more likely with age, may lead to spontaneous abortion (miscarriage) of the embryo or fetus.
• Damage to fallopian tubes and other parts of the female reproductive organs. These conditions are involved in approximately 35 percent of cases.
• Endometriosis, polycystic ovarian syndrome (PCOS), being significantly overweight or underweight, and other medical conditions such as diabetes.
• A history of sexually transmitted diseases.
• Smoking, alcohol, or excessive caffeine use.

Factors contributing to male-factor infertility include the following:

• Difficulties with erection or ejaculation.
• Problems with low sperm count, low motility (reduced ability to swim), or morphology (abnormally shaped sperm cells).
• Varicoceles (varicose veins in the scrotum that can affect sperm production).


• Being overweight or having certain other health problems.
• Persistently overheating the testicles, such as in spas or hot tubs.
• Smoking, use of alcohol and drugs.
• Exposure to certain environmental toxins and chemicals.

Diagnosis of Infertility
For men, a relatively simple physical examination is initially used to detect anatomical problems or conditions such as varicoceles. Male sperm is easily examined in terms of sperm count (number of sperm cells in the semen), reduced motility (swimming ability), abnormal morphology (shape of the cells), and other factors. For women, the procedures are more complex, and sometimes more invasive, involving minor surgical procedures. Usually an attempt is first made to determine whether the woman is ovulating regularly, by measuring daily body-temperature changes over the menstrual cycle and by tests of hormone levels. Subsequently, tests involving imaging or other minor surgical procedures (such as laparoscopy) may be conducted to check for problems with the fallopian tubes, the uterus, or conditions such as endometriosis or polycystic ovarian syndrome (PCOS). The process of diagnosis may extend over a long period of time as initial problems are discovered and treated, further attempts are made to achieve a pregnancy, and, if these are unsuccessful, additional, more comprehensive testing occurs.

Treatment of Infertility
Contemporary treatment of infertility involves an extremely wide array of options, ranging from the use of hormones or other medication to highly complex and invasive procedures known as assisted reproductive technologies (ART). Because many cases (20–25 percent) involve both male and female factors, multiple treatments are often used. For men, varicoceles can be corrected with minor surgery, often increasing sperm production. Hormones may also be employed to increase sperm count. Problems with female ovulation may initially be treated with a variety of medications to stimulate the production of eggs.
If these initial treatments do not result in conception, a variety of other treatments may be used, including the following:

• Intrauterine insemination (IUI). This process involves the introduction of sperm obtained from the male directly into the uterus to improve the odds of conception.
• In-vitro fertilization (IVF). In this complex series of medical procedures, eggs are fertilized by sperm in the laboratory, and the resulting embryos are transferred to the uterus. Unused embryos are often frozen for later use. Early IVF attempts often involved the transfer of multiple embryos to increase the chances that at least one embryo would implant in the uterus and continue to develop, but this practice also increased the risks of multiple conceptions and multiple births. In recent years IVF technology has improved substantially, allowing the use of fewer embryos. Highly publicized cases involving the birth of five, six, or more infants are rare now; the transfer of so many embryos is now considered outside the normal standard of care for infertility and is increasingly regarded as unethical.
• Intra-cytoplasmic sperm injection (ICSI). This variation of IVF can be used if the male has very few sperm of good quality. In this process, a single sperm cell is injected directly into the egg to fertilize it, after which the other normal IVF procedures are used.
• Other options. If couples do not have viable sperm or eggs of their own, or if initial IVF trials using their own cells are unsuccessful, they have many other options, including the use of donor sperm or eggs, surrogacy, or adoption. Each of these steps requires decisions that can be emotion-laden and highly stressful. They involve others outside the couple in the reproductive process, raising new questions about what it means to be a parent. In many cases, surrogates or birth parents remain involved with the infertile couple and the child, raising further questions about the definition of "family."

Direct Stresses of Infertility Treatment
The medical procedures involved in IVF and other assisted reproductive treatments are sometimes



embarrassing, time-consuming, costly, and often physically stressful or invasive. For example, men must produce semen samples via masturbation; women usually undergo minor surgical procedures as part of diagnosis and treatment. IVF treatment involves hyperstimulation of the ovaries via hormone injections that produce unpleasant physical and emotional side effects, followed by additional hormone injections to prepare the uterus for implantation of embryos. Infertility treatment using IVF costs $10,000 to $15,000 per attempt, and multiple attempts are often required before a pregnancy occurs. The process can last months or years, creating long-term physical, financial, and emotional distress.

Stigma and Social Pressure
Throughout history, the inability to have children, usually blamed on the woman, has been stigmatized in most societies. Such attitudes were conveyed by the use of the term barren to describe an infertile woman; in some cultures, women could be stoned to death or readily divorced for failing to bear children. In many less developed parts of the world, such severe stigma remains common today. While these negative attitudes are less common and less obvious in much of the Western world, the United States is still in many ways a "pronatalist" culture that places high value on childbearing. Individuals and couples who voluntarily choose not to have children are still often regarded as somehow deficient or psychologically troubled, although scientific studies have found that such individuals are no more likely to suffer from psychological disturbance than people who do have children. Support organizations for people without children have noted many subtle aspects of pronatalism endemic in the United States, such as favorable tax policies and other benefits for parents. Some such groups have promoted the use of the term childfree instead of childless to counteract the continuing stigma.
These social biases often increase the self-doubt and self-blame felt by infertile individuals and couples.

Psychological Factors
Individuals and couples faced with infertility are also deeply affected by a complex set of psychological factors that can produce high levels of emotional upheaval and stress. In addition to the


stigma and social pressures to have children, deep and powerful emotional forces propel many people to exert enormous effort to become parents. As different treatment options are pursued in succession over a long course of treatment, each new step requires complex choices that must be made in the face of the failure of the previous stage. A renewed sense of loss and trauma can result from treatment failures at any step in the process, and stress can accumulate as time passes. Medical professionals and couples themselves, trying to remain optimistic, often ignore or deny these underlying emotions, leaving no opportunity for relief. The impact of infertility is often surprisingly powerful and disruptive to people’s sense of overall well-being because of the many levels of meaning and complex needs associated with the desire to have children. First, the diagnosis of infertility is usually a shock in itself, disrupting the conscious and unconscious expectations that most people have carried with them since early in life about how their parenting experiences will unfold. Another large part of the emotional impact of infertility stems from its tendency to obstruct the progress of psychological development that continues through the life span. The process of separating from one’s family of origin and establishing a sense of personal identity begins in adolescence and continues into early adulthood. Having children often gives people a sense that they are no longer simply children but adults in their own right, on par with the previous generation, and with an identity of their own. For many people, the idea of having children is also seen as an important part of intimate adult relationships or marriage. While there are many ways besides having children to establish a sense of identity and to feel like an adult, the unexpected experience of infertility can leave people feeling that their sense of identity and progress toward full adulthood have been blocked. 
The existential need to leave something of oneself behind for the future is also an important factor for many people. Closely related is what psychologist Erik Erikson called "generativity": a need that arises in midlife, as a normal part of adult development, to leave one's mark on the world by contributing something to the next generation, whether through family, work, or other pursuits. In the absence of a sense


of generativity, a person feels stagnation or a loss of meaning in life. While having children is by no means the only way to establish a sense of generativity, it is often a means by which people partly meet this need; thus, infertility poses a potential obstacle to this aspect of adult development as well. Yet another traumatic aspect of infertility is that it involves multilayered losses of the experiences and opportunities that people expect to be part of the normal path toward parenthood. The couple misses preparing for the birth, the baby showers, and all the anticipated experiences of pregnancy and birth. They lose the feeling of control, the sense of belonging to their peer group, and the feeling of being healthy and "normal." Infertility may also be a blow to an individual's self-esteem or, for some, to the sense of being a complete man or woman. The accumulation of these losses, the injuries to the sense of self, and the potential interference with normal adult development make it clear that infertility can be highly stressful and traumatic, and they indicate that access to psychological evaluation, support, and therapy is ideally part of the overall treatment for infertility.

Legal, Ethical, and Moral Considerations
Statutory and case law pertaining to infertility treatment, especially with regard to the use of donor eggs and sperm, the custody of unused embryos, and situations of surrogacy and adoption, is still evolving. In some parts of the world, procedures such as the use of donor eggs remain illegal. Medical ethics and standards of care in infertility treatment are also undergoing rapid change. For example, as technological advances have increased the chances that embryos implanted during IVF will develop into pregnancies, the recommended number of embryos to be implanted during any single IVF attempt has been gradually reduced.
However, the absence of firmly established national or international standards leaves room for wide variation in actual practice, occasionally resulting in sensationalized cases of multiple births. Such cases often raise questions about the ethical and moral judgments of parents who request such procedures and of physicians who perform them. Religious and cultural views about the morality of assisted reproductive technologies also

vary a great deal internationally, as well as within the United States.

Prospects for the Future
Technologies for the treatment of infertility continue to advance rapidly, with many new options emerging. For example, nuclear transfer is one relatively recent development, involving the removal of the DNA from the nucleus of a fertilized egg created using the couple's own sperm and egg, followed by the transfer of this DNA into a donor egg from which the nucleus has been removed. Improvements are also occurring in the technology for freezing a young woman's eggs, allowing her to preserve her fertility if it is likely to be impaired for reasons such as the need for cancer treatment. In addition to technological advances, wider recognition of how fertility changes with age may lead couples to make different decisions about the timing of pregnancy, or lead them to seek treatment earlier if they are not conceiving. Greater understanding of the role of environmental toxins in infertility also holds promise for the prevention of impaired fertility. Finally, greater awareness of the powerful psychological impact of infertility may help to ensure that couples facing infertility have access to proper resources as part of their treatment.

David J. Diamond
Alliant International University

See Also: Artificial Insemination; Assisted Reproduction Technology; Childless Couples; Multiple Partner Fertility; Surrogacy.

Further Readings
Center for Reproductive Psychology. http://www.reproductivepsych.org (Accessed June 2013).
Horowitz, Judith, Joann Galst, and Nanette Elster. "Ethical Dilemmas in Fertility Counseling." Families, Systems, & Health, v.29/1 (2010).
Jaffe, Janet and Martha Diamond. Reproductive Trauma: Psychotherapy With Infertility and Pregnancy Loss Clients. Washington, DC: American Psychological Association, 2011.
Jaffe, Janet, Martha Diamond, and David Diamond. Unsung Lullabies: Understanding and Coping With Infertility. New York: St. Martin's Press, 2005.
Resolve: The National Infertility Association. http://www.resolve.org (Accessed May 2013).



Information Age

The term information age refers to the period in American society in which receiving, organizing, and relaying information through a system of technological innovations is the dominant means of commerce and communication. This era began roughly in the 1960s and 1970s and has accelerated since the 1980s. The information age affects many aspects of family life, such as living and work arrangements, meal planning, and connecting with family. Like the invention of agriculture, which allowed humans to live together in larger groups, or the Industrial Revolution, which brought inexpensive consumer products through mass production, the information age influences where people live and how they make a living. Some scholars emphasize these economic transitions; others focus on the changes in mass communication associated with the information age. Irving Fang, in his history of communication, notes that the Internet is the latest of six information revolutions that began with the invention of writing. Whether the information age is viewed as a transformational economic epoch or as another communication revolution, each perspective offers an important way of understanding its impact on family life. Advanced technology and the rapid changes associated with its growth provide both opportunities and challenges for contemporary families.

Technological Innovations and Use by Families
Three innovations essentially provide the infrastructure for the acceleration of the information age: the availability of personal computers, the inception of the Internet, and the invention of data exchange platforms (i.e., the World Wide Web). These technological tools and platforms allowed individuals to communicate more rapidly and to share knowledge, expertise, and ideas in a variety of new ways.
One of the ways to assess the movement of families into the information age is to chart the adoption of digital devices in the home. The U.S. Census Bureau collected the first national survey of household computer ownership in 1984. At that time, 8.2 percent of households had computers. In 2010, the percentage of households with computers was 76.7 percent. The first assessment of


home Internet access came in 1995, when the Census Bureau reported that 14 percent of households had Internet access. That percentage increased to 53 percent by 2001 and to 81 percent by 2012. These numbers do not include the growth of Internet access in libraries, schools, and many other public and private venues. Another measure has been the percentage of American households with high-speed (broadband) Internet access, which increased from 3 percent to 65 percent between 2000 and 2012. Further, the growth of "personal digital devices," such as cell phones, is part of the growth of the information age. In 2012, 87 percent of adults owned cell phones, and 45 percent of those phones were smartphones, or mobile devices that were Internet-enabled. Many other information technology devices continue to be produced and purchased by families in the United States, including music players, game consoles, reading devices, and tablet computers. Families have used such technological advancements for work, leisure, and consumerism, and to obtain information through articles, videos, blogs, and other forms of mass media. Many individuals and families use technological tools to communicate and interact with one another; for example, many contemporary families communicate or share pictures and videos through e-mail, texts, or social media platforms. The use of social media in the 21st century has been prolific: billions of individuals across the world participate in a wide range of online environments (e.g., Facebook and other platforms and Web sites) in some capacity. Although there has been dramatic growth in the number of information technology devices in the home, there are still important individual differences in adoption and use.
According to sources such as the Pew Research Center, how, why, and when individuals use information technology is influenced by a variety of factors, such as education, ethnicity, age, gender, and socioeconomic status. Pew researchers report a "digital divide," although it is narrowing over time: the people with the most Internet access and use tend to have some college education, live in households with annual incomes above $30,000, be Caucasian, and be younger. Despite these differences, the information age and technological advances have had a significant influence on environments and social policies surrounding work and family life.


Changes in Work Life: An Economic Perspective

Many of the economic changes that define the information age involve banking, trade, and globalization, but perhaps the most direct effect of the information age on families centers on the transformation from human physical effort to human intellectual effort in work environments. This has led to an overall shift away from manual labor toward service and information professions. For example, automation through the use of computers and robots has reduced the number of jobs needed in manufacturing. At the same time, the number of jobs requiring information technology skills has grown. Regardless of the profession, almost all work requires some level of knowledge of and expertise with the technological innovations that mark the information age. For example, health professionals use machines to collect and receive data from their patients, mechanics must be able to fix a car's computer, and teachers may use an interactive whiteboard when instructing. Thus, having some computer or technological training has become more of a requirement than an option for employment.

The information age and technological advances have also influenced the number of individuals who work, the number of hours they work, and how they work. In the information age, it is difficult to maintain a middle-class living without some education after high school. Thus, the income distribution among families became more unequal between 1980 and 2010, with better-educated, more technologically skilled workers earning higher pay and less skilled workers' earnings remaining unchanged. Additionally, for those who are able to secure employment, advances in technology allow for flexible work hours, as workers can access computers and complete some or all work from home. Although this provides more flexibility for balancing work and family responsibilities, the blurring of boundaries between work and family roles can produce new difficulties.
Changes in How Families Get Information: A Communication Perspective

As the technological landscape continues to grow, the ways in which families find and receive information have changed. For example, when parents have a question about child rearing or a medical issue, they no longer have to rely solely on pediatricians or parent educators. The evolution of the Internet

and search engines allows families to read articles or view videos at their own convenience from home, often finding answers to questions they previously would have asked extended family members or professionals. Being able to find information quickly helps families remain informed. At the same time, Internet users are left to make their own judgments about the accuracy of the information, which may conflict with information provided in health care settings or schools.

Further, finding information on the Internet is not limited to adults in families. The number of children and adolescents who own personal devices or have access to the Internet at home, at school, or in other public places has increased. Parents and adult family members have to think about how they will monitor youth usage, or deal with the consequences when they do not.

The Picture of Families in the Information Age

Like almost all technologies introduced in the past century, the introduction of computers and the Internet has led to pronouncements that technology will destroy families and interpersonal relationships. Those who oppose this view posit that modern technology will lead to a golden age of connected, networked individuals. Many of these assertions about the impact of information technology on families and social relationships are not based on scientific evidence; instead, many of the claims rest on anecdotes and personal experiences. One of the great challenges in understanding families in the information age is the difficulty of developing theories about how technology shapes family life, and methods for studying rapidly changing technological innovations. Despite these limitations, an emerging body of research evidence is beginning to outline the ways in which family relationships and communication within families are changing.

Dating and intimacy. The information age has changed the context in which people practice courtship.
People of all ages use online dating sites to meet others who are looking for romantic partners. Online dating sites are not used only for relationship initiation; they can also be used to facilitate extramarital relationships, as can other technologies such as e-mail and texting. Sometimes these relationships exist only as online flirting, but they can also
lead to face-to-face romantic interludes. Additionally, some individuals use technology to announce changes in their relationships or to provide commitment-related assurances to their partners (e.g., by updating their relationship status via Facebook). Other forms of modern technology can also be used to facilitate relationship maintenance (such as text messaging to express caring toward a romantic partner, make a partner laugh, or keep lines of communication open). Advances in technology have also helped families to organize routines (such as mobile applications for shared grocery lists to aid with mealtime routines).

Parent–child relationships. Much of the way the information age affects parent–child relationships is through the decisions parents make about technology, either intentionally or unintentionally. After the invention of the radio and television, parents had to make choices about what was appropriate and healthy for their children. Debates in this area have not ended but have instead intensified with the use


of personal digital devices. There are thousands of smartphone and tablet applications targeted toward children, many of which claim to be educational. As parents navigate access to technology, they may use new Internet-based media to avoid unwanted traditional media such as commercials; for example, parents can use Internet streaming services to select shows and watch them without advertising. For young children, parents still have to decide what and how much media is appropriate, but as children get older, parents must also consider how they will monitor their children's technology use. Teenagers and parents often hold different views about what information parents should have access to, and bullying is an area that can bring this issue to the forefront. Although bullying behavior is not a new phenomenon, the growth of social media has created a new domain in which it can occur, often called "cyberbullying." This "virtual" bullying can have real-life consequences for young people, such as triggering feelings of depression or even suicidal ideation. Although parents have always had to make

A young girl works on her homework at a computer, a scenario typical of the information age. The number of children and adolescents who have their own personal devices or have access to the Internet at home, school, or in other public places has grown. Parents and guardians are increasingly compelled to consider how they will monitor youth usage or deal with consequences when they do not.


communicative or information-seeking decisions based on research and their own value systems, the advances in technology pose new challenges related to the types of monitoring that can or should be done to manage relationships both on- and off-line.

Supporting Families in the Information Age

Online technology provides a new set of tools to help families. Increasingly, many professionals have begun to provide online education or therapy as a means of reaching, engaging, or helping individuals and families. The information age has enabled professionals to deliver education or therapy on a variety of topics (e.g., marriage or relationships, parenting, drug/alcohol abuse, stress/anxiety, or career skill development). Although online delivery of education or therapy provides the opportunity to reach wider audiences, professionals can also help individuals and families during the information age by promoting media literacy.

One example of a media literacy topic is the credibility of online information. Online sources of information include both amateurs and professionals, with agendas ranging from fraud to genuine efforts to improve family life. Professionals can support families by teaching them how to question the information they receive and verify that it is accurate. For example, although scientific evidence of a causal link between vaccinations and autism does not exist, a great deal of information linking the two is available via the Internet; this fuels many parents' apprehension about vaccinations or the schedules recommended by pediatricians, and, in turn, some children may not receive vaccines that have been shown to prevent serious illness or death. Professionals can support individuals and families by helping them to distinguish between reliable and inaccurate sources, and by providing media literacy education at a variety of levels. Although the information age brings opportunities for families, it is not without challenges.
Individuals and families will continue to need support so that they can appropriately use technological advances to receive, relay, and organize information in a manner that optimizes their family functioning.

Sarah L. Curtiss
Robert Hughes Jr.
Jill R. Bowers
University of Illinois at Urbana–Champaign

See Also: Blogs; Cell Phones; Digital Divide; Facebook; Internet; Internet Pornography, Child; Personal Computers in the Home; Skype; Technology; Telephones; Texting; YouTube.

Further Readings
Fang, Irving. A History of Mass Communication. Boston: Focal Press, 1997.
Hughes, Robert, Jr., and Jason D. Hans. "Effects of the Internet on Families." In Handbook of Contemporary Families, M. Coleman and L. Ganong, eds. Thousand Oaks, CA: Sage, 2004.
Levy, Frank and Richard Murnane. The New Division of Labor: How Computers Are Creating the Next Job Market. Princeton, NJ: Princeton University Press, 2005.

Inheritance

When property, titles, rights and obligations, debts, and other items pass to heirs upon the death of an individual, the practice is referred to as inheritance. Inheritance has played a central role in many societies throughout recorded history and has almost always been governed by a series of rules and regulations. The rules and regulations affecting inheritance have varied, and continue to vary, greatly from nation to nation, reflecting the values and mores of each culture. Changes in the laws governing inheritance are sometimes used to effect great societal transformations, such as when the laws of primogeniture were altered to permit younger children to inherit. The tax considerations of inheritance are often considerable, and a great many legal and contractual agreements are entered into in an effort to avoid the tax consequences of inheriting property. While many oppose taxes on inheritances, others decry inheritance itself, asserting that it leads to social stratification and inequitable distributions of wealth.

Background

When an individual dies, all of his or her property, real or personal, owned at the time of death constitutes that person's estate. Provided the individual has left a will, the estate passes, after the payment of debts and taxes, to those persons and organizations designated by the deceased. When a
person dies without a will, he or she is said to have died "intestate," and the estate passes to certain relatives of the deceased, if living, as set forth in state statutes. Certain assets owned by the deceased do not become part of the estate but instead pass through the operation of law; these include the proceeds from life insurance policies and property that is held in joint tenancy.

Laws of inheritance have a tremendous effect upon how property is distributed and how governments collect revenue. The United Kingdom established primogeniture and entail as a matter of law. As historically used in Europe, primogeniture was the right of firstborn children, in most cases firstborn males, to inherit their parents' estate to the exclusion of younger siblings. In the event that a couple or individual died without issue, inheritance of the estate passed to other relatives, again usually male, in order of the seniority of their lines of descent. Descendants of eligible siblings who were deceased took precedence over living younger siblings, making birth order tremendously important with regard to who inherited wealth and titles and who got little or nothing. Although not all parts of an estate were subject to primogeniture laws, the vast majority commonly were.

Primogeniture was established as a way of ensuring that families that enjoyed power and rank would be able to maintain that status rather than having it diluted by division over succeeding generations. Certain parcels of real property were also bound by entail, which prevented land from being sold, divided, or otherwise encumbered. Entail ensured that real property would remain under the control of a certain family. Combined, primogeniture and entail were a powerful means of ensuring that the social order of these nations remained little changed.
Although the various American colonies of the United Kingdom had laws establishing primogeniture and entail, the former colonies began to change this soon after the American Revolution. In 1777, Georgia became the first state to abolish primogeniture, although Virginia had passed a statute the year before that ended the practice by 1785. Other states soon followed, as American notions of democracy led to strong feelings against the practice; that many British "younger sons" had immigrated to the colonies probably also played a part in the decision. Most states never had laws permitting entail. Of those


that did, New York was the first to abolish it, in 1782. The concept still exists in four states: Delaware, Maine, Massachusetts, and Rhode Island. In Delaware, Maine, and Massachusetts, however, real property can be deeded, sold, or left to heirs the same as other property except when the titleholder dies intestate, in which case the entail operates. These changes permitted almost all personal and real property to be left as the person holding title sees fit, accelerating the spread of wealth.

Governmental Interests

Although inheritance involves the transfer of wealth from an individual to other persons or organizations, it is of keen concern to legislative bodies and government agencies. While federal and state estate taxes produce considerable revenues, fewer than 3 percent of adult deaths result in an estate large enough to require a federal estate tax return. In 2013, the Internal Revenue Service (IRS) collected tax only on estates greater than $5.25 million, as the U.S. Congress has been pressured by wealthy individuals to limit the taxes collected on money that was already taxed when earned by the deceased. In 2001, taxes were assessed on estates greater than $675,000. Attitudes regarding estate taxes change rapidly and often, as many favor assessing such taxes and a similar number vigorously oppose doing so.

The government is also interested in inheritance because of its concern for the survivors of the deceased, especially children and spouses. If these individuals inherit adequate funds, they will have little need for government assistance for their continued well-being. For this reason, the government has made certain tax concessions that encourage individuals to leave adequate funds to their heirs. Life insurance payouts to beneficiaries, for example, are not taxed.
The decision not to tax such amounts was made out of a desire to encourage individuals to purchase life insurance as a means of providing for their heirs.

Intestacy and Other Protections

In the event that an individual dies without leaving a will, state laws concerning intestacy control the distribution of the assets remaining in the estate after appropriate debts and taxes have been paid. Intestacy laws vary by jurisdiction, although most such statutes have much in common. Following


the common law of descent, intestacy laws provide a clear system for distributing the deceased's estate in the event that he or she failed to leave a will. A major portion of the estate usually goes first to the spouse, then to the children of the deceased, and finally to the descendants of those children if the children have predeceased the deceased. In the event that a person leaves no spouse or children, intestacy laws direct the court first to the deceased's parents, then to siblings, the siblings' descendants, grandparents, the parents' siblings, and the parents' siblings' descendants. How far up the family tree a court will go is determined by statute, and some jurisdictions do not provide inheritance rights to more remote degrees of kinship.

If a will surfaces, it is presented to a court so that a probate hearing can determine its validity. To be valid, a will must comply with state laws, which vary from jurisdiction to jurisdiction. In many jurisdictions, a valid will must have been executed by a person of sound mind over the age of majority. In most jurisdictions, a will must state that it is a will, be signed and dated at the bottom, and be executed before two or more witnesses; in some cases a handwritten, or holographic, will can be deemed valid even if not witnessed. If the court approves the will, an executor is appointed to carry out the provisions of the instrument. All debts left by the testator must be paid, and taxes due on the estate are turned over to the government. If sufficient funds remain, the executor then pays out the bequests indicated in the will. Although certain bequests may be made conditional on certain behaviors by an heir, courts will not enforce conditions that require illegal or immoral actions or that are against the public interest.

In certain cases, statutes permit an heir otherwise left out of a will to demand and receive a portion of the deceased's estate.
Such a right must be granted by statute and is known as an "elective share." An elective share, historically known as a "widow's share," permits a survivor of the deceased, usually the surviving spouse, to claim a portion of the estate that is greater than the share, if any, designated in the will. Although the amount awarded to the surviving spouse varies from jurisdiction to jurisdiction, generally the share is one-third to one-half of the total value of the estate after debts and taxes are paid. Some states require

a marriage to have lasted a certain amount of time before an elective share may be claimed, while others adjust the amount of the share depending on the duration of the marriage. A few states also permit the children of the deceased to elect a share of his or her estate, again in an amount determined by statute. Similar to laws related to intestacy, elective shares are permitted because state legislatures want to prevent a will from making the surviving spouse or minor children dependents of the state.

Effects of Inheritance

In addition to the real and personal property left by the deceased, certain estates also contain intellectual property that benefits the heirs. Such intellectual property usually consists of trademarks, copyrights, and patents owned by the deceased at the time of death. Patents give an inventor the exclusive right to profit from his or her invention for a certain period of time, generally 20 years. A trademark is a recognizable sign, logo, design, or expression that is owned by an individual, corporation, or organization for as long as the trademark remains active. A copyright grants the creator of a creative work exclusive rights to that work for a certain period of time.

In the United States, inheritance of copyrights is the most complex of the three, in part because changing laws have extended the term of copyright. Traditionally, copyrights lasted for 50 or 70 years after the death of the creator of a work. In the United States, changes in copyright law have permitted the heirs of creators to extend the period for which they may demand royalties. Although all books and other works published before 1923 are now in the public domain, others are not; works published before 1964 whose copyright was not renewed for a 28-year extension are also in the public domain. Pursuant to the Copyright Term Extension Act of 1998, the copyrights of authors extend for their lifetime plus 70 years.
In the event of corporate authorship, the copyright lasts for 120 years after the date of creation. These changes have made the inheritance of intellectual property more valuable, as they extend the period for which royalties may be collected. The concept of inheritance is controversial with some, as it is believed to contribute to social stratification and inequality. Affecting the distribution of wealth at a societal level, inheritance allows

certain members of society to benefit from their family connections while others are not able to do so. Because minorities and members of other groups are less socially and financially advantaged than others, inheritance contributes to maintaining existing inequalities. As economic inequality leads to differences in education, health, and overall quality of life, some question why the government should encourage or allow a system that permits some to benefit in this way. These assertions are met with equal vigor by supporters of inheritance, who often point out that allowing individuals to determine how their estates are used is no different from any other choice about how to spend or use assets. Additionally, supporters assert that if inheritance were forbidden, the wealthy could use gifts and other transfers of wealth to accomplish the same results. Despite criticism of the process, it seems likely that inheritance will continue to affect American families for years to come.

Stephen T. Schroth
Knox College

See Also: Adoption Laws; Almshouses; Child Custody; Community Property; Estate Planning; Estate Taxes; Power of Attorney; Wealthy Families.

Further Readings
Beyer, G. W. Wills, Trusts, and Estates: Examples and Explanations. 5th ed. New York: Wolters Kluwer Law and Business, 2012.
Dukeminier, J., R. H. Sitkoff, and J. Lindgren. Wills, Trusts, and Estates. 8th ed. New York: Aspen Publishers, 2009.
Friedman, Lawrence. Dead Hands: A Social History of Wills, Trusts, and Inheritance Law. Palo Alto, CA: Stanford University Press, 2009.

Inheritance Tax/Death Tax

The estate tax (sometimes referred to as an "inheritance tax" or the "death tax") is a tax on the transfer of wealth from the deceased to inheritors. The tax is chiefly applied by the federal government, although


states may also have their own versions. Opponents of the estate tax often refer to it as the "death tax." This colloquialism is something of a misnomer, as less than 1 percent of all estates are actually assessed an estate tax. Generally, only large estates are subject to the estate tax, as most estates are small enough to be excluded. Estates over the exemption are subject to a relatively high rate of tax (e.g., 40 percent for 2013); however, the average effective tax rate actually runs about 15 to 20 percent of an estate's total value. Estate tax revenues account for about 1 to 2 percent of all U.S. tax revenue.

History of the Tax

Federal taxes on transfers of wealth date back to 1797 and were used periodically in the 1800s and early 1900s to help fund wars or in times of crisis. Earlier "legacy taxes" taxed inheritances, whereas by the 1900s federal taxes targeted estates. The difference between the two is that an inheritance tax is imposed on an heir for the amount he or she inherits, while an estate tax is imposed directly on the estate, for its total value, before disbursement to heirs. In 1916, in response to World War I and to growing fears of rising inequality, the federal estate tax was established. This was the origin of the federal estate tax of today, and the tax has remained in effect continuously. Some states have their own estate and inheritance taxes; however, only a handful of states (fewer than 10 in 2013) have an inheritance tax, and each state figures this tax differently.

The rationale provided for the estate tax in the early 20th century varied. The most straightforward explanation was that some form of taxation was necessary to fund government, and taxing the estates of the deceased was one of the least controversial methods.
As industry replaced agriculture as the dominant means to wealth, and as structural changes to corporations and mergers took place, concerns about concentrated wealth and political power arose. Progressives such as President Theodore Roosevelt argued that estate and inheritance taxes could stem rising inequality, provide greater equality of opportunity, and address fears of plutocracy. Opponents of the tax, however, felt it would discourage entrepreneurship.

In the century since the enactment of the tax, there have been notable revisions and related changes to the tax code. In 1932, a gift tax was


added to the tax code, taxing gifts over a certain amount. This was done to prevent avoidance of the estate tax by giving away an estate before death. In 1935, the estate tax was amended to allow the value of an estate to be figured up to one year after the decedent's death. This amendment (the window was later reduced to six months) was enacted in response to the Great Depression, during which estates could devalue significantly after the deceased's passing. In 1948, amendments to both the gift and estate taxes allowed for tax-exempt transfers to spouses. Prior to 1976, it cost significantly less to transfer estates through gifts than at death, but reforms created a unified tax structure; "generation-skipping" efforts to avoid estate taxes were also addressed. Reforms in 1981 and 1997 indexed exemption amounts for inflation, revised credits, and created a family business deduction. The most sweeping reforms came in 2001, when the "Bush tax cuts" put in place a series of increases in the exemption amount until 2010, at which time the act would repeal the estate tax. Before the repeal fully took effect, Congress reinstated the estate tax in late 2010 with an exemption amount of $5 million (to be adjusted for inflation), which was made permanent in 2013.

Political and public debate over the estate tax was relatively uneventful until the mid- to late 1990s. The debate that led up to the repeal legislation reflected a larger "antitax" movement most often associated with the Republican Party, and the term death tax rose to prominence during this period and again when repeal was later debated. Although the vast majority of Americans will never be subject to the estate tax, public debate has typically involved confusion (i.e., the belief that the tax applies to them), idealism (e.g., the belief that they will become millionaires), or philosophical arguments (e.g., the belief that the tax is morally wrong and punishes hard work). Another objection involved concerns about bankrupting family farms and businesses.
However, estimates of how many family farms and businesses are subject to the estate tax range from less than 1 percent to 2.5 percent, and analyses find that most farms and businesses have enough liquid assets to pay estate taxes; in these situations, the tax may be paid over 15 years.

How the Estate Tax Works

A deceased's estate comprises all the assets in which the deceased had an ownership stake at the
time of death. Through deductions, the size of the taxable estate decreases, and what is left over is subject to being taxed. Deductions include last medical expenses, estate administrative expenses, funeral expenses, and the payment of outstanding debts. The law allows an unlimited amount of money to transfer to a spouse without incurring estate tax; it also allows the estate to give unlimited assets to charities without incurring the tax. If the taxable estate is less than what the federal government has determined to be exempt from taxation, then no tax is owed: any estate worth less than the exemption ($5,250,000 in 2013) is not subject to an estate tax, and any estate over the exemption is taxed only on the overage. Figuring the final tax owed, if any, involves a somewhat complicated process of tentative tax and estate tax credit computations that factor in the exemption amounts. In recent years, the tax rate that determines the amount of the estate tax has ranged from 35 percent to 55 percent, and it is currently set at a top rate of 40 percent. The estate tax return is due nine months after the date of death, and the estate of the deceased pays any tax due, generally before the estate is disbursed to the inheritors.

There are several ways to avoid, or at least minimize, estate tax. A popular strategy is to give away assets before death. In addition to marital and charitable transfers, money can be used to pay tuition or medical expenses directly for someone else without being taxed. The gift tax limits how much may be given tax free, although married couples may use "gift splitting" to give away more. This gifting limit is termed the annual exclusion, and gifts to children, their spouses, and grandchildren can quickly reduce the size of a modest estate. Other methods to avoid estate taxes include the establishment of trusts, the purchase of life insurance, the creation of a family limited partnership, and other complicated techniques.
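The exemption-and-overage arithmetic described in this section can be illustrated with a brief sketch. This is a deliberate simplification: an actual return involves the tentative tax and credit computations noted above, and the function name and figures (the 2013 exemption of $5,250,000 and 40 percent top rate) are used here for illustration only, not as IRS terminology.

```python
# Simplified estate tax sketch using 2013 figures.
# Real returns require tentative-tax and credit computations;
# this shows only the exemption-and-overage idea.

EXEMPTION_2013 = 5_250_000  # taxable estates below this owe nothing
TOP_RATE_2013 = 0.40        # top statutory rate in 2013

def estimated_estate_tax(taxable_estate: float) -> float:
    """Tax only the amount over the exemption, at the top rate."""
    overage = max(0.0, taxable_estate - EXEMPTION_2013)
    return overage * TOP_RATE_2013

# A $6.25 million taxable estate is taxed only on its $1 million overage:
tax = estimated_estate_tax(6_250_000)
print(tax)              # 400000.0
print(tax / 6_250_000)  # 0.064 (a 6.4% effective rate)
```

The example also shows why average effective rates (about 15 to 20 percent) run well below the 40 percent statutory rate: only the overage above the exemption is taxed.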
These methods seldom eliminate estate or gift taxes, but they can reduce the amount of estate or gift tax to be paid.

Graham McCaulley
Andrew Zumwalt
University of Missouri

See Also: Estate Planning; Estate Taxes; Family Farms; Inheritance; Trusts; Wills.

Further Readings
Graetz, Michael and Ian Shapiro. Death by a Thousand Cuts: The Fight Over Taxing Inherited Wealth. Princeton, NJ: Princeton University Press, 2006.
Internal Revenue Service. "Estate Tax." http://www.irs.gov/Businesses/Small-Businesses-&-Self-Employed/Estate-Tax (Accessed December 2013).
Jacobson, Darien, Brian Raub, and Barry Johnson. "The Estate Tax: Ninety Years and Counting." SOI Bulletin (2007). http://www.irs.gov/pub/irs-soi/ninetyestate.pdf (Accessed September 2013).

In-Laws

In-law relationships are formed by marriage. They include relationships between a person and his or her spouse's relatives, and relationships with blood relatives' spouses. In-law relationships are nonvoluntary for both parents- and children-in-law: they are involuntary for parents-in-law because of reduced parental influence on children's mate selections, and, similarly, children-in-law do not have a choice in selecting their spouse's parents. In-laws invest in maintaining these involuntary relationships because of their triadic nature. An in-law relationship is built upon each member's relationship with a common person: a spouse or blood relative. For example, a mother- and daughter-in-law may be eager to have a good relationship with each other because of their affection for the man who is their son and husband, respectively. Because of the involuntary nature of in-law relationships, their development often involves challenges and thus contributes to negative stereotypes. However, researchers have found both positive and negative aspects of in-law relationships and have investigated how they are associated with familial factors and the well-being of individuals and families.

In-Law Relationships

The most significant and most investigated in-law relationship is that between parent- and child-in-law. The relationship between mother- and daughter-in-law is known as the most difficult in-law relationship. In general, women play a role in maintaining extended family relationships, and


mother- and daughter-in-law frequently have more opportunities to contact and interact with each other as kin-keepers. These opportunities can bring tensions and conflicts, which is why the relationship between mother- and daughter-in-law is more challenging than other in-law relationships. For instance, issues related to child rearing can have both positive and negative influences on the relationship. On the one hand, mothers-in-law can give advice on child care and babysit their grandchildren. On the other hand, such advice and help can be a source of tension and frustration for daughters-in-law, or be considered intrusive, if a young mother has values and opinions on child rearing that differ from her mother-in-law’s. Compared to the relationship between mother- and daughter-in-law, the relationships involving fathers- and sons-in-law have received relatively little study. Sons-in-law tend to have more contact with their parents-in-law than do daughters-in-law. Parents-in-law, in addition, are likely to have more positive ties to their sons-in-law than to their daughters-in-law. Sibling-in-law relationships are peer-like and resemble sibling relationships with regard to sharing opinions and asking for advice. Thus, siblings prefer to have direct relationships with their siblings’ spouses rather than indirect relationships through their parents.

The Ambiguity of Norms, Perceptions, and Roles About In-Laws
Norms about in-law relationships are ambiguous in the United States. Because of this ambiguity, in-laws may bring different expectations and standards to their relationships. If there is a significant expectation gap between in-laws—concerning frequency of contact and support exchange, for example—satisfaction with the in-law relationship may be reduced. Perceptions of in-law roles are also obscure.
For example, caregiving for elderly parents is commonly considered the daughters’ role in the United States, and sons-in-law are often asked to be involved with direct and indirect caregiving and support for their parents-in-law. Caregiving for parents-in-law is not generally considered a daughter-in-law’s responsibility, but many daughters-in-law provide care to their parents-in-law by choice or out of feelings of obligation to their husbands. In-laws from different families of origin have their own rules, which may lead to discomfort,


stress, or misunderstanding. This can make in-laws struggle to adjust to each other’s family rules, especially in the early stages of developing in-law relationships. Some family rules are ambiguous to in-laws, and thus, without clear communication and negotiation between in-laws about expectations, constant tensions can arise in in-law interactions. Establishing new family rules that respect the traditional rules of both families of origin requires a two-way compromise rather than one-way assimilation. This two-way adjustment also helps in-laws share their identity as a family.

Factors Influencing In-Law Relationships
As with other family relationships, in-law relationships are influenced by contextual factors. Family history, such as divorce, remarriage, and the relational history of one’s family of origin, can affect the quality of in-law relationships. For example, earlier experience of one’s own or a child’s divorce can alter expectations in a new in-law relationship and can have either positive or negative effects on its quality. In addition, whether parents-in-law accepted or rejected their child’s spouse before marriage can influence the in-law relationship: acceptance leads to greater closeness in parent- and child-in-law relationships, and early rejection can cause later marital problems. Value consensus is another important factor. Differences in values regarding religion, politics, money management, child rearing, and worldview may create conflicts or distance between in-laws. If in-laws share similar values, all parties may get along better and experience less conflict. Sharing common interests can offer opportunities for in-laws to enjoy activities together and thus can increase the quality of in-law relationships. In addition, similar personalities and lifestyles are conducive to better in-law relationships.
Communication among in-laws influences in-law relationships significantly as well. Many parents- and children-in-law prefer having communication mediated, respectively, by their child and spouse to avoid direct communication with each other. Mediated communication may help reduce conflict between parent- and child-in-law and thus maintain positive in-law relationships. On the other hand, it can deter the parent- and child-in-law from developing close ties. Topics of communication also

matter. In general, disclosure from in-laws promotes a shared family identity by fostering feelings of acceptance as a family member. However, disclosure to in-laws of relational trouble, such as details about a family member’s marital problems, can damage the quality of in-law relationships. The frequency of contact with in-laws can have either a positive or negative effect on in-law relationships. On the one hand, frequent contact with in-laws means more interactions, thereby providing opportunities for developing in-law relationships; consequently, in-laws might end up with strong, positive relationships. On the other hand, limited contact with in-laws can prevent potential tensions and also result in positive in-law relationships. It is hard to balance loyalty between one’s family of origin and one’s in-law family given limited time, energy, and resources. For example, couples often have a hard time deciding with which birth family they should spend holidays because the choice might be considered a loyalty issue. Loyalty to one’s family of origin can hinder closeness with in-laws: having close relationships with in-laws requires spending time, energy, and resources that could be expended with one’s family of origin. Thus, choosing to be with one’s in-laws might be viewed as sacrificing one’s family of origin, so a daughter- or son-in-law might hesitate to develop close relationships with in-laws.

Associations Between In-Law Relationships and Other Family Relationships
The association between the marital relationship and the in-law relationship may be bidirectional. The marital relationship can affect the quality of the in-law relationship: when a couple is satisfied with their marriage, it serves as a foundation for in-law relationships, and the healthy couple may contact their in-laws more frequently and participate willingly in family events and activities.
However, a couple going through marital struggles often does not invest much energy in maintaining in-law relationships. Conversely, the in-law relationship can also influence the marital relationship in both positive and negative ways. Conflicts with in-laws may spill over into the couple’s relationship and consequently cause discord between spouses and threaten marital stability. If a spouse experiences conflicts with in-laws, it can bring about marital




conflict and even result in marital instability in the long term. On the other hand, a high-quality in-law relationship can strengthen a couple’s relationship: in-laws, as part of the couple’s social support network, can nurture the relationship and help prevent marital dissolution. The grandparent–grandchild relationship is usually mediated by the relationship between parent- and child-in-law. It is well known that grandparents’ interactions with a grandchild are affected by their relationship with their adult child. Researchers have suggested that the relationship with a son- or daughter-in-law might be even more influential than the relationship with the adult child where the grandparent–grandchild relationship is concerned. The grandparent–grandchild relationship is a blood relationship that is not dissolved by the divorce of a child’s parents. However, for grandparents with a divorced child, it is critical to maintain a relationship with the former son-in-law or former daughter-in-law in order to maintain contact with grandchildren, especially if their own child is not the custodial parent.

Juyoung Jang
University of Minnesota

See Also: Caring for the Elderly; Extended Families; Family Reunions; Nuclear Family.

Further Readings
Bryant, Chalandra M., Rand D. Conger, and Jennifer M. Meehan. “The Influence of In-Laws on Change in Marital Success.” Journal of Marriage and Family, v.63/3 (2001).
Lopata, Helena Znaniecka. “In-Laws and the Concept of Family.” Marriage & Family Review, v.28/3–4 (1999).
Morr Serewicz, Mary Claire. “The Difficulties of In-Law Relationships.” In Relating Difficulty: The Process of Constructing and Managing Difficult Interaction, D. Charles Kirkpatrick, Steve Duck, and Megan K. Foley, eds. Mahwah, NJ: Erlbaum, 2006.

Intensive Mothering

Intensive mothering is the normative, or ideal, mothering ideology in affluent Western countries, especially in the United States. According to sociologist


Sharon Hays, who coined the phrase intensive mothering in her landmark book The Cultural Contradictions of Motherhood, intensive mothering is the current ideology of—or pattern of beliefs and values for what counts as—“good mothering” today. Intensive mothering is a demanding approach to mothering that is child centered and requires mothers always to be responsive in terms of their interactions with their children and professional in terms of their concerted efforts to ensure that their children develop into happy, emotionally and physically healthy, and appropriately ethical children and future adults. The intensive approach holds mothers responsible for all aspects of their children’s physical, social, moral, emotional, and intellectual development. In addition to being child-centric in that a child’s needs are always to take precedence over a mother’s needs, intensive mothering expectations (IME) are also expert guided, emotionally absorbing, labor intensive, and financially expensive. As such, intensive mothering requires mothers to bring professional-level skills to their mothering, while also presuming and promoting at least middle-class economic standing and racial privilege. 
Finally, while not all mothers practice intensive mothering, the intensive mothering ideology and the embedded expectations promote a system of beliefs and values about what “good mothers” ought to do and, as a result, mothers who are unable or unwilling to meet IME risk being labeled “bad mothers.” Intensive mothering rests on at least five core beliefs: (1) no woman is complete until she has children; (2) children are sacred and, as a result, deserve and require constant nurturing and interaction with their biological mothers; (3) “good” mothers must devote their entire physical, emotional, and psychological being to their children all day, every day; (4) mothers must rely on experts to guide them in meeting their children’s needs; and (5) mothers must regard their mothering as more important than paid work. These core beliefs require all mothers to develop professional-level skills, such as those of a therapist, pediatrician, consumer-products safety instructor, and teacher, to assess, meet, and manage the needs of each of their children. In addition to creating impossible ideals of mothering, IME also define women first in relation to their children and encourage women to believe that mothering is the most important job

for women, regardless of any professional and/or educational success a woman has achieved.

The Rise of Intensive Mothering
Sharon Hays argues that intensive mothering began in the 1980s to redomesticate women through motherhood as more and more women took advantage of the large-scale social, legal, and gender changes brought about by the 1960s and 1970s social movements, particularly by feminist groups advocating gender equality between men and women. As more and more women became educated and entered the labor force, while also delaying motherhood to establish their careers, 1980s and 1990s media stories promoted “good mothering” values and beliefs while warning about the consequences of “bad mothering.” IME were embedded in media representations of mothers, particularly in terms of maternal advice, marketing, and “bad mother” stories, such as newspaper stories about children being abducted and/or molested while at day care and TV stories and news reports about children being taken away from “bad mothers”: neglectful, welfare, and/or drug-addicted mothers. In their work on the role the media play in promoting IME, Susan Douglas and Meredith Michaels suggest, however, that celebrity mom profiles—in magazines, on Web sites, and on TV shows, where celebrities extol the virtues of motherhood and always suggest that being a mother is their most important “role” or calling in life—have played the most important role in reinforcing and romanticizing intensive mothering and regulating the behavior of mothers and women into IME.

[Photo caption: Intensive mothering is a demanding approach to mothering that is child centered and requires mothers always to be responsive in their interactions with their children.]
Intensive Mothering Limits Gender Equality
Intensive mothering plays a key role in limiting or curtailing gender equality and keeping mothers the primary parent, regardless of the changes to gender roles and expectations that have occurred as a result of the 1960s and 1970s liberation movements. In other words, even though men’s and women’s lives have become more and more similar in terms of educational and professional access and expectations, becoming a mother fundamentally undermines this similarity and, equally important, changes women’s lives in ways that becoming a father most often does not change men’s lives, because mothers continue to be primarily responsible for child rearing. Once children arrive, mothers are the primary parent, even when women work and across class lines. Ann Crittenden also reveals that women’s responsibility for child rearing and care emerges even if a couple shared household labor before the arrival of a child. Ironically, then, many women today, particularly privileged women who are college educated, middle class, and professional, may not actually encounter overt gender discrimination until they become mothers. Crittenden has even suggested that “once a woman has a baby, the egalitarian office party is over.” Even if a mother continues to work after the birth of her children, because intensive mothering is so demanding, most mothers struggle to meet good mothering and good worker expectations simultaneously, particularly in comparison to fathers, who usually have fewer caregiving responsibilities at home. Thus, intensive mothering works to limit and undermine the similarity American men and women enjoy prior to having children, while also placing the burden of responsibility for child rearing and family life on mothers once children arrive.



Intensive Mothering and Privilege
Clearly, IME also require and presume access to economic and cultural resources, especially in terms of money and economic standing. Moreover, IME assume and require certain class-based behaviors, values, and practices. For example, because intensive mothers are responsible for nurturing and cultivating their children’s physical, social, moral, emotional, and intellectual development at every stage of growth, they are required to spend much time finding, securing, scheduling, and shuttling their children to the appropriate classes, activities, and/or extra support that experts now deem necessary for developing healthy children. This finding, securing, scheduling, and shuttling requires a variety of resources: the time and knowledge to research, find, and obtain a spot for a child in an activity, and the economic resources to pay for and attend the activities, to name just a few of the forms of cultural capital required of intensive mothers. This means, then, that intensive mothering assumes and promotes at least a middle-class standing as “necessary” for good mothering, while devaluing lower-income and working-class mothers because of their inability to provide the “necessary” resources embedded in IME. In addition to assuming economic privilege and displacing the non–middle class, intensive mothering also privileges white middle-class mothers over mothers of color who engage in different mothering practices. Many African American mothers, for example, engage in other mothering—the practice of accepting responsibility for a child that is not one’s own—and community mothering—taking care of, and taking responsibility for, one’s community.
Other and community mothering traditions are viewed as “deviant” within the intensive mothering ideology because these practices challenge and resist the belief that only blood mothers can care for children, refuse the practice of mothering only in isolation, and defy the notion that mothers must lavish all their attention on their children at their own expense. Even though these “deviant” mothering practices challenge intensive mothering ideologies in important ways, because these practices are viewed as problematic within the intensive ideology, many black women have been vilified for these practices, particularly in terms of discussions of


welfare mothers. Equally important, vilifying other and community mothering practices also encourages and privileges the practices of white, middle-class mothers above these and all other forms of mothering.

Intensive Mothering and “Professional” Mothering
Many well-educated, middle- and upper-class intensive mothers also engage in a class- and education-based form of “professional mothering.” Professional mothering is a specific approach to meeting IME that is entangled with the “experteeism” of intensive mothering: the requirement that good mothers know and understand the advice of experts. While all intensive mothers are required to know and follow expert advice—particularly medical, psychological, and child-rearing advice—not all mothers follow every piece of expert advice they learn. In fact, well-educated, middle-class mothers often seek out advice and then evaluate its appropriateness for each of their children based on each child’s natural aptitudes, skills, and abilities. In doing so, many well-educated, middle-class mothers who have or have had professional careers use their professional skills and resources in learning, assessing, and implementing the advice they deem appropriate for their children. “Professional mothers,” then, see mothering as a “career,” in which each mother’s primary objective is to use the knowledge and skills she has acquired in her professional life in what Annette Lareau calls the “concerted cultivation,” or conscious fostering, of children’s skills and talents.
In other words, a professional mother is a good mother because she uses her professional and educational knowledge and skills in the service of nurturing and cultivating the circumstances and conditions necessary to “maximize” and “produce” successful and happy children, while also using what she has deemed the most appropriate expert advice and methods to develop each of her children’s intellectual, creative, and physical skills fully and extensively.

D. Lynn O’Brien Hallstein
Boston University

See Also: Child-Rearing Practices; Mothers in the Workforce; Overmothering; Parenting Styles.


Further Readings
Crittenden, Ann. The Price of Motherhood: Why the Most Important Job in the World Is Still the Least Valued. New York: Henry Holt and Co., 2001.
Douglas, Susan J., and Meredith Michaels. The Mommy Myth: The Idealization of Motherhood and How It Has Undermined Women. New York: Free Press, 2004.
Green, Fiona Joy. “Intensive Mothering.” In Encyclopedia of Motherhood, Andrea O’Reilly, ed. Thousand Oaks, CA: Sage, 2010.
Hays, Sharon. The Cultural Contradictions of Motherhood. New Haven, CT: Yale University Press, 1996.
Lareau, Annette. Unequal Childhoods: Class, Race, and Family Life. Berkeley: University of California Press, 2003.
Nelson, Margaret K. Parenting Out of Control: Anxious Parents in Uncertain Times. New York: New York University Press, 2010.
O’Reilly, Andrea, ed. Mother Outlaws: Theories and Practices of Empowered Mothering. Toronto, Canada: Women’s Press, 2004.
Rutherford, Markella B. “The Social Value of Self-Esteem.” Social Science and Public Policy, v.48 (2011).
Vincent, Carol. “The Sociology of Mothering.” In The Routledge International Handbook of the Sociology of Education, Michael W. Apple, Stephen J. Ball, and Luis Armando Gandin, eds. New York: Routledge, 2010.

Interfaith Marriage

Marriage between people of two different religious backgrounds, formerly known as mixed marriage, is now commonly referred to as interfaith marriage. Because religious institutions and individuals often disapproved of marriage between members of different religious groups, the term mixed marriage came to be freighted with negative associations. In the second half of the 20th century, as such marriages became more common, “interfaith marriage” emerged as a nonjudgmental way of referring to couples from different religious backgrounds. That said, some members of such marriages prefer the term interreligious marriage, which they argue denotes differing religious heritages but not different belief structures. Definitions of interfaith marriage have also varied at different moments in history. Surveyors of the

contemporary American landscape such as the Pew Forum on Religion and Public Life and the Gallup poll tend to use a definition of interfaith marriage that considers marriages across major religious families. They track not only marriages between Christians, Jews, Hindus, Buddhists, and Muslims but also between at least three kinds of Christians: Catholics, mainline Protestants, and evangelical Protestants. While American polling data tend to examine only intra-Christian diversity, interfaith marriage might be seen to exist between two members of other broad categories of religion, such as between a Hindu and a Muslim. At some points in American history, people considered marriages between different but similar Protestant denominations to be interfaith, for instance, between a Presbyterian and a Methodist or a Lutheran and an Episcopalian. In the early 21st century, the rate of interfaith marriage is at a notable high in American history. Prominent scholars and public intellectuals Robert D. Putnam and David E. Campbell note that as many as 50 percent of American marriages begin as interfaith marriages. They further note that the average percentage of interfaith marriages in the country is closer to one-third, the difference coming from one spouse converting to the other’s religion or the couple together seeking out a third religion. They note that using more fine-grained distinctions that track interfaith marriages within mainline and evangelical Christianity results in a 10 percent jump in the rates of interfaith marriage. Putnam and Campbell point to the increased acceptance of interfaith marriage over the course of the 20th century, particularly between Protestants, Catholics, and Jews. 
They note that in a 1951 Gallup poll, 54 percent of Americans believed that a young couple who were in love but of “different religious faiths—Protestant, Catholic, or Jewish— should not get married.” In Gallup surveys taken in the early 1980s, approximately 80 percent of Americans approved both of marriage between Catholics and Protestants and between Jews and non-Jews. In a survey taken in 1990, only 24 percent of young adults believed that sharing a religion was “very important” to a marriage. This trend toward interfaith marriage likely results from increased contact across religious lines, and interfaith marriage is more common among more highly mobile social groups, including the college educated. While ethnic minorities and the more religiously devout are less likely



to intermarry, both because of familial pressure and because of more homogeneous social groups, in the decades since World War II all groups have experienced an increase in interfaith marriage.

History
While interfaith marriage became increasingly common in the 20th century, particularly in the second half of the century, it has always been part of the American landscape. Interfaith marriage has existed in the United States since before there was a United States. While a close examination of the many kinds and circumstances of interfaith marriage is beyond the scope of this article, some broad trends do emerge. Marriage across religious lines has tended to denote minority religious groups’ level of acceptance in the broader society. These marriages typically came about when minority groups socialized freely with majority groups. For example, French Huguenots in colonial New York married those of other religious groups at a rate of over 80 percent, while Jews from the same time period intermarried at a rate of between 10 and 15 percent. While neither group lived in a segregated community (a reality that made the colonial Jewish experience different from the European Jewish experience), this disparity suggests that the Protestant French Huguenots were more likely to be seen as acceptable marriage partners by the majority Protestant population of New York than were colonial Jews. That said, the lower intermarriage rate among colonial Jews reflects Jewish resistance to marriage outside the religion—a behavior strongly discouraged by both Jewish law and custom. At the same time, Jews, Catholics, and other religious minorities have used marriage to Protestants as a way to assimilate into the dominant culture. Historian Anne Rose notes that in the 19th century, men were expected to function largely as public citizens whose religious convictions were not part of their public role. Religion belonged in the home, and religious instruction of children was left to the women.
Therefore, a Jewish or Catholic man, in particular, could marry a Protestant woman, come to function socially as a member of her family, and raise Protestant children, functionally disappearing into the cultural mainstream. When a Protestant man married a Jewish or Catholic woman, he could allow her to maintain the home and raise her


children according to the tenets of her religious background, though tensions could arise if the sons became particularly devout. If interfaith marriage could serve as a steppingstone into the American establishment in cities, on the frontier it was simply a fact of life, because minority religious communities were either too small or too male dominated to sustain endogamous marriage. If a Jewish family was one of the only Jewish families in town, intermarriage was essentially inevitable. Similarly, patterns of immigration often created favorable conditions for intermarriage. Rather than arriving in family groups, as members of other religious minorities did, the first wave of Muslim immigrants to the United States was dominated by men. They often came to work as traders and peddlers, sending money home with the intent either of returning home themselves or of becoming established and then sending for their families. Without Muslim women to marry, unmarried peddlers often married women they met on the frontier.

Home Life
Debates about interfaith marriage have often focused on how the couple should conduct their home life. Individual religious organizations, for instance, have worried that marrying a nonbeliever would have a negative impact on religious practice. In the first half of the 20th century, as Catholics and Protestants married in increasing numbers, the Catholic Church created pamphlets that expressed concern about the domestic details of interfaith marriage. Because Catholics were forbidden to eat meat on Friday, would Protestant wives be willing to cook fish for their Catholic husbands and children on Friday? Would Protestant husbands force their Catholic wives to feed them meat on Fridays and ridicule them in front of their children for eating fish?
Differences such as these in Catholic and Protestant practice have largely disappeared, along with the sense that religious differences will trouble the Catholic/Protestant home, but the question of whether Jewish/Christian couples should have a Christmas tree continues to cause debate and controversy. Most contemporary rabbis who perform interfaith marriages require that the couple agree to maintain a Jewish home, marked by the exclusion of certain Christian practices, often focusing on the Christmas tree. These concerns that interfaith marriage will blur the boundaries between the


traditions are often articulated in terms of the children: Will the children be confused if they practice both Jewish and Christian traditions? Will the children learn to be good Catholics if there is dissent about Catholic practice in the home? Around the turn of the millennium, a new, more multicultural approach to both assimilation and home practice developed. Not only were minority cultures often less interested in assimilating into the mainstream (and therefore more likely to claim space for their traditions in the home), but the multicultural movement also created new space for interfaith families to maintain both sets of religious practices in their homes, rather than choosing one over the other. While previously such families were perceived as “unable to make up their minds,” multiculturalism offered them a moral groundwork for such choices.

Samira Mehta
Emory University

See Also: Catholicism; Christianity; Judaism and Orthodox Judaism.

Further Readings
Fishman, Sylvia Barack. Double or Nothing? Jewish Families and Mixed Marriage. Lebanon, NH: Brandeis University Press, 2004.
McGinity, Keren. Still Jewish: A History of Women and Intermarriage in America. New York: New York University Press, 2009.
Mehta, Samira K. “Beyond Chrismukkah: A Cultural History of the Christian/Jewish Blended Family From 1965 to 2010.” Ph.D. diss. Emory University, 2013.
Naff, Alixa. Becoming American: The Early Arab Immigrant Experience. Carbondale: Southern Illinois University Press, 1985.
Pew Forum on Religion and Public Life. Many Americans Mix Multiple Faiths: Easter, New Age Beliefs Widespread. Washington, DC: Pew Research Center, 2009.
Pew Forum on Religion and Public Life. U.S. Religious Landscape Survey: Religious Affiliation Diverse and Dynamic. Washington, DC: Pew Research Center, 2008.
Putnam, Robert D. and David E. Campbell. American Grace: How Religion Divides and Unites Us. New York: Simon & Schuster, 2012.

Rose, Anne C. Beloved Strangers: Interfaith Families in Nineteenth-Century America. Cambridge, MA: Harvard University Press, 2001.
Sarna, Jonathan D. American Judaism: A History. New Haven, CT: Yale University Press, 2001.

Intergenerational Transmission

Intergenerational transmission, the idea that advantages and disadvantages are passed on across generations, is an aspect of family life that has been studied for decades. Research on intergenerational transmission began in the 1960s with studies of status attainment, which explicitly considered how socioeconomic advantages and disadvantages are transmitted from parents to children, and has since been broadened to consider how a host of additional aspects of the social environment—including family formation, health, and crime and delinquency—are transmitted across generations. Given the importance of understanding how and under what conditions advantages and disadvantages are passed on across generations, as well as the relative availability of secondary data sources that include vast amounts of information about parents and their offspring, it is likely that researchers will continue to rigorously interrogate intergenerational transmission. The first examinations of intergenerational transmission were the studies of status attainment that began in the 1960s. This early research, spearheaded initially by Peter Blau and Otis Dudley Duncan, found that fathers transmit socioeconomic status—in the form of educational attainment and occupational prestige—to their sons. Highly educated fathers, compared to their counterparts, are more likely to have highly educated sons. Similarly, fathers with high occupational status are likely to have sons with high occupational status. Additional research on status attainment was spawned by the development of the Wisconsin Longitudinal Study, a longitudinal examination of men and women who graduated from Wisconsin high schools in 1957. This study allowed researchers to consider the social and psychological factors
involved in the intergenerational transmission of socioeconomic status. Seminal work by William Sewell, Archibald Haller, and Alejandro Portes, for example, found that parents' educational and occupational attainment is linked to children's attainment through peer influences and through children's educational and occupational aspirations. Additional work, especially books by Paul Willis and Jay MacLeod, described another mechanism—cultural norms and opportunities—linking parents' and children's socioeconomic status. But parents transmit more than socioeconomic status to their children. More recently, and spawned in large part by the development and refinement of the life course perspective, researchers have begun to consider additional types of advantages and disadvantages conferred from parents to children. One central tenet of the life course perspective, which more generally suggests the importance of timing and context in individual life trajectories, is the interconnectedness of individuals. Individuals live their lives interdependently and, in accordance with this principle, parents and children influence one another. Researchers have also been able to devote increasing attention to studies of intergenerational transmission because of the development of longitudinal data sources, including the National Survey of Families and Households, the National Longitudinal Survey of Youth, and the Fragile Families and Child Well-Being Study.

Family Formation and Dissolution
Aside from socioeconomic status, family formation and dissolution is perhaps the most commonly considered form of intergenerational transmission. Early research on family formation and dissolution considered how marriage, divorce, and relationship quality are transmitted across generations. For example, children who grow up with married parents, compared to children who grow up with divorced or single parents, are more likely to get married themselves.
Similarly, children of divorced parents are more likely to experience divorce in adulthood, and the association between parental divorce and child divorce is especially strong if the woman or if both partners experienced parental divorce. The association between parental divorce and child divorce is most commonly explained by the
social learning perspective, which posits that children learn behaviors within social contexts. Those who experience parental divorce, for example, may witness conflict between their parents that leads to interpersonal difficulties in their own adult relationships that, in turn, lead to divorce. This is consistent with other research showing that marital aggression and other measures of relationship quality are passed on across generations. Though parental divorce remains a strong predictor of children's divorce, it is important to note that a variety of other factors are associated with the probability of experiencing divorce, and the association between parental divorce and children's divorce has declined over time, likely because divorce has become more common and less stigmatized. But in addition to its association with children's family formation and dissolution, parental divorce has a host of additional, mostly negative consequences for children. Children of divorced parents, compared to their counterparts, are likely to have educational, behavioral, health, and socioeconomic disadvantages, though many of these negative associations are short-lived and dissipate within a few years of the parental divorce. In addition to research considering the intergenerational transmission of marriage, divorce, and relationship quality, more recent research considers how family behaviors such as cohabitation and nonmarital childbearing are transmitted across generations. It is especially important to consider these forms of intergenerational transmission given the stark demographic changes that have occurred in the United States in recent decades, including the dramatic rise of cohabitation and nonmarital childbearing. Similar to research on the intergenerational transmission of divorce, this research is motivated by social learning theory and finds that cohabitation and nonmarital childbearing are transmitted across generations.
Children who spend time living with cohabiting parents, compared to their counterparts, are more likely to cohabit before marriage (as opposed to getting married without prior cohabitation). This association between parental cohabitation and children’s cohabitation persists even after controlling for a variety of individual-level characteristics associated with cohabitation such as childhood socioeconomic status, attitudes about cohabitation, and family instability. Similarly, there exists a relationship between parents’ age at first birth and
child’s age at first birth. Parents who have children at a young age are likely to have children who do the same, and this relationship is especially true among unmarried parents. Finally, there is a burgeoning literature on the other ways in which parents transmit advantages and disadvantages to their children, and much of this literature focuses on the intergenerational transmission of health behaviors or criminal behavior and delinquency. With respect to health, both parental physical and mental health is associated with children’s health. Children who grow up with depressed mothers, for example, are more likely to experience depression and other mental health and behavioral problems than children without depressed mothers. With respect to criminal behavior, parents who engage in criminal behavior are likely to have children who engage in criminal behavior. Recent research, for example, shows that paternal incarceration is associated with delinquency throughout adolescence and adulthood and arrest in adulthood. Taken together, research on intergenerational transmission suggests that parents are influential in shaping the behaviors and attitudes of their children. Kristin Turney University of California, Irvine See Also: Life Course Perspective; Parenting; Social Mobility. Further Readings Blau, Peter and Otis Dudley Duncan. The American Occupational Structure. New York: John Wiley, 1967. MacLeod, Jay. Ain’t No Makin’ It: Aspirations and Attainment in a Low-Income Neighborhood. Boulder, CO: Westview Press, 1987. McLanahan, Sara S. “Fragile Families and the Reproduction of Poverty.” ANNALS of the American Academy of Political and Social Science, v.621 (2009). Sewell, William H., Archibald O. Haller, and Alejandro Portes. “The Educational and Early Occupational Attainment Process.” American Sociological Review, v.34/1 (1969). Willis, Paul. Learning to Labor: How Working-Class Kids Get Working-Class Jobs. New York: Columbia University Press, 1981.

Internet

The Internet, which has been available in the United States since the mid-1980s, has swiftly become a staple in the majority of U.S. homes and is changing the ways families across the country live, learn, and interact. It is the most recent major technological advancement in a long line of communication developments over the past 200 years, each one having a significant impact on social worlds in fundamental ways. Historically, every time a new communication technology is developed, critics raise concerns about the weakening of communities and human relationships. Such were the worries with the introduction of the telegraph, then as the telephone appeared, then when radio became prominent, followed by motion pictures, television, and now the Internet. What power and impact will it have on and in people's lives, especially those of children and adolescents? Most parents, family members, and individuals, but also educators and health officials, worry about the impact on family relationships and other significant subsystems in society. The Internet is a powerful communication medium and tool for all ages because it combines features of all the technologies preceding it. As a way to facilitate interpersonal communication, the Internet (i.e., e-mail, Skype) can be used as the telephone and telegraph were. As a way to get news, be entertained by shows, videos, and movies, and learn about products and services in communities and beyond, it serves the function of radio and television. The Internet also provides access to a "global library" of information, shopping, health information, and political, social, and economic commentary from around the world. In 2011, there were more families connected to the Internet than ever before; it has become a central fixture in the lives of families and permeates the large majority of households in the United States.
Estimates are that approximately 72 percent of households in the United States have at least one Internet connection, with approximately 100 million homes now having broadband (nondial-up) Internet connections. Children born within roughly the past 20 years are "digital natives," growing up in a world with Internet technologies in their lives and homes from the day they are
born. Middle-aged and older adults are "digital immigrants"; they have experienced families and a world before the digital age and are now adapting to an Internet-heavy world. Children and people in future generations will likely take Internet technologies for granted, quite similar to the way most of us now do the telephone and television.

Who's Connected and to What Extent?
Most families today are considered "networked families." In 2012, more homes and individuals in the United States had an Internet connection than ever before. The households most likely to have an Internet connection include those headed by young people; white non-Hispanic and Asian households; households with higher incomes; and families in which members have college degrees. Approximately 16 percent of Americans report having no connection at all to the Internet. The population of "nonconnected" is disproportionately older, Hispanic and/or African American, lower socioeconomic status, and poorly educated. Overall, 81 percent of adults in the United States between the ages of 18 and 29 are connected to the Internet. Among families and households, those most likely to be networked and Internet using are families consisting of a married couple with minor children. In such families, technology is more likely to be a central feature in the way they live, interact, and organize/manage their daily lives. Such families report using a variety of media tools and new technologies to keep in touch with each other, specifically high-speed/broadband Internet service in the family home, cell phone connections, and owning and using at least one personal computer in the home. In fact, having multiple "gadgets" has become an expected feature of family life in such highly connected families.
For example, according to Pew research, 58 percent of married-with-children families own two or more desktop or laptop computers, 76 percent of spouses and 84 percent of children in such families use the Internet, and 89 percent of such families own and use multiple cell phones. In 2009, approximately 8 percent of families did not have a computer in the home, and 4 percent had at least one computer that was not connected to the Internet. Parents report using the Internet in high numbers, but for different reasons and at different rates depending on whether they head single-parent or dual-parent families.


According to recent data, approximately 25 million mothers and 22 million fathers are online in the United States. Among all parents, only 58 percent of single parents use the Internet, compared to more than 75 percent of married parents. Among users, mothers report using the Internet most often from home, whereas fathers are more likely to use it from both home and work. Mothers, more than fathers, use the Internet to acquire information about spiritual, health, and medical issues and weight loss; fathers, more than mothers, report using the Internet to learn about hobbies or seek leisure information, seek news and current event information, gather financial knowledge, or look at government Web sites. Among single parents, single fathers log on to the Internet more often than do single mothers, and single parents tend to use the Internet more often to communicate and connect socially, whereas married parents tend to use it more for research and gathering health, medical, and financial information. Children in single-parent versus two-parent homes also seem to be using the Internet in slightly different ways and for different purposes. A study in 2000 reported that teens of single parents were more likely than teens in two-parent families to use the Internet for chat room conversations, downloading or playing games, or simply for entertainment/fun. Teens in two-parent families were more likely to search for and buy products and use the Internet for news and information. Experts worry that the Internet might serve too often as a babysitter or nanny for children in single-parent households, fulfilling social or other needs. Internet connections in the home come in a variety of speeds and qualities, and more than ever, access to the Internet is changing, with many family members—especially teens and young adults—going online using smartphones and gaming devices.
Still, the home computer is the staple for Internet connection for a majority of families, and the speed they demand in their Internet service has steadily risen during the past decade. The number of family homes adopting broadband, nondial-up Internet service has increased dramatically in the last decade. In 2012, in families with children between the ages of 12 and 17, more than 76 percent had some type of home broadband Internet access, compared with 71 percent in 2008. Among those, approximately 32 percent of families in the United States had a cable modem Internet connection,
another 30 percent had a DSL phone line, approximately 11 percent had wireless Internet, and 3 percent had a fiber-optic connection. Only about 10 percent of Internet-using families had a dial-up connection as of 2009.

Family Outcomes: Positive and Negative Effects of Internet Connectedness
With more families than ever being wired and networked, the outcomes and effects of changes in family dynamics have been a point of both scrutiny and curiosity among family members themselves, as well as among family researchers, therapists, social workers, educators, lawmakers, government agencies, and family health care providers. While there is no definitive answer to the question of whether the Internet and related technologies are ultimately good or bad for families, there seem to be both upsides and downsides to Internet technologies for overall family functioning, closeness, connectivity, and family life management. Families feel closer, or at least not more distant. Although the digital age has increased the amount of time many family members spend away from home, and many fear that technologies are pulling families apart, most family members themselves report a positive or neutral, rather than negative, impact of new technologies on families' feelings of closeness and connection. More individuals than not credit the Internet and cell phones with helping their current family feel a closer bond than the family in which they grew up; a majority simply believe that despite increased time away from family and the increased potential for distraction, new technologies have not negatively affected the level of closeness felt among family members.
Couples in families report that Internet media options have allowed them to stay in touch more frequently throughout the day, and that, overall, Internet and cell phone connections have allowed family members to better coordinate family life and have provided more opportunities for day-to-day communication with other family members, even those with whom they share a household. Shared experiences and opportunities for communication. Part of what has created increased feelings of closeness is not only staying connected while physically apart during the day (both spouses, and
parents and children) but also the increased use of the Internet together—for entertainment or exploration—when they return home in the evening. Called "screen-sharing" by some, this shared use is credited by researchers and family members with benefits such as shared moments of mundane yet important family communication, coordination of activities, and increased knowledge of each other's lives, thoughts, and interests. In fact, the Internet and having multiple computers in the home have been credited with more family interaction, less time watching TV, and members more likely to share "Hey, check this out!" moments. While there are fears that Internet and computer technologies can create isolation of family members—when an individual focuses too much on or becomes addicted to Internet offerings (social media, gaming, online gambling, etc.)—more than 50 percent of current Internet users in families where members share a household with a spouse and one or more children report using the Internet with another member of the family at least a few times each week. Approximately one-third of such families report "shared screen moments" at least occasionally. In many households across the country, family members are simultaneously together while also connecting and communicating outside the family system to friends and extended family members living elsewhere, or using social media as outlets and/or information sources. On the other hand, some research has found that time spent with family was actually negatively affected by family members' Internet use. Increased conflict, poorer relationships, and fewer shared meals. While the positive and negative outcomes of increasingly wired families are not yet fully known, early indications point to a few potential downsides and emerging conflicts between family members.
For instance, Internet-using families work longer hours, are becoming less likely to eat dinner together, and in some studies are less likely than families who own fewer high-tech gadgets to report high levels of family satisfaction. Further, conflicts are frequently reported among teens and their parents, and sometimes between couples, about the amount of time spent and the type of content accessed on Internet-connected computers in the home. Some studies report, for example, that when teens use the Internet for entertainment or social needs but parents believe the Internet should be used more for educational purposes, conflict increases between parents and adolescents. Parents frequently express concern that their children's Internet time is creating less family time. Even a large majority of teens (69 percent of girls, 59 percent of boys) themselves agree and express concern that their use of the Internet might be taking away time that might otherwise be spent with their families. There is also evidence that when adolescents engage in heavy Internet use, the quality of their relationships with parents and friends is lower than that of adolescents who have low Internet use.

An Air Force sergeant on deployment watches the birth of his child over the Internet via a Webcam, August 12, 2004. While there are both upsides and downsides to the role of the Internet in the modern family, most feel the Internet has created increased feelings of closeness among families by allowing them to stay connected even when physically apart.

Parents have further expressed concern that the Internet is exposing children to undesirable commercial products, advertising, and consumer ideas, and encourages children to disclose household information in ways they are not comfortable with and would not allow in other contexts. Further, because in many families children know more about computers and the Internet than do parents or other elders in the family, children are
frequently adopting new roles and a heightened status in the family hierarchy and in family decision making, which sometimes results in intergenerational conflicts. Digital family helper and assistant. According to many parents, the Internet has revolutionized parenthood, empowering them with tools for the core tasks of family life, health, and family-centric information gathering. For instance, approximately 30 percent of parents say that options available via the Internet improve time spent with children, have given them improved opportunities for shopping, especially for birthday and holiday gifts, and improve planning of weekend trips and family outings. Nearly 20 percent of parents say the Internet has improved the way they can care for their children's health, and a large majority of parents believe the Internet is helping their children do better in school, provides opportunities for better research and resource gathering for educational projects and tasks, and is essential to know, use, and understand
for children to grow up and be successful in the world. The Internet has infiltrated the daily routines of many individuals, and the majority of users report great satisfaction with how the Internet has positively affected their own lives and, most importantly, their ability to communicate with family and friends. A huge majority (88 percent) of Americans who use the Internet say it plays a key role in daily life and that they use the Internet most often to communicate with friends and family, as well as to find useful information with little effort. Of individuals who interact with friends and family, nearly 80 percent say they use the Internet for such communication, and over 50 percent of Internet users report creating and responding to invitations, sending cards, and exchanging greetings online. Currently, there is little argument that the Internet has had a significant impact on the social and daily lives of children, youth, and adults of all ages; the disagreement remains only about to what extent and in what way the Internet is valuable or having a negative impact.

Carol J. Bruess
University of St. Thomas

See Also: E-Mail; Facebook; Flickr; Information Age; Online Shopping; Parental Controls; Personal Computers; Technology; Television; Texting; Wii; YouTube.

Further Readings
Bargh, John A. and Katelyn Y. A. McKenna. "The Internet and Social Life." Annual Review of Psychology, v.55 (2004).
Bryant, J. A. and J. Bryant. "Implications of Living in a Wired Family: New Directions in Family and Media Research." In The Family Communication Sourcebook, L. Turner and R. West, eds. Thousand Oaks, CA: Sage, 2006.
Kennedy, Tracy L. M., Aaron Smith, Amy T. Wells, and Barry Wellman. "Networked Families: Parents and Spouses Are Using the Internet and Cell Phones to Create a 'New Connectedness' That Builds on Remote Connections and Shared Internet Experiences." Washington, DC: Pew Internet and American Life Project, 2008.
Mesch, Gustavo S. "Family Relations and the Internet: Exploring a Family Boundaries Approach." Journal of Family Communication, v.6/2 (2006).
U.S. Census Bureau. "Computer and Internet Use in the United States" (May 2013). http://www.census.gov/prod/2013pubs/p20-569.pdf (Accessed May 2013).

Internet Pornography, Child

Child pornography and child sexual abuse are not new phenomena; however, the emergence of modern technologies, such as digital cameras, recording devices, and, most significantly, the Internet, has dramatically changed the nature and extent of these crimes. The Internet has made it possible for offenders to produce, distribute, and access an ever-increasing quantity of pornographic materials and content instantly, anonymously, and relatively inexpensively. Moreover, the Internet has made it possible for offenders to engage in the sexual exploitation of children in real-time, interactive experiences via chat rooms, instant messaging, live streaming, and more. Children victimized in the production of child pornography often experience serious physical and emotional consequences, as do their families. As the Internet has facilitated the proliferation of child pornography, families have had to adapt to the changing nature of child sexual abuse with greater vigilance and specific prevention efforts to protect their children from sexual exploitation on the Internet. The Internet has promoted child pornography in a number of specific ways. It has provided offenders with a convenient means to collect and share an exponentially greater quantity of high-quality digital materials, including images, video, and audio, from the comfort and privacy of their own homes at minimal cost and with relatively low risk of detection or legal consequences. The anonymity of the Internet has also allowed offenders to avoid social stigma and, simultaneously, connect with other offenders who help them normalize their sexual attraction to children as well as sexual fantasies that would otherwise be considered deviant. Furthermore, the Internet has enabled opportunities for offenders to contact children and solicit children for sexual purposes. In effect, the Internet has likely
facilitated both an increase in child pornography and an increase in child sexual abuse, either to meet the high demand for pornographic content or as a result of offenders' desire to carry out sexual fantasies fueled by pornographic content on the Internet.

What Is Child Pornography?
Because of differences across jurisdictions in defining both childhood and pornography, there is no universal definition of child pornography. Current federal legislation in the United States defines child pornography as the visual depiction of obscene, sexually explicit, or lascivious conduct involving a minor under the age of 18. More broadly, child pornography may be defined as the visual recording of child sexual abuse. In the United States, the production, distribution, or possession of child pornography is illegal and punishable under the law. The law does not, however, prohibit computer-generated images or images of persons over the age of 18 who appear to be minors.

Extent of Child Pornography
It is difficult to ascertain the true extent of child pornography on the Internet because no reliable statistical source tracks and reports it. Nevertheless, a number of indicators suggest that child pornography on the Internet is a serious and growing problem. In 2012, the Internet Watch Foundation found 1,561 Web domains with content containing images of child sexual abuse. The Internet Watch Foundation concluded that child pornography Web sites were among the fastest-growing businesses online and that the content on these sites often depicted the worst types of child sexual abuse, such as penetrative sex between adults and children, sadism, and bestiality. Other research on child pornography has estimated that, at any given time, there are more than 1 million pornographic images of children circulating on the Internet, with about 200 new images posted daily.
These numbers may, in fact, be quite modest considering that much of the child pornography on the Internet is password protected, hidden, and accessible only to technologically savvy offenders through file-sharing programs. Anecdotal reports from law enforcement suggest that offenders arrested for child pornography often have massive collections, in the hundreds of thousands, of pornographic images featuring children.


Enforcing Laws Against Child Pornography
Although there are strict laws against child pornography, enforcing those laws is often challenging. Pornographic materials are generally produced in one location, stored in another location, and distributed from still another location. Consumers, then, are able to access pornographic materials from numerous other locations. Therefore, the production, distribution, and consumption of pornographic materials cross jurisdictions, national and international, which makes it particularly difficult and complicated for law enforcement to track and pursue cases. Furthermore, technological innovations, such as new file-sharing sites and programs, often remain one step ahead of law enforcement, further complicating the strict enforcement of laws against child pornography on the Internet. Nevertheless, the U.S. Department of Justice (DOJ) has taken a number of steps to fight child pornography on the Internet. It has funded a CyberTipline at www.cybertipline.com to act as a national clearinghouse for reports of child pornography on the Internet. The CyberTipline is operated by the National Center for Missing and Exploited Children. The DOJ has also created regional Internet Crimes Against Children Task Forces to help state and local law enforcement agencies coordinate efforts in pursuit of child pornography offenders. Finally, the DOJ has funded specialized Internet child exploitation units in federal law enforcement agencies to monitor the Internet for child pornography, conduct undercover investigations to identify potential offenders, train other agencies on how to investigate child pornography cases, and conduct forensic examinations of computers to search for child pornography.

Characteristics of Offenders
Research indicates that offenders are a diverse group with varied interests and reasons for participating in child pornography.
Some are pedophiles who use child pornography for sexual fantasy and gratification; some are sexually indiscriminate and use child pornography as one of many forms of sexual stimuli; others are merely interested in the potential financial profits from involvement in the child pornography market. The National Juvenile Online Victimization Survey, which was conducted in 2005 by the National
Center for Missing and Exploited Children, surveyed law enforcement agencies across the United States to count arrests for child pornography on the Internet and describe the characteristics of the offenders, victims, and crimes. The survey found that almost all offenders were male and, on average, typical offenders were unmarried, white, and over the age of 25. About 40 percent of those arrested were dual offenders who were also arrested for the physical victimization of children. Furthermore, the survey found that more than 80 percent of offenders had collected images of prepubescent children and images of children involved in penetrative sex, and more than 20 percent of offenders had collected images of children involved in bondage, rape, and torture.

Consequences for Children and Families
The children involved in child pornography experience multiple victimizations. The first victimization occurs when their abuse is perpetrated and recorded, but the victimization recurs each time that record of abuse is accessed. Surveys of victims indicate that this type of sexual exploitation can result in both short-term and long-term physical and psychological consequences. Children report pain, psychological distress, shame, anxiety, hopelessness, and difficulty in establishing healthy emotional and sexual relationships. These consequences can be devastating for children as well as their families. Parents, too, often experience severe psychological consequences, including guilt, sadness, and anger, when learning about their child’s experiences of sexual abuse and exploitation. Families are responding to the changing nature of child pornography by taking a number of steps to protect their children from exposure to offenders: raising awareness about these types of crimes, restricting Internet access, monitoring Internet use, and talking to their children about the risks of sharing too much personal information on the Internet.

Julie Ahmad Siddique
William Paterson University

See Also: Center for Missing and Exploited Children; Child Abuse; Internet.

Further Readings
Jenkins, Philip. Beyond Tolerance: Child Pornography Online. New York: New York University Press, 2001.
Quayle, Ethel and Kurt Ribisl, eds. Understanding and Preventing Online Sexual Exploitation of Children. New York: Routledge, 2012.
Taylor, Max and Ethel Quayle. Child Pornography: An Internet Crime. New York: Brunner-Routledge, 2003.
Wolak, Janis, David Finkelhor, and Kimberly Mitchell. Child-Pornography Possessors Arrested in Internet-Related Crimes: Findings From the National Juvenile Online Victimization Study. Alexandria, VA: National Center for Missing and Exploited Children, 2005.
Wortley, Richard and Stephen Smallbone. Internet Child Pornography: Causes, Investigation, and Prevention. Westport, CT: Praeger, 2012.

Interracial Marriage

Interracial marriage represents a form of exogamy—that is, out-group marriage—in which two people from different racial groups marry. For instance, marriage between an Asian American individual and a European American individual is considered an interracial marriage. For a long time, de jure and de facto restrictions on intermarriage prevented people from engaging in interracial relationships. In the United States, antimiscegenation laws forbidding interracial marriage between blacks and whites go back as far as the 1600s. Interracial marriages were also forbidden between Asian immigrants and U.S. citizens. The 1967 Loving v. Virginia Supreme Court decision, which declared antimiscegenation laws unconstitutional, ushered in an era of growing social acceptance of interracial unions, which in turn led to an increase in the number of interracial relationships. In the last decade alone, there has been a 28 percent increase in the number of interracial marriages in the United States. Interracial marriages are especially prevalent in the West and less common in the Northeast, Midwest, and South. The highest rates of intermarriage in the last decade have been observed among Asians and Latinos. Several mate selection theories, such as the caste and exchange theory, structural theory, the accessibility hypothesis, and racial motivation theory, have been used to explain why individuals choose a mate outside their racial group. In addition, research has identified race, age, marital status, nativity status, educational level, phenotype, and residential location as predictors of interracial marriage choice. The number of multiracial children, that is, children born to interracial couples, has also grown considerably in recent decades. These children are generally well adjusted, although some experience difficulties associated with social challenges.

History of Interracial Marriages
Prior to slavery, sexual relations between black and white indentured servants were not uncommon. During slavery, interracial sexual relations frequently occurred between white slaveholders and enslaved black women. Initially these sexual liaisons were encouraged because children produced by such unions were also considered property of the slaveholder. However, as sexual relationships between white women and black slaves increased, laws were introduced to prevent the spread of interracial marriages. Just over 10 years after the first African slaves were transported to the colony, in 1630 the Virginia Assembly ordered the whipping of a white man for lying with an African slave woman. Numerous other laws restricting and prohibiting interracial marriages between blacks and whites followed this order. In 1662, Virginia passed a law against interracial sexual relations, and about 30 years later the first statutory prohibition of interracial marriage between blacks and whites was enacted. Throughout the following decades, a series of laws raised the fines and the severity of punishment for interracial marriage. In 1818, Virginia extended its antimiscegenation laws to Virginians who married blacks out of state; the purpose was to prevent Virginians from marrying interracially outside the state and then returning to Virginia. Scholars have argued that the deconstruction of legal barriers, for example, 1954’s Brown v.
Board of Education and the Civil Rights Act of 1964 and Voting Rights Act of 1965, contributed to the gradual increase in the number of interracial marriages in the United States. Namely, these changes facilitated integration between African Americans and European Americans in social settings such as education and work. As such, these policies also paved the way for the landmark ruling in the Loving v. Virginia case, which in effect legalized interracial marriages. The legal case involved Mildred Jeter, a black American woman, and Richard Loving, a white American man, who married in 1958 in Washington, D.C., where interracial marriages were legal at the time. Upon their marriage, they returned to Virginia, where they were arrested. They were given a choice between a one- to three-year prison sentence and banishment from Virginia for violating the state’s antimiscegenation law. The couple moved to Washington, D.C., where they experienced much hardship as a result of discrimination. Their legal case was taken up by two lawyers who agreed to represent them pro bono. After nine years of legal battling, their case reached the U.S. Supreme Court. In 1967, Virginia’s antimiscegenation law, which prohibited interracial marriage with an exception for persons who had only one-sixteenth or less Indian blood, was overruled by the Supreme Court. This historic decision set a legal precedent for repealing antimiscegenation laws throughout the United States. Though legally no longer enforceable, antimiscegenation clauses remained part of the state constitutions of South Carolina until 1998 and Alabama until 2000.

Interracial Marriages Today
Since the legalization of interracial marriages, public attitudes toward interracial relationships have become more accepting, which has contributed to the growing number of such unions. According to the 2010 U.S. Census, there was a 28 percent increase in interracial marriages in the last decade (from 7 percent of marriages in 2000 to 10 percent in 2010). There is considerable regional variation in interracial marriage rates. Marriage between two people of different racial backgrounds is particularly prevalent in the West, where 11 percent of marriages are interracial. Such marriages are much less common in the Midwest, Northeast, and South, where only 4 to 6 percent of marriages occur between people of different races. The highest rates of interracial marriage are found in ethno-racially diverse Hawai‘i, where 37 percent of all marriages are between people of different races.
Alaska and Oklahoma are also home to a high proportion of interracially married couples (28 percent). Demographers attribute these high interracial marriage rates to the relatively large Native populations in these states. Research also suggests that people are particularly likely to date and marry outside their race if they are Asian, Native American, or Hispanic; native born rather than immigrant; young; highly educated; light-skinned; or living in urban areas. According to Wendy Wang at the Pew Research Center, the highest rates of racial out-marriage occur among Asians and Hispanics (about 25 percent). Intermarriage rates vary considerably by nativity status within and across these two groups. Among Hispanics, the native born are more likely to out-marry than immigrants (36 percent versus 14 percent). This disparity is much smaller among Asians (37 percent versus 24 percent). Unlike Asian and Hispanic Americans, a smaller proportion of black Americans marry across racial lines (about 17 percent). Interracial marriages occur at even lower rates among white Americans; only about one out of 10 white Americans is in an interracial marriage. Between 2000 and 2010, interracial marriage rates actually declined among Asian Americans, whereas they increased among black Americans. The Pew Research Center also reported that white–Hispanic couples accounted for 43 percent, white–Asian couples 14 percent, and white–black couples 12 percent of all interracial or interethnic marriages in 2010. Notably, gender differences in interracial marriage rates can be observed among Asian and black Americans. Among black Americans, men are more likely to out-marry than women; the opposite pattern is found among Asian Americans.

Theories of Interracial Mate Selection
Several theories of mate selection have been put forth to explain why individuals choose a partner outside their race. Based on the classic mate selection theory of social exchange, most theories have focused on the exchange of relationship assets unique to cross-racial partners. Theories such as the caste and exchange theory, the accessibility hypothesis, racial motivation theory, and structural theory have been proposed to explain cross-race mate selection. It is important to note that many of these theories were developed to understand interracial partner choice among black Americans and white Americans.

Caste and exchange theory/hypogamy. According to social exchange theory, individuals select their mates based on an exchange of the personal assets that each partner brings to the relationship. As an extension of social exchange theory, the caste and exchange theory proposes that U.S. society is structured on a caste system of race, within which black Americans belong to an inferior caste. Thus, when black Americans engage in an interracial relationship, they gain a higher racial status from being with a white partner. In exchange for the higher racial position, they offer relationship assets that their white partner will find attractive, such as higher economic or occupational status or physical beauty; their European American partner in turn brings a superior racial status in society. The caste and exchange theory is also known as status hypogamy or hypergamy. According to the hypogamy theory, white women usually marry down in terms of their racial status when they marry black men. However, in terms of economic or occupational status, white women tend to experience status hypergamy; that is, they tend to marry up when marrying black men who are more educated and economically more advanced than they are.

Accessibility hypothesis. According to the accessibility hypothesis, black men choose to engage in relationships with white women because of their increased accessibility. Certainly during slavery and even up to the 1960s, white women were forbidden fruit for African American men, who therefore did not have social access to these women. It was not until the social climate became more tolerant of interracial unions that black men gained greater access to white women. Even today, in some regions of the United States, couples consisting of a black man and a white woman may experience social disapproval ranging from stares to verbal or physical harassment.

Racial motivation theory. Racial motivation theory proposes that individuals who opt to be in an interracial relationship exchange racially based relationship assets. In other words, individuals in an interracial relationship are attracted to their partner because of his or her different racial background. Usually, but not exclusively, physical characteristics such as skin color, body structure, hair, or facial features are the relationship assets that individuals find attractive in a person of another race. In addition to being attracted to a person with a different physical appearance, individuals sometimes choose a mate across racial lines out of rebellion toward their parents.

Structural theory. Structural theory proposes that interracial couples choose their mates for the same reasons as same-race couples. Individuals who become part of an interracial relationship do so because they meet, discover that they have similar interests and values, and find that they can relate to each other based on these commonalities. Consequently, love and a romantic relationship between interracial partners develop out of attraction based on compatibility rather than attraction based on racial difference.

[Photo: An interracial couple in England in the early 1900s. The United States had laws against interracial marriage for much of its history, some of which were still part of state constitutions until the 21st century.]

Children of Interracial Marriages
There has been considerable growth in the number of children born to interracial couples. In fact,
multiracial individuals were one of the fastest-growing racial subgroups in the last decade, and those under age 18 accounted for a considerable proportion of the multiracial population. Historically, multiracial children were relegated to the lower-status parent’s racial group. For instance, the one-drop rule automatically classified black-white biracial children as black. Since the 1990s, multiracial people have garnered increasing social acceptance, which culminated in the 2000 U.S. Census allowing respondents to “check all that apply” in response to the race question. This public recognition of the multiracial experience, along with the election of the first black-white biracial U.S. president, has enhanced social acceptance of multiracial people. Recent research suggests that contemporary multiracial youth have more freedom in how they identify racially. Today their racial identity options include monoracial minority, white, biracial, situational, or aracial. These identities are fluid and may vary across context and over time. A host of child, family, and contextual influences have been identified in the literature as important predictors of multiracial youth’s racial identification. These include, but are not limited to, child gender, age, race, parents’ education, family socioeconomic status, the type (public versus private) and ethno-racial composition of the school, and families’ residential location. Recent research also suggests that racial identification has developmental consequences for multiracial youth. Rather than pointing to a single ideal racial identity, however, studies indicate that the implications of racial identification for youth adjustment depend on the complex interplay of identity choice, youth and family characteristics, and the specific social ecology in which multiracial youth live. Multiracial youth who feel unaccepted by their social environment or perceive their chosen racial identity to be questioned or denied may experience adjustment difficulties.
Despite some social challenges, multiracial youth are also thought to be more tolerant of diversity and to exhibit cognitive flexibility and bicultural competence.

Annamaria Csizmadia
University of Connecticut

See Also: Life Course Perspective; Miscegenation; Multiracial Families; Urban Families.


Further Readings
Csizmadia, Annamaria, David L. Brunsma, and Teresa M. Cooney. “Racial Identification and Developmental Outcomes Among Black-White Multiracial Youth: A Review From a Life Course Perspective.” Advances in Life Course Research, v.17 (2012).
Lewis, Robert, Jr., and George Yancey. “Racial and Nonracial Factors That Influence Spouse Choice in Black/White Marriages.” Journal of Black Studies, v.28 (1997).
Lofquist, Daphne, Terry Lugaila, Martin O’Connell, and Sarah Feliz. “Households and Families: 2010.” 2010 Census Briefs (April 2012).
Porterfield, E. “Black-American Intermarriage in the United States.” Marriage and Family Review, v.5 (1982).
Qian, Zhenchao and Daniel T. Lichter. “Changing Patterns of Interracial Marriage in a Multiracial Society.” Journal of Marriage and Family, v.73 (2011).
Wang, Wendy. “The Rise of Intermarriage: Rates, Characteristics Vary by Race and Gender.” http://www.pewsocialtrends.org/2012/02/16/the-rise-of-intermarriage/2 (Accessed August 2013).

Intersex Marriage

“Intersex” is an adjective and umbrella term to describe people whose bodies are not strictly “female” or “male” based on current medical norms. “Intersex” is the most widely preferred term internationally for such people, although some people prefer to use more specific language to describe their biological variations. Although some intersex biological variations are commonly associated with particular medical conditions, having an intersex body is not inherently pathological. Intersex people are often confused with transgender people, and some intersex people who have affirmed a gender other than the one in which they were raised may identify as “trans.” However, intersex is about physical characteristics and not gender. Being intersex physically does not automatically make someone more likely to identify as having a gender other than woman or man. Many intersex people identify simply as women or men. Intersex people, like any other people, may have any sexuality (including heterosexual) or none.

Many intersex people are medically “normalized” in childhood or adolescence without their own informed consent through hormonal or surgical interventions. These “normalizing” interventions are distinct from those done to ensure basic functions such as urination, as the rationale for normalizing interventions is primarily social rather than medical. Research has documented that these medically unnecessary interventions can cause permanent damage such as scarring, chronic incontinence, loss of sensation, and inability to orgasm. Normalizing interventions can also alter an intersex person’s legal sex for the purpose of marriage, and thus affect the person’s subsequent ability to achieve civil marriage recognition.

Lack of Federal Protection for Intersex Marriage
In the United States, the legal right of intersex people to marry has been complicated historically. At a federal level, the 1996 Defense of Marriage Act (DOMA) defined marriage as between “one man and one woman.” DOMA prohibited the recognition of “same-sex” married couples, which meant that couples who were considered to have the same legal sex were unable to receive any marital benefits at a federal level. This included a broad range of benefits and protections, including Social Security benefits for surviving spouses and immigration. Under DOMA, intersex people’s right to civil marriage was not protected. DOMA also permitted states to deny recognition to “same-sex” marriages that were legally contracted in other states, and Section 3 of the act barred all federal recognition of “same-sex” marriages. In a legislative system that defines legal sex based on physical characteristics rather than self-designated gender, intersex people cannot neatly be categorized as either female or male. This administrative binary has also been criticized by some legal scholars for promoting the continuation of medically unnecessary normalizing interventions on intersex people who have not given informed consent.
One consequence of DOMA was that intersex people who were in mixed gender (i.e., one woman, one man) relationships were not guaranteed legal recognition of their marriage. In mid-2013, the U.S. Supreme Court ruled in United States v. Windsor that Section 3 of the act was unconstitutional. However, the federal definition of marriage in the United States does not adequately address intersex people’s need for legislative equality. At a federal level, only female and male classifications are addressed in the context of marriage recognition. According to leading legal scholars, intersex people may have no federally protected marriage rights in the United States. These experts have cautioned that the lack of equal protection for intersex people will persist even if full federal protection for same-sex couples is secured, because of the legislative binary and the absence of federal legislation that specifically protects intersex people.

Intersex Marriage at the State Level
At a state level, some jurisdictions effectively deny civil marriage rights to intersex people in mixed gender relationships that courts consider same-sex relationships. For example, in the 1999 Littleton v. Prange wrongful death case brought by surviving spouse Christine Littleton, a Texas court invalidated a mixed gender marriage involving a woman who was assigned as male and who was genetically male. Although the court classified Littleton as transsexual and she appears not to be intersex, her case had implications for intersex people. The use of genetic sex markers to define sex for the purpose of civil marriage may lead to same-gender relationships being recognized as mixed sex (i.e., one biological female and one biological male). Justice Karen Angelini’s concurring opinion in Littleton v. Prange acknowledged the challenges that this definition posed for situations in which people’s chromosomal, gonadal, and genital tests differ. Justice Angelini also noted that some people’s biological sex could be “ambiguous,” even with medical testing. She declined to express a view regarding the legal status of intersex people for the purpose of marriage. The Texas case of Nikki Araguz illustrates some of the difficulties that intersex people can face with regard to civil marriage recognition.
Although Araguz has publicly discussed being intersex, she is often described in the media solely as transgender. In a HoustonPBS news channel interview in 2010, Araguz stated that she had an intersex variation known as partial androgen insensitivity syndrome (PAIS). Araguz described PAIS as a medical classification called “transgender syndrome.” However, PAIS is distinct from gender, and “transgender syndrome” is not a recognized medical classification.


Araguz married firefighter Thomas Trevino Araguz in 2003 in Texas. In 2010, Thomas died in the line of duty. Araguz was away on a business trip and was not notified of his death; she discovered it from a social networking post by another firefighter’s wife. Her in-laws denied her access to her stepchildren and filed two suits to deny her survivor’s benefits, both as Thomas’s spouse and as a named beneficiary. Thomas’s parents argued that the marriage was invalid because Araguz was not assigned as “female” at birth, and thus was male at the time of the marriage. In 2011, the state district court ruled in favor of the in-laws and nullified the marriage. Araguz’s survivor benefits were withheld, and the first of her appeals was denied later in 2011. Araguz lost a second appeal in 2012 but continued to appeal the decision as of September 2013. Araguz married her second husband, William Loyd, at the Nueces County Courthouse in Corpus Christi, Texas, after struggling to be recognized as a woman for the purpose of marriage.

Future Directions
Intersex marriage has not been addressed directly by most U.S. jurisdictions that use physical characteristics to determine marriage eligibility. Intersex activists have raised concerns that the lack of federal protection for intersex people, the persistence of “normalizing” medical interventions, and the binary physical requirement of legislative definitions of sex may continue to undermine intersex people’s right to civil marriage. Thus the term same-sex in reference to marriage can be misleading and has been critiqued by some intersex activists as exclusionary.
Some legal experts have predicted that, as public awareness about the existence and relationship needs of intersex people increases, legislative efforts for “marriage equality” will increasingly be tasked to consider and include intersex people.

Y. Gavriel Ansara
University of Surrey

See Also: Gay and Lesbian Marriage Laws; Same-Sex Marriage; Transgender Marriage.


Further Readings
Karkazis, Katrina. Fixing Sex: Intersex, Medical Authority, and Lived Experience. Durham, NC: Duke University Press, 2008.
Organisation Intersex International (OII) Australia. “Intersex Legislative Issues–A Brief Summary.” http://oii.org.au/21053/intersex-legislative-issues (Accessed September 2013).
Uslan, Samantha S. “What Parents Don’t Know: Informed Consent, Marriage, and Genital-Normalizing Surgery on Intersex Children.” Indiana Law Journal, v.85/1 (2010).

Interventions

The term social intervention is broad and multifaceted and has a cultural history dating back to the emergence of humankind. An intervention is any planned human act, implemented at any ecological level for any population segment, specifically designed to influence change. A multitude of factors can be associated with the deployment or administration of an intervention, including social ethics, research approach or design, the purpose of the change or the hoped-for outcome, contraindications of the change, and the shifting cultural milieu. Social interventions began as soon as the early colonies started organizing within the boundaries of what is now the United States. For example, early European models of the almshouse traveled with the colonists to America and were soon used as a resource to manage the very old, the poor, the mentally ill, and other vulnerable populations. Some of these almshouses were funded and managed by private pay sources as altruistic gestures toward struggling colonists who needed assistance and possessed little or no resources to manage their own care. Others were funded by allotments of city taxes once municipal structures began to form. And still others were created by acts or laws implemented at the federal level once the United States had been formed and a national income tax had been established. Each of these various styles of almshouse was managed in certain ways and met general or specific needs depending on the funding source, the population who managed them, the population they served, and the investment of the community surrounding the service(s) they provided. Applying the almshouse example and carrying it forward, however, provides a chance to see just how complex a social intervention can be. At the legislative level in the early 20th century, the U.S. Congress decided to close down almshouses because of their history of poor care and high cost to manage. It passed a law federalizing a system of providing money to the elderly so they could manage their own needs. This system dissolved when the goals of that intervention were not met with satisfaction, and a new system of funding long-term care came about when Medicare was passed into law in 1965. Since that time, interventions attached to all aspects of long-term care have been formulated and refined under greater scrutiny. A series of outcomes to ensure that those within the long-term care system are treated ethically was developed by invested stakeholders at the federal, state, professional organization, university, and professional provider levels. Ombudspersons have been embedded in various long-term care facilities to manage problems that arise between providers and recipients of care. More innovative treatments related to dementia, falls, daily living skills, general or skilled nursing care, and specialized medical care have been developed specifically for the aging and other vulnerable populations. And interventions based solely on daily activities such as eating, toileting, or social interactions are developed on a continual basis.
For example, the Orphan Train movement, operating in the United States from the 1820s to 1929, was an effort to relocate homeless and orphaned children from the crowded cities along the east coast to foster homes in rural areas across the west. Family assistance at the policy level took shape over time in the form of child nutrition programs, housing assistance, and employment assistance. At the community level, a variety of interventions have met family needs in hundreds, if not thousands, of ways, such as through nonprofit food pantries or facilities that provide



clothing, parenting classes for families involved in court-ordered treatment plans, and domestic violence shelters for battered individuals and their children, just to name a few. From organizations springing up to meet new demands for a period of time to interventions that remain similar to the way they began a century ago, “family intervention” is a key piece of U.S. family sociology. Thus, the development of the social intervention can be examined across national history, at the various levels of application, through the lens of various academic or professional disciplines, and through the humanistic or ethical lenses of culture, spirituality, ethics, and the continued teaching and learning to improve any of these domains. Professional Disciplines and Social Intervention Academically delivered and professionally or legislatively regulated, social disciplines across the spectrum of service delivery have emerged and become refined over the country’s national development. Much of this development reflects trends that emerged internationally, but the United States in particular is known worldwide for much of the rigor and excellence related to various training systems and social service professions in modern history. The development of these professions parallels the development of the affiliated interventions with which they are associated. For example, the medical profession, once a single provider or small operation service provider, has evolved from a single location and perhaps in-person or rural delivery of services to include today a multitudinous system of all levels of operations and all methods of care delivery. This individually oriented and single-operation model has evolved across disciplines as well. America has gone from single prison operations, single-source education systems, single-operator pharmacy, or support operators to include large corporate conglomerates and a variety of service deliveries that also include inpatient to home-based delivery. 
The prison system offers one example. Single town or city jails have evolved into a range of arrangements, from corrections service contracts that manage multistate federal prison systems down to nonprofit or community-subsidized minimum-security or halfway-house units embedded in communities or even neighborhoods.


Professional credentialing within these types of systems ranges from Council on Law Enforcement Education and Training (CLEET)–certified security guards or wardens to American Medical Association (AMA) board-certified physicians and American Psychological Association (APA) board-certified psychologists or other behavioral health providers. The interventions performed by each of these professionals are unique to their profession and are performed within their professional scope of practice to reach the goals of change set by their specific institutions or professional standards and practices. Refined over time, training and credentialing bodies are formed and continue to develop in response to needs at the community, state, or federal level, and sometimes in response to changing culture or events. For example, after the World Trade Center towers in New York City were destroyed by terrorism in 2001, new training protocols for emergency and first responders emerged, including communication protocols to be used should major power sources fail or network capacity be flooded. Also resulting from this event were new protocols for triage and emergency response related to terrorism within the Red Cross organization, national military systems, and medical communities. New certifications and credentialing emerged related to the resulting rise in post-traumatic stress disorder (PTSD); infectious disease management should any emergencies expose populations to chemical weapons; airline security credentialing, along with newly formed search- and safety-related jobs required by the Federal Aviation Administration (FAA); and evacuation or student management training for professionals within the education systems who have oversight of students during a potential emergency.
Professionals who provide interventions such as case management or early intervention screening are ethically bound by continuing education standards that they must meet on an annual, biannual, or other periodic basis. These continuing education credits help professionals stay up to date on the latest research and applied instruction covering new interventions, new ways of delivering established interventions, and revised standards for well-documented or well-researched interventions within their respective fields. Continuing education standards for professionals are set by their own state or national organizations and licensing boards and are balanced by feedback from the public and the profession, strengthening the credentialing and requirements needed to perform the interventions that the communities they serve require.

The professions seen in society today are not those that performed social interventions at the onset of the United States, and they will not be the same several years down the road. Even today, many states have before them legislative requests to implement laws that establish or recognize new professions, add various professionals to old laws to provide funding for the interventions they perform, and re-create or remove older professions that evidence shows to be candidates for obsolescence.

The “Intervention to Prevention” Spectrum

The verb form of “intervention” can be analyzed by dividing the actions performed by social scientists, professionals, and systems into two categories: intervention and prevention. Prevention-based services have been shown to have markedly effective outcomes but are generally reduced when funding systems are strained. Intervention-based services are generally maintained at the level deemed to meet the greatest need first, even when funding systems are strained, but are triaged hierarchically when resources are scarce. For example, a preventive service might be the provision of education for families and caregivers of aging populations related to the in-home provision of volunteer services. This education could cover how to lift an elderly person from bed to a chair, how to prepare certain foods for easier swallowing, and what respite services are available in the area should caregivers need a physical or mental break.
These prevention services could be offered as part of a spectrum of services provided by state departments of human services, funded by grants from a variety of sources, or offered by the medical community seeking to address problems identified in family practice or gerontologically focused medical clinics. A more midrange intervention could be the daily delivery of mobile meals to aging populations, a service that helps keep recipients from having to begin long-term care; research shows that meals and caloric intake are among the main reasons that decisions are made to begin inpatient or long-term care. A more intensive intervention could be the provision of what is currently known as memory care: inpatient care for persons with advanced dementia. Memory care trades some of the freedom of the person with dementia for a higher level of safety and relies on professionals trained in managing outbursts, pain reports related to dementia, and troubled family members who present as frequently distressed.

As funding or economic factors influence this segment of the population, however, the preventive services could be cut. The burden of educating families and other caregivers would then fall on family physicians, home-care nurses, or professionals affiliated with long-term care facilities. As preventive services are streamlined or eliminated, the balance of supply and demand shifts the burden toward the intervention services and professionals that remain active.

The current implementation of the Patient Protection and Affordable Care Act (ACA) is a social experiment in prioritizing preventive care versus intervention services. Within the ACA, annual checkups for children and adults are integrated into various insurance plans; preventive screenings are being debated as to their cost-effectiveness; and preventive care, such as outpatient behavioral health services weighed against the inpatient behavioral health hospitalizations those services prevent, is all considered within this large and groundbreaking social experiment. If history is a guide, preventive services will be the first cuts made should funding for the ACA become problematic.

Finally, social systems themselves range from the public education system on the prevention side to high-security prison systems on the intervention side. Societies weigh prevention against intervention needs and apply resources according to the value and meaning they assign to these types of interventions. As the meanings and values of the nation change, the demand for prevention and the requirements for intervention will follow suit.

The Role of Research in Interventions

The social sciences, known in academia in earlier times as the soft sciences, are becoming increasingly rigorous and standardized through research as it applies to interventions. Psychotherapeutic models supported by outcome-based research are considered to be of higher quality than those that have not been tested. Classic experimentally controlled research is becoming more pervasive as it relates to interventions, and federal funding sources increasingly seek designs that are at least quasi-experimental. As social interventions are refined, the research supporting the validity of methods and the reliability of tests or interventions gives funders and providers more confidence that statistically significant portions of a population will benefit from a specific treatment.

Examples of interventions that have developed over time as a result of feedback from research outcomes include the Sooner Start and Head Start programs, marriage and relationship education programs, fatherhood involvement programs, substance abuse treatments that include family components, adolescent independent living programs developed for emancipated youth or young adults, speech therapies, and clinical interventions such as family therapy and applied behavior analysis. Each of these programs or interventions has developed over time based on ongoing research results, and contemporary families benefit greatly from the scientific processes involved.

Less tested social interventions still exist, and anecdotal or informal survey results suggest that many of these treatments can be as effective as, or possibly more effective than, other methods. Examples include many of the interventions provided by nonprofit or faith-based organizations, which meet a need and fit well with the belief systems of certain funders regardless of research or evidentiary support. However, within the social and political structure of public funding, research is now generally needed to report on all grant activity, to support new methods or pilot ideas for interventions addressing social problems, and to compare, across the field of all similar interventions, which serve the public best for the greatest humanitarian and monetary value.

Kelly M. Roberts
Oklahoma State University

See Also: Almshouses; Assisted Living; Family Medicine; Nursing Homes; Orphan Trains.


Further Readings
Haskins, R. and I. Sawhill. Creating an Opportunity Society. Washington, DC: Brookings Institution Press, 2009.
Huey, P. “The Almshouse in Dutch and English Colonial North America and Its Precedent in the Old World: Historical and Archaeological Evidence.” International Journal of Historical Archaeology, v.5/2 (2001).
Sussman, M., S. Steinmetz, and G. Peterson, eds. Handbook of Marriage and the Family. New York: Plenum Press, 1999.

Irish Immigrant Families

People of Irish descent comprise the second-largest ancestry group in America; seven times more people of Irish descent lived in the United States than in Ireland in 2010. The Irish immigrated beginning in the colonial period, along with other European groups such as Germans, Poles, and Italians. Unlike those three groups, the Irish had the advantage of speaking English and understanding many aspects of American culture when they arrived.

Between 1717 and 1771, large numbers of immigrants came to the United States from Ulster in Northern Ireland. They were predominantly Protestants who had earlier immigrated to Ireland from Britain, and in America they became known as the Scotch-Irish. Many of them settled in the Appalachian Mountains, where they engaged in subsistence farming. In 1789, there were only around 30,000 Catholics living in the United States. The potato famine that hit Ireland in the 1840s and 1850s sent hordes of poor, uneducated, and unskilled Irish immigrants across the ocean. Most Irish immigrants from this latter group were Catholic, as were almost all Irish immigrants who came to the United States after 1930. As a result, Irish Americans became the single most significant ethnic influence on American Catholicism, bringing them into frequent conflict with other Catholic immigrants such as Polish Americans.

Unlike most other immigrant groups, the Irish often viewed life in the United States as enforced exile. Some of that feeling arose from the tendency of the Irish to blame all of their problems on the British; America was seen as an extension of Britain because of its pro-British sentiment.

The 2010 census did not detail ancestry, but according to data from the American Community Survey, there are 34.7 million Irish Americans living in the United States. Most Irish Americans live in the South (32 percent) or the Northeast (25 percent); others live in the Midwest (18 percent) or West (12 percent). There are also some 3,470,000 Scotch-Irish Americans who live in the United States. More than half of all Scotch-Irish live in the South, with smaller concentrations in the West (20 percent), Midwest (17 percent), and Northeast (12 percent). The Irish have influenced American culture through foods such as Irish stew, corned beef and cabbage, and Irish soda bread, and on each St. Patrick's Day, March 17, American families of all ancestries wear green and attend parades and festivities.

Becoming Americans

Between 1820 and the 1920s, some 4.5 million Irish immigrants arrived in the United States. They worked on the railroads that connected the East Coast with the West Coast, and they helped to build the Brooklyn Bridge, the Erie Canal, and various city subway systems. Irish males were often called “Paddy,” sometimes with affection but more often with ridicule, and a supposed Irish love of drink defined the Irish for many Americans. Irish women usually entered domestic work, and some served as housekeepers. Later, as they settled into American life, some Irish males became police officers and firefighters while others entered the fields of politics and law; females became nuns, nurses, and teachers.

Because they were Protestant, the Scotch-Irish who settled in Appalachia and other southern areas were able to blend well into the community. Most early Irish Catholic immigrants settled in New York, Boston, Philadelphia, Chicago, and San Francisco. By the 1850s, Irish Americans made up half of all skilled workers in Boston. Most Irish immigrants regularly sent money back to family members still living in Ireland.
In large cities, Irish Americans clustered together, forming what became known as Irish ghettos. Irish American families often lived near African American families. Because so many Irish immigrants had arrived in the United States within a relatively short period of time, relations with their neighbors were sometimes tense; African Americans believed that the Irish had taken jobs that would otherwise have gone to them. The Irish responded to the conflict by identifying themselves as part of the white majority and African Americans as “the other.”

Although the Irish faced a good deal of prejudice from other ethnic groups and were often stereotyped by mainstream Americans, most were dedicated to their families and viewed themselves as responsible, friendly, dependable, and diplomatic, with a tendency to avoid open hostility whenever possible. Within the home, the mother was seen as the center of the family, and she served as the moral conscience for family members. Males often placed Irish women on pedestals. Mothers were usually the ones who made decisions about the children. Instead of doting on them, Irish mothers tried to train their children to be strong and independent; thus, Irish children sometimes suffered from a lack of open affection.

The Catholic priest was also an important figure in the lives of most Irish American families. He was the one mothers turned to when fathers failed to care for their families because they were spending too much time at local saloons, and he was the one who stepped in whenever drunken fathers became abusive. If a family could not eat, the priest helped to feed them, and he sometimes paid the rent of families who would otherwise have been evicted. The link shared among parish priests, nuns, and Irish Catholic families was heightened by the establishment of parochial schools. Social occasions for Irish families were often linked to the church and to extended family groups. The wake was an important event in Irish American families because it allowed them to celebrate the joy of life while bringing acceptance of loss.

Between 1860 and 1890, 2 million new Irish immigrants joined those already living in the United States. Irish immigrants who moved to the port cities of the South found themselves both economically and physically vulnerable.
Most of them had come to America to escape the caste oppression of their homeland, yet they found themselves taking on jobs that were considered risky. Despite the risks, the Irish were generally welcomed in the South because they added to the white population, and they generally accepted the racial prejudice that was an integral part of daily life there. Outside the South, Irish Americans tended to live in large cities where they were crowded in with other poor immigrants, exposing their families to high crime rates from Irish and other ethnic street gangs and to the rampant disease that was a by-product of overcrowding, inadequate sanitation, and poorly ventilated homes.

Members of the Hurley School of Irish Dance, based in Maryland, participate in the 42nd annual St. Patrick’s Day parade in Washington, D.C., in 2013. The most significant influence of the Irish in the United States is the annual celebration of St. Patrick’s Day on March 17, with the first St. Patrick’s Day parade held in New York City in 1762.

Some 20,000 Irish Americans fought for the Confederacy, but as a group, Irish Americans were more likely to support the Union. In all, about 140,000 Irish American males fought for the Union. Some joined Union forces for the financial safety of a regular wage along with room and board; others became paid substitutes for wealthy Northerners who were unwilling to fight. On July 13, 1863, in what became known as the New York Draft Riots, Irish Americans revolted against President Abraham Lincoln’s enforced enlistment of Irish American males. The violence was mostly directed at African Americans. Before it was over, more than a thousand people had been killed, and property damage had climbed to more than $1 million.

Early- to Mid-Twentieth-Century Families

Between 1851 and 1921, 4.5 million Irish immigrants arrived in the United States. Twenty-seven percent of those were females between the ages of 15 and 24. Community leaders often discouraged females from marrying because they could be more productive working as domestics, receiving room and board in addition to their wages. Before the northern migration of southern blacks, most female domestic servants in New York, Boston, and Philadelphia were these young Irish females, who developed a reputation for being virtually untrainable. By 1855, they comprised almost three-fourths of the domestic servants employed in New York City, and by 1900, more than half of Irish women arriving in the United States entered the field of domestic service. In the autumn 1871 issue of Harper’s Bazaar, the magazine attempted to explain that “Biddy,” the name by which Irish American domestics had become known to their largely Protestant employers, was innately incapable, crude, and prone to breaking things and disobeying instructions; employers were encouraged to understand such faults without being overly sympathetic.

Irish immigration to the United States declined after 1920, but large numbers of Irish American Catholic families were living in America when the stock market crashed in late 1929. The Great Depression hit Irish Americans particularly hard because they had wielded little economic power before it began, partly because of their ethnicity but also because of their religion. Feelings of hopelessness and frustration were common among Irish American families, and many were forced to go on relief after male wage earners lost their jobs. The more fortunate members of the community held government jobs, working for city, county, or state governments. In urban areas, most Irish American families lived in overcrowded cold-water flats. Because there was usually only one bathroom for an entire floor of families, baths were taken once a week in the kitchen.

Irish Americans had long been politically active in the United States, and Irish political machines were legendary. In colonial America, Charles Carroll represented Maryland at the Second Continental Congress and signed the Declaration of Independence in 1776. Following the American Revolution, Thomas Fitzsimmons of Philadelphia attended the Constitutional Convention and signed the U.S. Constitution in 1787.
However, when Irish Catholic Al Smith ran for president of the United States in 1928, he was soundly defeated by Herbert Hoover, his Republican opponent, whose supporters had fanned the flames of anti-Catholicism by assuring the American people that the Pope would control American politics from Rome if Smith were elected. In 1960, John Fitzgerald Kennedy, a Democrat from Boston, where the Irish were a significant political power, became the first Irish American Catholic to be elected president of the United States. Many Irish Americans felt vindicated by his election. When Republicans attempted to use anti-Catholicism against Kennedy, he faced it head on. His candidacy also benefited from the fact that anti-Catholicism had declined greatly.

Late Twentieth and Early Twenty-First Centuries

In the 1980s, a new wave of an estimated 100,000 to 150,000 Irish immigrants sought new homes in the United States. They left a homeland where unemployment rates were skyrocketing, and many came to America illegally. In general, they headed to cities that already boasted large Irish American populations, such as New York, Boston, Chicago, and San Francisco. These new immigrants tended to be better educated than earlier immigrants, and their families tended to enjoy a higher standard of living.

In 1980, 44 million Americans reported that they were of Irish descent, including one-third of the population of Massachusetts, 23.5 percent of the population of New Hampshire, and 22 percent of the populations of Pennsylvania, Kentucky, West Virginia, and Tennessee. By the 1990s, the economy of Ireland had improved, and the number of Americans reporting Irish ancestry had declined to 38,735,539, or 15.6 percent of the total population; another 5,617,773, or 2.3 percent, reported Scotch-Irish ancestry. When the 2000 census was taken, 30,524,799 Americans identified themselves as Irish Americans, comprising almost 11 percent of the total population, and some 4,319,232 (1.5 percent) reported that they were of Scotch-Irish ancestry.

There were 34.7 million Irish Americans living in the United States in 2010, of whom some 144,588 were naturalized citizens. The median age of 39.2 years was lower than that of previous generations because of the influx of young Irish immigrants in the late 20th century. Census data from 2010 reveal that a third of all Irish Americans had at least one college degree, and 92 percent had completed high school. The median wage of $52,290 was slightly higher than that of the general population ($52,029). Some 40 percent were engaged in managerial and professional occupations, 26 percent worked in sales and office jobs, and 15.7 percent were involved in the service sector. More than 70 percent owned their homes.

St. Patrick’s Day

The most significant influence of the Irish in the United States is the annual celebration of St. Patrick’s Day on March 17, the anniversary of the death of the patron saint of Ireland. St. Patrick was taken to Ireland and enslaved; after escaping slavery, he served as a Catholic missionary. St. Patrick’s Day is a day of feasting and celebration for Irish Catholics, and Americans have adopted the practice of wearing green on that date. Schoolchildren have long delighted in pinching anyone who forgets to wear green. The first St. Patrick’s Day parade was held in New York City in 1762, and in the 21st century, parades are held throughout the country and around the world.

Elizabeth Rholetter Purdy
Independent Scholar

See Also: Catholicism; German American Families; Italian American Families; Polish American Families; Segregation.

Further Readings
Benson, James K. Irish and German Families and the Economic Development of Midwestern Cities, 1860–1895. New York: Garland, 1990.
Coffey, Michael. The Irish in America. New York: Hyperion, 1997.
Connelly, Bridget. Forgetting Ireland. St. Paul, MN: Borealis Books, 2003.
Giemza, Bryan. “Turned Inside Out.” Southern Cultures, v.18/1 (Spring 2012).
Gleeson, David T. The Irish in the South, 1815–1877. Chapel Hill: University of North Carolina Press, 2001.
Godfrey, A. W. “The Way We Really Were.” Commonweal, v.122/4 (February 24, 1995).
Kenny, Kevin. The American Irish: A History. New York: Pearson Education, 2000.
Kenny, Kevin. “Twenty Years of Irish Historiography.” Journal of American Ethnic History, v.28/4 (Summer 2009).
Miller, Kerby. Emigrants and Exiles: Ireland and the Irish Exodus to North America. New York: Oxford University Press, 1985.
Mulrooney, Margaret M. Fleeing the Famine: North America and Irish Refugees, 1845–1851. Westport, CT: Praeger, 2003.
Unsworth, Tim. “The Irish.” National Catholic Reporter, v.32/20 (March 15, 1996).
Urban, Andrew. “Irish Domestic Servants, ‘Biddy’ and Rebellion in the American Home, 1850–1900.” Gender and History, v.21/2 (August 2009).
Valone, David A. and Christine Kinealy, eds. Ireland’s Great Hunger: Silence, Memory, and Commemoration. Lanham, MD: University Press of America, 2002.

Islam

Islam, meaning peace or submission, is a major world religion founded by the Prophet Muhammad in the 7th century c.e. Muslims (literally “ones who submit”) number more than 1 billion, approximately one-fifth of the world’s population. Because of the perceived negative portrayal of Islam in the media, particularly in the United States and other Western countries, Muslim organizations have begun to stress that traditional and moderate Islam—which is observed by a large majority of the global Muslim population—is family-centered and nonviolent.

Muslims have contributed to world civilization in various fields and disciplines, including astronomy, calligraphy, chemistry, mathematics, medicine, and physics. Islam remains relevant for scholars in multiple fields for several reasons: (1) Islam contains a religious legal code and theology that not only consider the relationship between the individual and Allah (God) but also address the importance of the relationship between humankind and its environment (e.g., economic, ecological, political, and social); (2) Muslims, in addition to contributing to world civilization in the arts and sciences, have a unique and rich history involving miraculous origins, faith in a divine being and the message of a sacred text, and a conquest and geographic expansion perhaps larger than that of the ancient Greeks and Romans; and (3) Islam continues to generate headlines around the world because of the actions of a small number of militant, radical Muslim groups who perpetrate acts of terror.

Foundations and Spread of Islam

The prophetic and revelatory career of Muhammad (b. 570 c.e.) and the development of the Quran as divinely inspired scripture led to the founding of Islam as a major world religion in a relatively short period. Muhammad was born to the Banū Hāshim tribe in Mecca, which was not only a commercial center for caravan traders who transported goods along Arabia’s west coast from southern Arabia and East Africa to Syria but also the home of the Kaaba sanctuary, which housed approximately 360 idols.

Beginning at age 40 (610 c.e.) and until his death in 632 c.e., Muhammad reported that he received a series of revelations from the archangel Gabriel. Following Muhammad’s first revelatory experience on Mt. Hira, his wife (Khadija) and her Christian cousin (Waraqa ibn Naufal) both accepted the revelation, and the latter compared the experience to that of Moses on Mt. Sinai. Word concerning the revelations quickly spread, and Muhammad gained many followers. As Muhammad’s popularity grew, so did hostility toward him and his followers. Muhammad eventually fled Mecca with his fledgling faith community in 622 c.e. and migrated approximately 200 miles to the town of Yathrib, later known as Medina. After the hijra (emigration), Muhammad established the umma (Muslim community) based on the laws and guidance of the revelations (i.e., the Quran). Muslims thereafter marked 622 c.e. as the first year of the Islamic calendar. In 630 c.e., Muhammad and a large number of followers returned to Mecca, cleared the idols out of the Kaaba sanctuary, and dedicated the sacred space to the worship of one deity, Allah.

Following Muhammad’s death in 632 c.e., the umma split into two major factions. One segment of Muhammad’s followers maintained that the Prophet had prepared Ali bin Abi Taleb, his son-in-law, to lead the umma after his death. This group came to be known as Shi’at Ali, or the Party of Ali (commonly referred to as Shi’ites). The other segment of Muhammad’s followers rejected the claim that Muhammad had handpicked Ali as his successor, holding instead that the umma was sufficiently educated to choose its new leader after Muhammad’s death. This group selected Muhammad’s father-in-law, Abu Bakr, as the new leader and referred to themselves as Ahl al-Sunnah wa’l-Jamā’a, or People of the Sunnah and the Community; they are commonly referred to as Sunnis (denoting a clear path or practice). Today, approximately 80 percent of the world Muslim population follows Sunni Islam.

Muhammad’s message spread rapidly throughout the region in the 7th century, and by the 10th century, Islam had spread from Morocco to the southern tip of Africa, and from Spain and Portugal in the west to China and India in the east.

Islam in America

It is unclear when Muslims first came to the United States. Some historians claim that Muslims traveled to North America with Spanish explorers after their expulsion from Spain in the late 15th century. It is more likely that the first Muslims in America were Africans brought to the United States as slaves; researchers have estimated that as many as 15 percent of African slaves brought to the United States were Muslim. In the 19th and early 20th centuries, most Muslim immigrants to the United States originated from the Middle East and North Africa. By 1952, more than 1,000 mosques had been built in North America.

The current Muslim population in the United States is intriguing, as demographers estimate that nearly half of the Muslim population in the United States consists of African Americans who have converted to Islam. The highest concentration of African American Muslims is in Illinois. Also intriguing is the fact that the majority of Arab Americans in the United States are not Muslim but Christian. Today, the estimated American Muslim population ranges between 4 and 6 million.

The Muslim American Family

Although research on American family life is plentiful, research on the Muslim American family is scant at best. Part of the reason is that only in the last 15 years or so has the field of religion and family received significant attention, and a majority of that research has addressed issues relevant to religious families in general or to the dominant religion (Christianity) specifically. The field of religion and family vis-à-vis the American Muslim community will hopefully soon provide useful information that policymakers, social workers, therapists, and others can rely on to assist one of America’s fastest-growing populations.
Like any religious philosophy, Islam has produced a variety of lifestyles on the spectrum of religious observance. Islamic law, however, tends to be more “traditional” concerning family life. Laws regarding, for example, gender roles, marriage and divorce, and parenting are codified in the Quran and hadith (a body of traditions concerning the Prophet Muhammad’s life and revelations).



The Role of Women

Perhaps the most controversial issue in Islam today, other than terrorism, is the role of women in society. Many Western commentators and feminists discuss issues pertaining to women in Islam at length on college campuses and in the media. Traditional Islam teaches that men and women must fulfill particular societal and familial roles to perpetuate a moral and productive society. Confusion of gender roles, it is posited, will corrupt society and lead to the breakdown of the family. The specific Arabic word describing this societal degeneration is fitna, which refers to disorder, mischief-making, rebellion, and temptation. The Prophet Muhammad purportedly taught that the most threatening fitna to men (i.e., to men’s spiritual progression) is women. This teaching has perhaps contributed to distortions of certain laws and exaggerations of various customs among some uneducated Muslims who have promoted (or tolerated) female slavery, female genital mutilation, and “honor killings.”

Some Muslim feminists have argued, in response to traditionalists as well as to the less educated segments of Islam who perpetuate the “extreme” practices previously mentioned, that rigid dogmas of female submission to men, the full-body covering and niqab (veil), “honor killings,” and genital mutilation must not be attributed to the fundamental teachings of Islam, and that these practices do not originate with Muhammad or the Quran. Rather, these traditions developed and were perpetuated as a result of a variety of complex economic, political, religious, and social conditions and norms.
Muslim feminists and others have also argued that the Prophet Muhammad was a radical dissenter of his contemporary Arabian tribal customs regarding women, as he taught, among other things, that women must be afforded economic protection and property rights, that marriage contracts are necessary to protect women’s child-custody and divorce rights, and that polygyny must be limited and practiced under stringent guidelines. In other words, the laws concerning family life and gender roles in the Quran and hadith must be reconsidered, and the strict and arguably unethical preferences of some Muslim cultures and tribes must not be confused with the fundamental principles of Islam. Regardless of which argument is more credible, these issues will be debated with a great amount of zeal for years to come. Regarding the American
Muslim community, it appears that a more moderate and sensitive approach to family life and women’s rights is in place, particularly among the majority of American Muslims (including African American Muslims and converts to Islam) who were born or raised in the United States.

Marriage, Divorce, and Fertility Rate

Marriage in Islam is of utmost importance. According to Islamic law (Sharia law), Muslims are required to marry. In fact, one hadith posits that the prayer of a married man is equal in the eyes of God to 70 prayers of a single man. In most (if not all) Muslim communities, a marriage is bound by a contract, which posits various stipulations depending on the community. These stipulations usually include the legalization of sexual relations between the husband and wife; the entitlement of the wife to adequate housing, food, clothing, and dowry; and the obligation of the man to financially provide for the family, thus encouraging the wife to remain in the home to raise children. Islamic law also permits a man to marry up to four women, but only if he is capable of treating them equally. Women, on the other hand, are permitted to marry only one man at a time. A man is also permitted, according to most Islamic legal scholars, to marry a non-Muslim woman of another monotheistic faith (usually Jewish or Christian). However, Islamic law prohibits a woman from marrying a non-Muslim. According to the Quran and hadith, divorce, although permissible, is detested by God. A man may divorce his wife with no justification before a court, but he must honor the marriage contract and grant his wife the necessary rights. Islamic law does not allow a woman to divorce her husband; however, she may appeal to a judge for a divorce that may or may not be granted to her. Given these legal constraints, low Muslim divorce rates are to be expected.
The Muslim divorce rate in the United States hovers around 30 percent, which is significantly lower than that of the general population (50 percent) but generally higher than divorce rates in predominantly Muslim countries. In addition to having relatively low divorce rates, the Muslim community in most countries has been known for having large families. The past decade, however, has seen a dramatic shift in the fertility rate among Muslims worldwide. The total
fertility rate of the 49 Muslim-majority countries dropped from 4.3 in the early 1990s to 2.9 by 2010. In Iran, for instance, the fertility rate dropped by more than 70 percent between 1975 and 2005. Researchers estimate that by 2030, the total Muslim fertility rate worldwide will drop to 2.3 children per woman. The fertility rate among Muslims in the United States has also declined rapidly. Researchers have suggested that this shift reflects the attitude and agency of couples, rather than, for example, access to birth control. Muslim American family life proves to be an exciting and beneficial field of research for clinicians and scholars in several disciplines, but particularly family scholars and other social scientists who seek to understand the complex phenomena relevant to this religious community.

Zahra Alghafli
Trevan Hatch
Loren Marks
Louisiana State University

See Also: Catholicism; Christianity; Judaism and Orthodox Judaism; Middle East Immigrant Families; Sharia Law.

Further Readings
Lewis, B. and B. E. Churchill. Islam: The Religion and the People. Upper Saddle River, NJ: Wharton School Publishing, 2009.
Maududi, A. A. Towards Understanding Islam. Leicestershire, UK: Kube Publishing, 1994.
Peters, F. E. Islam: A Guide for Jews and Christians. Princeton, NJ: Princeton University Press, 2003.
Sherif-Trask, B. “The Muslim American Family.” In Ethnic Families in America: Patterns and Variations, 5th ed., R. Wright, C. H. Mindel, T. V. Tran, and R. W. Habenstein, eds. Boston: Pearson, 2012.

It Takes a Village Proverb

The concept of the “caring village” needed to raise a child has arisen from a multitude of factors. Over the past several years, there has been a dramatic shift in
family structures, especially the increase in single-parent households, leaving many families without the supports on which earlier generations counted. Additionally, multiple economic factors have dictated what external supports a family requires, as well as what supports society is willing to provide. As child and family advocates and researchers better define the effects that the presence or absence of particular familial and societal supports has on children, families are better able to strategize how to prioritize and meet their conflicting needs for connection and protection. The specific phrase “it takes a village to raise a child” is typically—but probably erroneously—considered to have originated as an African proverb. The general concept, however, has been reflected in many African cultures by a variety of identified proverbs. Informally, the phrase is often used to describe the need or desire to have many people, especially family members and friends, involved in the care of children, rather than to reference the role of government. Individuals’ levels of geographic mobility have increased, and it is no longer unusual for families to be spread across the country, if not the world. Advances in transportation and communication have greatly contributed to this shift, as has increasing motivation to move where educational or professional opportunities may be found. Despite the relative ease with which families can visit or communicate with one another, many individuals do not recognize the limitations that distance imposes until they begin to have children. Safe, reliable, and affordable child care, which historically was often provided by extended family members, is particularly lacking in families in which all adults work outside the home. Additionally, significant time with extended families has been shown to have multiple benefits for a child’s sense of well-being and a parent’s level of child-related work.
Although many families face significant frustration, if not hardship, as they raise their children in varying degrees of isolation, there are other aspects of current culture that prevent a move toward the caring village model. A growth in awareness of the risks potentially posed by those who were more likely to be trusted by earlier generations (e.g., community leaders, clergy members, governmental officials) has led modern-day parents to be less willing to allow their children to be connected to
nonrelated adults. Relatedly, fears of liability, litigation, or false allegations can inhibit adults from intervening or offering the assistance, redirection, or support that might have been offered by an earlier generation. Raising children as part of a more communal effort also runs counter to another aspect of American culture: family privacy and independence.

Hillary Rodham Clinton’s Book

The phrase rose to American awareness in 1996, when then First Lady of the United States Hillary Rodham Clinton wrote a book titled It Takes a Village: And Other Lessons Children Teach Us. Although the book was largely focused on the intended and unintended consequences of governmental decisions and policies, the phrase soon became shorthand for the larger concept of the range of influences and supports needed for a society to produce healthy and productive children. The book became a New York Times best seller and a controversial part of the national conversation. It Takes a Village was part reflection on Clinton’s own child-raising experiences and part description of the responsibilities she believed society and government should be obligated to meet on behalf of children. She also described the relationship between positive or negative childhood outcomes and various legislative issues, such as tax reform and Congress’s role in setting the minimum wage. Not surprisingly, the book quickly became a lightning rod for political debate. During his 1996 presidential campaign against Bill Clinton, Senator Bob Dole publicly asserted his belief that it takes a family, rather than a village, to raise a child. The “family versus village” debate has continued to be used to illustrate the differences in ideology between the Republican and Democratic parties with regard to the role that government should play in many issues related to children and families.
The book also sparked criticism from several Christian leaders who believed a greater reliance on the federal government would further erode the authority of the church in the lives of American families. These concerns fueled debates about feminism and the influence that the changing roles of women have had on the family unit and, ultimately, the health and happiness of the children. More than 15 years after its introduction into the American lexicon, the use of the phrase
continues to evolve to reflect the country’s focus. In 2011, the first wave of the baby boomer generation turned 65. Baby boomers currently comprise approximately 25 percent of the American population, and when the last wave of the generation turns 65 in 2030, they are projected to comprise approximately 20 percent of the total population. In addition to the sheer number of individuals in this age range, baby boomers are also living longer than any previous generation, thus lengthening the oldest stage of adulthood. As baby boomers age and their needs for medical, logistical, housing, and emotional support increase, it is becoming more evident that the “caring village” concept now applies to the care of the elderly as well as to children.

Diana C. Direiter
Lesley University

See Also: Child-Rearing Practices; Collectivism; Extended Families; Family Values; Individualism.

Further Readings
Clinton, Hillary Rodham. It Takes a Village: And Other Lessons Children Teach Us. New York: Simon & Schuster, 1996.
Fields, Jason. “Children’s Living Arrangements and Characteristics: March 2002.” Current Population Reports. U.S. Census Bureau, P20-547 (2003).
Gonzalez-Mena, Janet. Child, Family, and Community: Family-Centered Early Care and Education. 6th ed. London: Pearson, 2012.

Italian Immigrant Families

At Ellis Island there is an anonymous quote attributed to an Italian immigrant from the 1900s that reads: “I came to America because I heard the streets were paved with gold. When I got here, I found out three things: first, the streets weren’t paved with gold; second, they weren’t paved at all; and third, I was expected to pave them.” Italians immigrated to the United States in unprecedented numbers in the late 19th and early
20th centuries, seeking work and a way to surmount devastating poverty. They came both individually and in family units. The same period saw migrations from Italy to other destinations, especially Australia, Canada, and countries in South America. Essentially, there were three significant migratory flows. During the Risorgimento—the movement for Italian unification and independence—a number of wars created a flow of predominantly northern Italian refugees to the United States. It is estimated that by 1870 there were already 25,000 Italians living in the United States. The peak of migration, however, came between 1900 and 1915, when some 3 million Italians immigrated to the United States, becoming the largest nationality of new immigrants. During this period the vast majority came from the southern Italian regions of Abruzzo, Calabria, Campania, Molise, and Sicily. These immigrants were not fleeing war or religious persecution but rather abject poverty. Their driving force was the need to find work. As Hasia Diner has noted, for newcomers, hard work was nothing new: “The difference was that arduous labor before migration had gotten them little food, while in the United States equally hard work in factories, mines, mills, railroads, and farms would be rewarded with tables sagging with food unimaginable to them back home.” Another important wave of immigrants from Italy arrived after World War II. At this time the U.S. Congress also began to liberalize immigration policy. According to David Reimers, immigrants coming after 1945 were more likely to be refugees, were more skilled than those in the previous flow, and included a greater proportion of women than in the past. In the postwar period, Italian communities, through their contributions to American culture, steadily became part of the multilayered fabric of society—the mainstream. They were affected by their new home, and in turn they profoundly affected it.
Today, Americans of Italian ancestry are among the top 10 largest so-called ethnic groups. In the 21st century, descendants of Italian immigrants to the United States do not refer to themselves as a diaspora. Rather, through integration (facilitated by intermarriage and social mobility), Italian Americans have come to define what it means to be American while retaining important symbolic markers of cultural identity.

Economy

Most Italian immigrants from the second wave came over as contadini (peasant farmers), a larger proportion than among their Irish and German counterparts. Once in the United States, however, they moved to the cities and shunned farm jobs, often opting for construction work. A minority, who came from Piedmont’s and Tuscany’s textile factories and from Umbria’s and Sicily’s mines, were industrial laborers who found work in factories. Italian workers were crucial in the construction of roads and bridges, the digging of tunnels, and the building of everything from railroad tracks to skyscrapers. Most work was contracted out by labor brokers known as padroni. According to some estimates, by 1890, 90 percent of New York City’s public works employees and 99 percent of Chicago’s street workers were Italian. Italian immigrant women earned wages in factories but also in jobs they could carry out in the home as pieceworkers, allowing them to retain a central role in the household structure. A portion of immigrants never intended to stay permanently in the United States but rather planned to return home after a period of time. Others would go back and forth on a seasonal basis. In all cases, the driver was economic enrichment of the family. This could take the form of a home with plentiful food and furnishings in the new country, or money sent back to the old country to support relatives left behind. Many were prevented from returning to Italy by the outbreak of World War I.
Traditionally, and not unique to Italy, labor migration began with a male setting off alone in search of fortune to achieve successful settlement in the new world: the arrival of his wife and family. Alternatively, and more frequently, the family tended to move as a unit. L. Baldassar and D. Gabaccia argue that only through the nurturing provided by relocated women were immigrant families
able to integrate and put down roots in new lands and thereby create new homes and communities with emotional links to receiving nations. Immigrants occupy spaces in both the public sphere and the private one—the household. In Italian immigrant societies, the public and the private were connected through the family unit (the household), which was also the conduit for public activity. This underscores the critical role women played in creating the conditions for social life, integration, and, eventually, mobility. The church (Roman Catholic) was the hub that generated social clubs and mutual aid societies—social places where celebrations, or feste, occurred. On arrival, Italian immigrants established a plethora of mutual aid societies based on family ties and place of birth. These were formed by immigrants from towns all over southern Italy on the basis of their common heritage, and were named after the town, its patron saint, or both. Women’s societies were often auxiliaries or separate societies, and in many cases were organized to honor the Blessed Virgin Mary. From the early days of immigration through the postwar period, these mutual aid societies held feste in observance of various religious feast days. Some still hold annual feast days. The church also influenced the political life of Italian communities, which tended to be more conservative than other immigrant communities because of their ties with the Roman Catholic Church and its stance on social and political issues. This is another example of how activity in the private sphere dictated behavior in the public sphere.

Culture

For Italians, both on the peninsula and in the new world, “family” as the unit of everyday life could not be separated from food culture. Culture has been described as a system of symbols, and for the Italian American family the overarching symbol was the meal. Diner describes the cultural shifts experienced by immigrants in the new world: “In America they ate what they wanted . . .
They measured the changes they had experienced in status and well-being by inventing new foods and calling them Italian. Food embodied where they had come from and what they had achieved.” Women, through their preparation of foods to mark special occasions in the home and in the community, often articulated culture and important
values of the immigrant family. Meals around the table cooked by the women of the family represented the primary and tangible cultural element that allowed communities to remain “Italian” as they encountered new aspects of American culture. Along with being affected by the new country culture, Italian immigrants contributed to the cultural landscape of the United States. Immigrants formed “Little Italies” in cities on the northeastern coast, as well as in other cities in the south and in California. As these communities grew and prospered, Italian food, entertainment, and music influenced American life and culture. These processes of integration were not without their ups and downs. Throughout the 20th century, Italian immigrants who were able to assimilate in the direction of more established mainstream white Protestant ideals of beauty, food, and style generally had an easier time. Discrimination against Italian Americans was tangible, and only with the evolution of the United States as a multicultural entity did it subside. No culture in the world is immune from caricature or misrepresentation, and there is a timely body of academic literature on caricature and cultural politics. Italian immigrant families have most often been portrayed in the American mass media as being perennially connected to organized crime. The central institution of the family is therefore seen only in its perversion as “family,” a “Mafia” or “mob” clan whose loyalties extended only as far as the family’s authoritarian male leader, or “boss,” allowed. Studies have demonstrated that mass media has over time perpetuated this notion. Stereotypical characters and scenarios in the iconic movie The Godfather and popular television series The Sopranos are the first to come to mind. While these are obvious caricatures, it is true that kin and village ties were strong among Italians in the New World in both the public and private spheres, often for reasons of survival. 
The padrone, a potentially exploitative figure, was indispensable to immigrants who needed to navigate the world of employment. A more contemporary interpretation of Italian Americans that has been criticized for perpetuating negative stereotypes is MTV’s Jersey Shore. Numerous Italian American groups and individuals have criticized certain elements of mass media that have downplayed centuries of valiant—or even ordinary—history, in favor of promoting small
subcultures as representative of the dominant Italian American culture. Emigrants from Italy, whether they returned permanently to the old country or put down permanent roots in the United States, encountered new realities and upheld their traditions through the filter of the strong family unit and its institutions. Similarly, since the arrival of Italian immigrants in significant numbers, all American families have incorporated, been influenced by, or at least digested aspects of Italian American culture.

Odette Boya Resta
Johns Hopkins University

See Also: Catholicism; German Immigrant Families; Irish Immigrant Families; Polish Immigrant Families.

Further Readings
Baldassar, Loretta, and Donna R. Gabaccia, eds. Intimacy and Italian Migration: Gender and Domestic Lives in a Mobile World. Bronx, NY: Fordham University Press, 2011.
Brittingham, Angela and G. Patricia De La Cruz. Ancestry: 2000. Washington, DC: U.S. Dept. of Commerce, Economics and Statistics Administration, U.S. Census Bureau, 2004.
Diner, Hasia R. Hungering for America: Italian, Irish, and Jewish Foodways in the Age of Migration. Cambridge, MA: Harvard University Press, 2001.
Mount Holyoke College. “From Europe to America: Immigration Through Family Tales, History of Italian Immigration.” https://www.mtholyoke.edu/~molna22a/classweb/politics/Italianhistory.html (Accessed September 2013).
Reimers, David M. “Post–World War II Immigration to the United States: America’s Latest Newcomers.” Annals of the American Academy of Political and Social Science, v.454/1 (March 1981).
Schneider, David M. American Kinship: A Cultural Account. 2nd ed. Chicago: University of Chicago Press, 1980.

J

Japanese Immigrant Families

The Japanese immigrated to the United States in two major historical periods, before and after World War II. Significant Japanese immigration occurred following the political, cultural, and social turmoil and structural changes stemming from the 1868 Meiji Restoration. Many of these immigrants arrived in Hawai‘i and on the West Coast. The 1907 “Gentlemen’s Agreement,” a formal agreement between Japan and the U.S. government enacted by President Theodore Roosevelt’s unilateral action and never written into law, essentially ended immigration of Japanese unskilled workers to the United States, but spouses of Japanese immigrants already in the United States, including businesspersons and students, were allowed to come to the United States. This agreement, however, was nullified by the Immigration Act of 1924, which legally banned all Asians from migrating to the United States. Significant Japanese immigration did not take place again until the Immigration Act of 1965 ended 40 years of bans against immigration from Japan and other countries. This act abolished the national origins quota system, which had dictated American immigration policy since the 1920s. Due partly to the rise of the civil rights movement of the 1960s,
the ban on immigration was seen as an embarrassment by many Americans, including President John F. Kennedy. After Kennedy’s assassination, President Lyndon Johnson signed the bill at the foot of the Statue of Liberty as a symbolic gesture. The 1965 act replaced the quota system with a preference-based system that focused on immigrants’ skills and family relationships with U.S. citizens or residents. However, visas were restricted to 170,000 per year, with a per-country-of-origin quota, not counting immediate relatives of U.S. citizens, former citizens, ministers, or employees of the U.S. government abroad. The pattern of post-1965 immigration from Japan has been quite similar to that from Western Europe: it is characterized by low numbers and is usually related to marriages between U.S. citizens and Japanese nationals, with some immigration based on employment preferences. The number of Japanese immigrants (Shin Issei, or the new generation of Japanese in the United States) averages 5,000 to 10,000 per year and is similar to the number of immigrants to the United States from Germany. This is in stark contrast to the large numbers of other Asian immigrants, for whom family reunification is the primary impetus for immigration.

Internment

Historically, what sets Japanese Americans’ and immigrants’ experiences apart from those of other groups is their removal and imprisonment during
World War II by the U.S. government. More than 120,000 men, women, and children of Japanese descent, regardless of their citizenship, were ordered into concentration camps, presumably for fear that they presented a threat to national security. Further, children comprised more than half of the Japanese Americans and immigrants interned in the camps. The trauma of this imprisonment affected virtually all of those who were interned. In addition to severe economic losses, the internees suffered psychologically, including an increased sense of fear and helplessness and a loss of self-esteem, which eventually affected their parent-child relationships. The strong parental authority common among Japanese American and immigrant families, along with the close-knit family structure characteristic of these families before the war, was significantly weakened during the internment. The victimization experienced by the second-generation (Nisei) Japanese Americans during the internment also influenced the way that they communicated with their third-generation (Sansei) children. Many Sansei reported that their parents maintained a silence about their experiences in the camps that inhibited communication and created a sense of secrecy within families. Sansei who had a parent interned also felt a significantly greater sense of vulnerability than their counterparts who did not.

Demographic Characteristics

Americans of Japanese heritage have historically been among the three largest groups of Asian Americans, along with people of Chinese and Filipino descent. Like the Chinese, the Japanese arrived in the United States as agricultural workers; but unlike the Chinese, who were also concentrated in railroad construction in western states, a large proportion of Japanese immigrants became plantation workers in Hawai‘i. In the 1920s, nearly 43 percent of Hawai‘i’s population was Japanese.
On the mainland, many Japanese who were first employed in agriculture soon became self-employed merchants and farmers. By 1925, 46 percent of Japanese immigrants were involved in agriculture. In West Coast cities such as San Francisco, the Japanese immigrants established small enclaves where they could provide emotional and financial support for each other, eat familiar foods, and socialize together speaking their native language.

In recent decades, Japanese Americans have become the sixth-largest immigrant group in the United States, at roughly 1,304,286 people, including those of mixed race or mixed ethnicity (U.S. Census Bureau, 2010). This constitutes approximately 7.53 percent of the Asian and mixed-race Asian populations. Between the 2000 and 2010 censuses, the Japanese American population decreased by 1.2 percent. Among Asian groups, the Japanese population had the highest proportion (41 percent) reporting multiple races, reflecting the prevalence of Japanese marrying members of other Asian groups and other races. According to the 2010 census, the largest Japanese American community was found in California, with 272,528 people, followed by Hawai‘i, New York, and Washington. Southern California has the largest Japanese American population in North America. Japanese immigration is sometimes seen as small or negligible in size, but during the period from 1965 (when racial restrictions on Asian immigration were finally removed) to 2000, there were 176,000 Japanese immigrants, a number similar to those of Pakistanis (204,000), Thais (150,000), Cambodians (206,000), Hmong (186,000), and Laotians (198,000).

Family Formation

Japanese immigrants, more than other early Asian immigrants, arrived in the United States to settle and raise families. This is particularly true of Japanese immigrant women, the majority of whom arrived from 1908 to 1924, entering as wives of men previously settled in the United States. This resulted in a concentrated period of family formation that produced the first American-born generation, the Nisei. The adoption of the 1907 “Gentlemen’s Agreement,” which ended the immigration of Japanese unskilled laborers, gave rise to the term picture brides, referring to Japanese immigrant men’s marriages of convenience made at a distance through matchmakers who exchanged photos of the prospective spouses.
By establishing marital bonds at a distance, Japanese women seeking to immigrate to the United States were able to receive a passport, and Japanese male workers in America were able to marry women of their own ethnicity. After establishing themselves with farms or businesses, many Japanese immigrant men took advantage of this custom and married Japanese-born women seen only in photos, without cultivating
personal relationships. Most of these picture brides, who often came from areas of Japan similar to those of their husbands, eventually settled into their marriages and the new lifestyle and worked diligently with their husbands in businesses and on farms, but some, after seeing their husbands for the first time, rejected them and returned to Japan. In some cases, their husbands turned out to be alcoholics or physically abusive, but many of these couples stayed married for the sake of their children. Today, Japanese-born wives of American citizens account for almost half of all Japanese immigrants to the United States. From 1945 to 1985, Japan was the sixth-largest source of foreign spouses (mostly female) immigrating to the United States. During that period, the 84,000 foreign-born spouses made up well over half (55 percent) of the 154,000 immigrants from Japan. The husbands include Japanese Americans as well as Americans of other racial backgrounds.

[Photo: A Japanese American family in Los Angeles, California, waits for a train to evacuate them to Owens Valley under a U.S. Army war emergency order. During World War II, more than 120,000 people of Japanese descent were ordered into internment camps.]
Intergenerational Relationships

The 40-year immigration ban on nearly all Japanese produced unusually well-defined generational groups within the Japanese American and immigrant community, whose members distinguish themselves with the terms Issei, Nisei, and Sansei. Original immigrants belonged to the immigrant generation, the Issei, and their U.S.-born children to the Nisei generation. The Issei comprised exclusively those who had immigrated before 1924. Because no new immigrants were permitted, all Japanese Americans born after 1924 were, by definition, born in the United States. This generation, the Nisei, became a cohort distinct from the Issei in terms of age, citizenship, and English-language ability, in addition to the usual generational differences. Institutional and interpersonal racism led many of the Nisei to marry other Nisei, resulting in a third distinct generation of Japanese Americans, the Sansei. The Sansei, a post–World War II baby boom generation, reached its peak in the early 1960s. Although the current generations of young Japanese people in the United States are referred to as Yonsei (the fourth generation) or Gosei (the fifth generation), these age cohorts are a much more complex mixture of ethnic and racial backgrounds. Today, Japanese Americans have the oldest demographic structure of any nonwhite ethnic group in the United States. Researchers have often attributed Japanese American success in educational and occupational attainment to a strong bond between parents and children, which gave rise to an “ideal family myth,” including a strong respect of Japanese Americans for their elderly family members. In examining adult children’s support for their elderly Japanese American parents, however, researchers found that parental need for assistance was more strongly related to children’s provision of support than was the cultural concept of filial piety.
Thus, today’s Japanese Americans and immigrants can be seen as placing more emphasis on the financial condition of their elderly parents than on the cultural ideal of filial piety when supporting them.

Education
In the years prior to World War II, many second-generation Japanese Americans attended the American school by day and the Japanese school in
the evening to keep up their Japanese skills as well as their English. Other first-generation Japanese American parents, worried that their children might face the same discrimination at school, gave them a choice of going back to Japan to be educated or staying in America with their parents and studying both.

Japanese American and immigrant cultures place great value on education. Across generations, children are instilled with a strong desire to enter the rigors of higher education. Because such ambition is widespread among members of the Japanese American community, their math and reading scores on nationwide standardized tests often exceed the national averages. Additionally, a large proportion of Japanese Americans (40.8 percent) obtain postsecondary degrees. Because of this high educational attainment, Japanese Americans are often described as the “model minority.” In reality, however, a gap still exists among Japanese Americans themselves in terms of educational and occupational attainment; thus, it is inaccurate to label Japanese Americans and immigrants as the “model minority” group.

Interracial Marriage
Before the 1960s, the rate of Japanese Americans marrying partners outside their racial or ethnic group was generally low. This may be attributed to the encouragement of many traditional Issei parents for their Nisei children to marry only within their ethnic and cultural group. Arrangements to “purchase” and invite picture brides from Japan to relocate and marry Issei or Nisei males were quite common. Until the end of World War II, California and other western states attempted to make it illegal for Japanese and other Asian Americans to marry European Americans; those laws, like the antimiscegenation laws that prevented European Americans from marrying African Americans, were declared unconstitutional by the U.S. Supreme Court in the 1960s.
According to a 1990 statistical survey by the Japan Society of America, the Sansei have an estimated out-group marriage rate of 20 to 30 percent, while the rate for the Yonsei approaches nearly 50 percent. Japanese American women are marrying European American and other Asian American men with increasing frequency, but lower rates of marriage to Hispanic, Native American, and African American men are reported. Despite these variations, interracial marriage is expected to remain a popular option among younger Japanese Americans as well as Japanese immigrants.

Masako Ishii-Kuntz
Ochanomizu University

See Also: Chinese Immigrant Families; Immigrant Families; Indian (Asian) Immigrant Families; Korean Immigrant Families; Vietnamese Immigrant Families.

Further Readings
Daniels, Roger. The Politics of Prejudice: The Anti-Japanese Movement in California and the Struggle for Japanese Exclusion. Berkeley: University of California Press, 1999.
Inui, Kiyo Sue. “The Gentlemen’s Agreement: How It Has Functioned.” Annals of the American Academy of Political and Social Science, v.122 (1925).
Ishii-Kuntz, Masako. “Diversity Within Asian American Families.” In Handbook of Family Diversity, Dave Demo, Katherine Allen, and Mark Fine, eds. Oxford: Oxford University Press, 2000.
Ishii-Kuntz, Masako. “Intergenerational Relationships Among Chinese, Japanese and Korean Americans.” Family Relations, v.46/1 (1997).
Ishii-Kuntz, Masako. “Japanese American Families.” In Families in Cultural Context: Strengths and Challenges in Diversity, Mary Kay DeGenova, ed. Mountain View, CA: Mayfield, 1997.
Ishii-Kuntz, Masako. “Shin Issei and Their Adaptation to American Society.” Orange Network, v.2/7 (1994).
Ishii-Kuntz, Masako, Jessica Gomel, Barbara Tinsley, and Ross Parke. “Economic Hardship and Adaptation Among Asian American Families.” Journal of Family Issues, v.31/3 (2010).
Masuda, Hajimu. “Rumors of War: Immigration Disputes and the Social Construction of American–Japanese Relations, 1905–1913.” Diplomatic History, v.33 (2009).
Neu, Charles E. An Uncertain Friendship: Theodore Roosevelt and Japan, 1906–1909. Cambridge, MA: Harvard University Press, 1967.
Takaki, Ronald. Strangers From a Different Shore: A History of Asian Americans. Boston: Little, Brown, 1989.
U.S. Census Bureau. “Race Reporting for the Asian Population by Selected Categories.” Washington, DC: U.S. Census Bureau, 2010.



Judaism and Orthodox Judaism
The term Jew, which began as a tribal name and later became a national title, today refers to many things: an ethnic group, a philosophy, a religion (Judaism), a tradition, or a way of life. Although Jews have comprised a relatively small portion of the world population (currently a mere 14 million people), over the last 3,000 years the sacred texts (Hebrew Bible) and monotheistic tradition of the Jewish people have been foundational in Western civilization. The Jews, while suffering some of the greatest persecutions of any group in recorded history, have nevertheless managed to produce some of the most influential intellectual figures to date, including Albert Einstein, Sigmund Freud, Karl Marx, and Jesus of Nazareth. Of the 826 Nobel Prize winners to date, 187 (22 percent) have been Jewish.

Jews are often viewed by historians and social scientists, including scholars of American culture, family, and religion, as a fascinating group to study for several reasons: (1) Judaica, referring to Jewish history, religion, and tradition, dates back more than 3,000 years and contains one of the most complex histories, legal and religious systems, and philosophical traditions of any ethnic or religious group; (2) the Jews’ long history as a distinct minority makes them an ideal group for researching an array of important social issues, including assimilation, ethnicity, identity formation, and oppression; (3) the Jewish population is significantly more educated than the general population: as of 1990, 50 percent of Jewish males and 48 percent of Jewish females had completed at least one college degree, more than double and triple the national averages, respectively; and (4) traditional Jewish groups have exceptionally high rates of within-faith marriage and fertility, while more liberal and secular Jews have low rates of marriage and fertility, as well as high rates of intermarriage, providing a study in contrasts.
Foundations of Judaism
According to the Hebrew Bible, a nomadic tribe originally from Mesopotamia eventually settled in Egypt in the early 2nd millennium b.c.e. This period in Jewish history is occasionally referred to as the Patriarchal Period because this nomadic tribe was led by four generations of noble
patriarchs: Abraham, Isaac, Israel (also called Jacob), and Joseph. The Hebrews, also called Israelites after the grandson of Abraham, eventually escaped Egypt and settled in what is today Israel and Palestine. Although the Israelites had become, by choice, geographically divided into 12 territories, one for the descendants of each of the 12 sons of Israel, they were unified by both a temple and a legal system that the God of Israel revealed to them through Moses. The Israelites offered animal sacrifices and other offerings at the temple in Jerusalem to the God of Israel. The temple worship of ancient Israelite religion is the foundation of what would later be called Judaism. The ancient Israelites were led politically by kings (the most famous of them being David and Solomon), ritually by high priests who oversaw affairs of the temple, and religiously by prophets. Many Israelites began calling themselves Jews (yehudim in Hebrew) after Judah, the name of one of Israel’s sons and a dominant tribe in Israel, as early as the 8th century b.c.e. Within a few hundred years, by the time the Babylonians had destroyed the temple in 586 b.c.e., members of all 12 tribes of Israel were calling themselves Jews.

Rabbis, the Synagogue, and Jewish Law
Although ancient Israelite law and religion are the foundation of Judaism, the religion as practiced today by traditional Jews was largely developed by the rabbis (masters or teachers) in late antiquity. Thus, this traditional Judaism is also called Rabbinic Judaism. After the Jews rebuilt the temple in Jerusalem in 516 b.c.e., they were no longer led religiously by prophets but rather by scholars and rabbis who interpreted both the written law (Hebrew Bible) and the oral law (traditions). By the 1st century c.e., localized centers of worship had become common.
In addition to the temple in Jerusalem, these smaller centers, proseuchē (place of prayer) or sunagōgē (place of assembly) in Greek, were places where Jews could gather and worship. After the Romans sacked the temple in 70 c.e., these synagogues became the central places of worship in each community, and the rabbis assumed an even more important role in the survival of Judaism as a religion. The rabbis in the first six centuries of the Common Era produced one of the most complex and extensive codes of religious law ever written, totaling more than 60 tractates in more than
30 volumes. This code, the Talmud, contains rabbinic discussions on Jewish law (halakha) and ethics (based largely on the Hebrew Bible), including issues of business, diet, education, family life, war, and worship.

The Struggle to Define Judaism
Throughout the Middle Ages, Jews continued to orient themselves by the Hebrew Bible and the Talmud, but they also looked to the contemporary Jewish intelligentsia to interpret these sources for direction on Jewish legal and religious matters. Sa’adia Gaon (d. 942), for example, codified the first Jewish prayer book for synagogue worship (siddur), on which today’s prayer books are modeled. Maimonides (d. 1204) formulated 13 principles of faith and wrote a code of law (Mishneh Torah) meant to be more accessible to the Jewish masses than the Talmud. The Mishneh Torah (retelling of the law, or second law) is widely consulted and studied by some Jewish groups today.

In the early Modern period, after they were expelled from Spain in 1492, Jews began raising questions about how they, as a cultural, religious, and social minority, could better live and survive (both religiously and temporally) within the dominant society. Two major positions dominated the dialogue. One position (particularism) argued that Jews must largely remain insular and accept only Jewish ways of thinking (Hebrew thought) because all other forms of thinking (e.g., Greek philosophy) were either inimical or superfluous to the Jewish way of life. Accepting other ways of thinking, it was argued, would eventually lead to mass assimilation. The other position (accommodationism) maintained that Jews must accommodate to the dominant culture and accept “truth” wherever it exists (not only through Hebrew thought), including through Greek philosophy. These two philosophies clashed intensely over the next few centuries, and by the early to mid-19th century Judaism had produced three separate major movements: Conservative Judaism, Orthodox Judaism, and Reform Judaism.
Denominationalism and American Jewry
Reform Judaism was the first movement to emerge in the early 19th century. A segment of the Jewish population in Western Europe, particularly in Germany, adopted an accommodationist approach and
argued that much of Jewish belief and practice was antiquated, superstitious, or unnecessary. Proposals from within the Jewish community to adjust, reinterpret, and modernize both the belief system and the religious legal system were rejected by the particularists. Reform Judaism spread across Western Europe, and by the early 1820s a Reform synagogue had been established in Charleston, South Carolina. Currently in the United States, Reform Jews comprise roughly 35 percent of the adult Jewish population.

Orthodox Judaism emerged as a systematic movement in response to Reform Judaism. Jews who rejected proposals for change argued that Judaism cannot be reformed because, as God’s creation, Judaism transcends space and time; therefore, any attempt to reform Judaism was anathema to traditional Jews. According to many traditional Jews, called “Orthodox” today, Reform Judaism is not considered Judaism. Orthodox Jews constitute roughly 26 percent of the adult Jewish population in the United States.

Jews who are called Conservative in North America (“Masorti” outside North America), comprising roughly 27 percent of the adult Jewish population in the United States, offered a moderate alternative to the Orthodox and Reform positions. Conservative Jewish synagogues range on the spectrum from more liberal to more traditional but typically fall somewhere in between Orthodox and Reform.

The intense philosophical debates that started centuries ago in Europe continue to the present among the three major branches of Judaism. These debates influence the Jewish way of life and Jewish families in America. Perhaps the best example is the classic question “Who is a Jew?” Traditional Jewish law (observed by Orthodox and some Conservative congregations) considers a person to be a Jew under two circumstances: (1) a person whose mother is Jewish (regardless of the status of the father), and (2) a person who has converted to Judaism by “proper” authority and appropriate procedures.
issues as who should be considered a Jew, many people who thought they were Jewish their entire lives have had their Jewish status delegitimized by other Jewish groups, or by the State of Israel upon relocating there. This reality affects various aspects of Jewish family life, including dating and mate selection.

The Jewish American Family
Jewish family structures and family roles in the United States are as diverse as those of the general population. Some Jews are highly religious and family centered, while others are nonreligious and do not anxiously pursue a family-centered life. Many Jewish leaders in North America are optimistic about the future of American Jewry. Recent studies reveal that Jews (even secular Jews), by and large, remain involved in the Jewish community through attending synagogue services, enrolling their kids in Hebrew school or Jewish summer camps, participating in Jewish holiday activities, or taking local adult classes in Hebrew or Jewish studies. At the same time, however, some Jewish leaders and social scientists are not optimistic about the future of American Jewry. Research reveals that Jewish families in the United States are facing challenges including low marriage rates, high intermarriage rates, low birthrates, and low fertility rates.

Marriage and Intermarriage
Rabbinic Judaism teaches that marriage is a commandment of God: “It is not good that the man should be alone; I will make him a helper as his partner . . . Therefore a man leaves his father and his mother and clings to his wife” (Genesis 2:18, 24). As a result of a strong emphasis on marriage in Jewish law, the Orthodox Jewish community experiences a high (and largely within-faith) marriage rate, as well as a lower divorce rate than most religious and ethnic groups in the United States. However, the national Jewish population experiences a different reality because more than half of American Jews are nonreligious (approximately 55 percent) and do not follow Jewish law.
Research shows that the age at first marriage for Jewish men and women is higher than the national average (28 for men and 26 for women). At age 35, 52 percent of Jewish men and 36 percent of Jewish women are not married, which is also higher than the U.S. general population (41 percent for men and 30 percent for women).

Perhaps the most difficult challenge facing the Jewish family and Jewish ethnic identity in the United States is the issue of intermarriage. Before 1970, the intermarriage rate among Jews was only 13 percent. Today, that number has nearly quadrupled to 47 percent. In addition, 52 percent of Jewish young adults were born to an intermarried couple, and roughly 75 percent of those born into intermarried families are now choosing to marry non-Jews, compared to 28 percent of those born into families with two Jewish parents. Research has also revealed that while 98 percent of children with two Jewish parents are raised Jewish, only 39 percent of children in intermarried families are raised Jewish.

Birthrate and Fertility Rate
According to the most recent (2000–01) National Jewish Population Survey, the American Orthodox Jewish population doubled in size from the early 1980s to 2000. This is to be expected, since the Orthodox Jewish community not only experiences relatively high marriage rates and lower divorce rates but also averages more children per family than any other Jewish group. Many traditional Jews take seriously the biblical commandment to “Be fruitful and multiply, and fill the Earth” (Genesis 1:28). The rabbinic sages, as far back as the 1st century c.e., determined that this commandment is fulfilled when a couple has had at least two children (to replace themselves). Research reveals that a majority of married Orthodox Jewish women have four children or more. In contrast, the national Jewish population experiences a net loss every decade (despite the growth of the Orthodox community). Since 1950, American Jewish couples at the end of their childbearing years have averaged 1.57 children. Since 2000, the number has grown slightly, to an average of 1.8 children per couple.
This means that among more liberal and secular Jews in the United States, who make up a large majority of the Jewish population, the birthrate is lower than the national average (2.1 children per family). A few factors contribute to the low birthrate among American Jews: (1) economic prosperity among Jews is not as high as in previous generations and, therefore, couples are having fewer children; and (2) Jewish women are postponing marriage due to career or educational pursuits, which is shortening the time frame for childbearing. For these and other reasons, Judaism and Jewish families continue to be a source of fascination for social scientists and scholars of the family.

Trevan G. Hatch
Loren D. Marks
Louisiana State University

See Also: Bar Mitzvahs and Bat Mitzvahs; Hanukkah; Passover.

Further Readings
Ben-Sasson, H. H., ed. A History of the Jewish People. Cambridge, MA: Harvard University Press, 1985.
Dorff, Elliot N. “The Jewish Family in America: Contemporary Challenges and Traditional Resources.” In Marriage, Sex, and Family in Judaism, M. J. Broyde, ed. Lanham, MD: Rowman & Littlefield, 2005.
Mindel, C. H., B. Farber, and B. Lazerwitz. “The Jewish American Family.” In Ethnic Families in America: Patterns and Variations, 5th ed., R. Wright, C. H. Mindel, T. V. Tran, and R. W. Habenstein, eds. Upper Saddle River, NJ: Pearson, 2012.
National Jewish Population Survey. “The National Jewish Population Survey 2000–2001: Strength, Challenge, and Diversity in the American Jewish Population.” New York: United Jewish Communities, 2004.
Sarna, J. D. American Judaism: A History. New Haven, CT: Yale University Press, 2004.

K

Kindergarten
In the United States, kindergarten is generally considered the first year of primary school and has become an established part of the elementary education system. According to the National Center for Education Statistics, 42 states and the District of Columbia offered kindergarten programs in 2011, and approximately 4 million children enrolled in half- or full-day kindergarten in the same year. Most children begin kindergarten at the age of 5 or 6 with a curriculum that focuses on basic literacy and numeracy skills, socialization skills, and physical development.

Beginnings
Friedrich Fröbel, a German educator and educational theorist, founded the first kindergarten in 1837 in Blankenburg, Germany. He combined the German words Kinder (“children”) and Garten (“garden”) in the name of his new model of education to illustrate his belief that young children needed to be nurtured like plants in a garden. The “child garden” was a place where children could grow through social experiences, discovery, and play. His vision for uniting the child’s stage of development with external actions led to a method of schooling centered on fostering a child’s natural curiosity about the world through activities. In this way, Fröbel advanced the idea that children learn through experience. Experiences,

however, needed to be guided and purposeful. A teacher, for example, could use games, poems, songs, and object lessons to have children learn a wide range of knowledge, from moral values and social harmony to spatial relationships and mathematical concepts.

The kindergarten movement gained momentum throughout Germany by the 1840s, and German immigrants eventually brought the concept to the United States. In Watertown, Wisconsin, Margarethe Meyer Schurz began the first American kindergarten in 1856. Schurz had studied under Fröbel and learned his core philosophy and methods. With six children, including her own daughter, Schurz provided instruction in German and focused on the preservation of her native language and culture. As other German immigrants who had studied under Fröbel founded kindergartens in other cities, the kindergarten movement continued to gain momentum, primarily within German American communities.

In 1860, Elizabeth Palmer Peabody founded what is considered the first English-speaking kindergarten, in Boston. Through her published writings as well as her school, Peabody introduced the kindergarten movement to a wider American audience. By 1873, the first public school kindergarten had opened in St. Louis, Missouri, and the movement quickly spread to other cities. Organizations such as the Free Kindergarten Association and institutions such as the Chicago Kindergarten Training School provided women with the necessary knowledge and
credentials to support the movement. As the 19th century came to a close, more than 4,000 kindergartens had been established throughout the United States.

Kindergarten Redefined
In the late 19th and early 20th centuries, kindergartens became an important part of the U.S. education system. With the great influx of new immigrants, kindergarten served as a way to assimilate younger children earlier to the dominant language, values, and norms of American society. Reform-minded, middle-class communities viewed kindergarten as a way to support and, in some instances, take responsibility for what was seen as an inadequate home life. Kindergarten, to them, was a means for a proper upbringing and preparation for future schooling. In this respect, kindergarten emerged as an extension of elementary education and consequently distanced itself from Fröbel’s original vision of nurturing the hands, hearts, and minds of children through developmentally appropriate approaches. Tensions increased throughout this time between those who stayed true to Fröbel’s theories and methods and those who sought to make kindergarten a lengthening of the elementary school experience.

As the fields of psychology and education emerged as established social sciences through individuals such as G. Stanley Hall and John Dewey, Fröbel’s theories and methods gained newfound credibility. The study of the child helped support the claim that young children needed an educational experience distinctly different from that of older children. In an educational era characterized by drill and practice, more calls were made for the schooling of younger children to resemble Fröbel’s “child garden.” The scientific approach to studying the child revealed the value of play, experimentation, discovery, and socialization in school. During this time, educational reformers emphasized that elementary education should take on the qualities and characteristics of a kindergarten.
One of the foremost thinkers in developing the American kindergarten in the 20th century was Patty Smith Hill. As various interests struggled against one another to shape the kindergarten as an institution, some relying on Fröbel’s principles and others relying on traditional methods of elementary education, Hill conveyed that American kindergarten teachers had no responsibility to follow Fröbel’s theories and methods and certainly
rejected the idea that the needs of children could be met through traditional elementary education practices. Based on her training and experiences as a kindergarten teacher, Hill established her own ideas on how best to educate young children. While not opposed to Fröbel’s teachings in general, Hill firmly believed that kindergarten should support and foster a child’s creative and independent thinking beyond what Fröbel put forward. To her, a child was an autonomous individual with his or her own interests and social needs. Providing a context for self-direction where each individual child could play, explore, and collaborate with others moved the child to the center of learning. In this way, Hill adopted a number of key features of Progressive education and made them common characteristics of the kindergarten classroom. Eventually becoming a professor at Teachers College and remaining an advocate for universal kindergarten, Hill helped form a number of core beliefs relating to the purposes of kindergarten and best instructional practices for kindergarten teachers in the United States.

As the 20th century came to a close, pressure mounted for schools to meet set assessment measures. In the era of high-stakes testing and accountability, a number of kindergarten programs across the United States continue to face new pressures. The demand for academic readiness has placed additional requirements and expectations on kindergartners. Test preparation, subject-specific lessons, direct instruction, and seatwork have become commonplace in many kindergartens, especially within school districts that fall short of state benchmark scores. Structuring kindergarten as a way to determine readiness for first grade, whether or not the child is developmentally able to meet the criteria, has come under heavy criticism. Called into question is the validity of so-called readiness tests that purport to predict school achievement at such an early age.
Given that some school districts have retained students in kindergarten based on such tests, the policies of assessing students at such a young age are viewed as problematic and, according to some researchers, detrimental to a child’s future. In addition to testing, another recent development within kindergarten is “redshirting,” or parents intentionally waiting to enroll their children in school as a way to ensure that their children are better prepared and oftentimes ahead of younger classmates in cognitive, social, and physical
development. As a result, some school districts have enacted policies that prevent children from enrolling in kindergarten based on age, with the consequence that the child either begins first grade or the parents must pay for a private kindergarten program; such policies are intended to prevent parents from “redshirting” their children.

John J. Laukaitis
North Park University

See Also: Education, Elementary; Education, Preschool; Education/Play Balance.

Further Readings
Beatty, Barbara. Preschool Education in America: The Culture of Young Children From the Colonial Era to the Present. New Haven, CT: Yale University Press, 1995.
Lascarides, V. Celia, and Blythe F. Hinitz. History of Early Childhood Education. New York: Routledge, 2011.
Rose, Elizabeth. The Promise of Preschool: From Head Start to Universal Pre-Kindergarten. Oxford: Oxford University Press, 2010.

Kinsey, Alfred (Kinsey Institute)
Alfred Charles Kinsey was an accomplished biology professor at Indiana University and a pioneer in the study of human sexuality. Kinsey became known for his publication of The Kinsey Reports, two groundbreaking books on sexual behavior that challenged mainstream ideas and attitudes about sex and opened the lid, so to speak, on sexuality in American families. In addition to authoring The Kinsey Reports, he also founded the Kinsey Institute for Research in Sex, Gender, and Reproduction. The Kinsey Institute, as it is often called, supports continued scholarship and research in the field of human sexuality and promotes Kinsey’s legacy.

Kinsey
Kinsey was born June 23, 1894, in Hoboken, New Jersey, and died on August 25, 1956, at the age of 62. He completed his undergraduate studies at Bowdoin College and earned a doctorate in biology
at Harvard University. Although Kinsey became a household name because of his human sexuality research, he spent more than 20 years of his early academic career specializing in taxonomy. He classified and studied the individual variations of hundreds of thousands of gall wasps collected from widespread locations. Kinsey also improved existing research methodology in his field and published numerous articles and books that contributed to the study of evolutionary theory. By 1938, he was known for his extensive research on gall wasps and was recognized as a leader in his field by American Men of Science.

Serendipitously, while teaching at Indiana University, Kinsey was asked to offer a course on marriage and family. While preparing to teach the course, he discovered that the human sexuality literature was quite limited and that what did exist was dictated by morality and religion. Kinsey then committed to studying human sexuality and creating a sexual taxonomy using methodology as objective and scientific as what he had used to study gall wasps. Kinsey dedicated the remaining years of his career to conducting and overseeing an extensive program of sex research, which included collecting and studying participants’ sexual histories. Between 1938 and 1963, Kinsey’s research team conducted more than 18,000 comprehensive, face-to-face interviews about participants’ sexual tendencies, acts, fantasies, responses, and performance. Kinsey hoped the objectively derived data would bring a more reasoned, scientific perspective to the subjects of human sexuality, sexual relations, and sex education.

In 1948, Kinsey and his colleagues Clyde Martin and Wardell Pomeroy published the first volume of results from their research, titled Sexual Behavior in the Human Male, followed five years later by the second volume, Sexual Behavior in the Human Female.
These reports, often referred to as The Kinsey Reports, presented the scientific study of human sexuality to academia as well as mainstream American society. The reports prompted widespread criticism and controversy for recounting people’s sexual behaviors objectively, without acknowledging feelings and attitudes or deferring to the more conservative and conventional views of sex at the time. One such view was that homosexuality was a threat to the stability of American families. Yet, as Kinsey’s team conducted interviews, it was
discovered that many people’s sexual behaviors, thoughts, fantasies, and feelings were not always directed exclusively toward one sex and that homosexual experiences and thoughts were reported by some people who considered themselves heterosexual. To account for these findings, Kinsey’s team developed the Heterosexual-Homosexual Rating Scale, also called the Kinsey Scale, which accounts for variations and degrees of homosexuality and heterosexuality. Data generated by the scale suggested that same-sex encounters could be a part of sexual exploration during adolescence and young adulthood as was reported by many participants. Kinsey’s published research on homosexuality is cited as initiating a cultural shift in attitudes toward homosexuality and influencing the American Psychiatric Association’s 1973 decision to remove homosexuality from the Diagnostic and Statistical Manual of Mental Disorders. Although Kinsey’s research and publications have been generally recognized as major contributions to the academic study of human sexuality, his work has not only been controversial but also criticized for its lack of methodological rigor. The American Statistical Association questioned the validity of Kinsey’s data because of sampling procedure limitations, and other academicians questioned the validity of his methodology, which relied solely on participant recollections of their sexual histories, possibly resulting in inaccuracies. Finally, some participants recounted sexual experiences with minors and were not reported to the authorities. Kinsey’s data have been reanalyzed numerous times since his original publications were released and the data remain available for continued study through the Kinsey Institute.

Collections, hosts exhibits at the Kinsey Institute Gallery, provides educational workshops and conferences for scholars, publishes the Kinsey Today Newsletter, and maintains Kinsey Confidential, a Web site and blog offered through the institute’s Sexuality Information Service for Students. Additionally, the Kinsey Institute, in collaboration with Indiana University, seeks to enhance interdisciplinary sex education and research by offering a Ph.D. minor in human sexuality. Today, the Kinsey Institute continues to support Kinsey’s mission to move society toward a healthy and informed approach to human sexuality by supporting those who desire to study and conduct research in the field of human sexuality.

Kinsey Institute In 1947, Kinsey and his research team, with the support of Herman B. Wells, the president of Indiana University, obtained funding from the National Research Council and founded the Institute for Sex Research, which archived all of Kinsey’s research interviews and records. Later, the institute expanded to become the Kinsey Institute for Research in Sex, Gender, and Reproduction. The Kinsey Institute strives to provide leadership in expanding sexual knowledge and health around the world and supports an ongoing interdisciplinary research program. The institute also offers tours of the Kinsey

Korean Immigrant Families

Brenda Moretta Guerrero Ana G. Flores Amanda Rivas Our Lady of the Lake University See Also: Gay and Lesbian Marriage Laws; Hite Report; Masters and Johnson; Open Marriages; Polygamy; Same-Sex Marriage; Sex Information and Education Council of the United States. Further Readings Kinsey, A. C., W. B. Pomeroy, and C. E. Martin. Sexual Behavior in the Human Male. Philadelphia: W. B. Saunders, 1948. Kinsey, A. C., W. B. Pomeroy, C. E. Martin, and P. H. Gebhard. Sexual Behavior in the Human Female. Philadelphia: W. B. Saunders, 1953. The Kinsey Institute. http://www.kinseyinstitute.org (Accessed December 2013).

As the fifth-largest Asian immigrant group (after Chinese, Asian Indian, Filipino, and Vietnamese), Korean immigrants are one of the fastest-growing immigrant groups in the United States. Following the Immigration Reform Act of 1965, many adult immigrants brought their parents and children to the United States in pursuit of better economic and educational opportunities. According to the U.S. Census Bureau (2010), there are about 1.7 million Korean immigrants. Post-1965 Korean immigrants usually came to the United States with a background of higher education, urban living, and middle-class socioeconomic status, and more than 50 percent live in four states (California, New York, New Jersey, and Virginia). Because of the vastly different cultures of Korea and the United States, Korean immigrant families demonstrate distinct characteristics, experience life-changing transformations, and face unique challenges.

A nighttime view of Korea Way, a section of Manhattan's Koreatown. Korea Way is a one-block enclave of more than 100 businesses on 32nd Street in New York City. According to the U.S. Census Bureau, there are about 1.7 million Korean immigrants in the United States, more than half of whom live in New York, New Jersey, California, and Virginia.

Differences in Cultural Values of Family
Traditional Korean culture is based on Confucianism, which emphasizes hierarchical order in the family structure, obedience to authority, respect for elders, and the worship of ancestors. For example, Confucianism is evident in the gender roles of husbands and wives. Korean wives are expected to be caregivers for their children and husbands. They are also expected to be submissive to their husbands and self-sacrificing for their families. Korean husbands are supposed to be the breadwinners and decision makers in the family. However, these expectations often change after immigration because of the emphasis on gender equality in the United States. Further, the Confucian belief in filial piety, which emphasizes respect for one's parents, affects many aspects of family life. For example, adult children are expected to care for their aging parents and to sacrifice their personal needs for their parents' well-being. These fundamental cultural beliefs are often retained among Korean immigrant parents. However, children who are born into these immigrant families and grow up in the United States quickly adopt U.S. culture, in which individualism and independence are encouraged. These differences in values may affect parenting behaviors, parent–child relationships, and child adjustment in Korean immigrant families.

Marital Relationships
Marital relationships in Korean immigrant couples are best characterized as hierarchical, wherein husbands have more power and authority than do wives. Historically, women's work was restricted to housework and child care. After immigration, because of the influence of gender egalitarianism in the United States as well as the need for financial survival, this traditional pattern often changes, with wives becoming involved in work outside the home and husbands participating in household labor. Many Korean immigrants are self-employed in small businesses that are labor intensive. Thus, some Korean immigrant women may feel forced out of their traditional gender role into breadwinning to support the family, whereas others may feel more empowered. Korean immigrant men may feel threatened as husbands and fathers as their wives adopt less traditional roles. Regardless of personal preference, such clashing cultural expectations can cause emotional tension and stress in marriages.

Intergenerational Differences and Conflict
First-generation Korean immigrants were born and lived most of their early years in Korea, immigrating to the United States as adults. These adult immigrants are vulnerable to the stresses of changing cultural values and adapting to a new society. In general, those in this generation desire to maintain the values and language of their home country, regardless of how long they reside in the new country. Korean Americans who immigrated during childhood or adolescence are known as the 1.5 generation; they are socialized in both cultures and speak English and Korean fluently. Second-generation Korean Americans are those who were born and spent childhood in the United States.
Although more than half of first-generation parents identified their second-generation children as Koreans, all second-generation individuals define themselves as Korean Americans. This gap in cultural identity between the first and second generations can cause stress and conflict between them. English proficiency often is an additional source of intergenerational conflict. An overwhelming majority of Korean parents speak Korean at home, whereas their second-generation children speak English more fluently and with more frequency than Korean. This language gap can cause emotional distancing and difficulties in parent–child communication. Second-generation Korean American children who are fluent in English may gain more power within the family, acting as translators or mediators who help their parents access resources. Such a change in the family power structure can threaten a father's authority, which, in turn, negatively affects intergenerational relationships in Korean immigrant families.

Parenting
Traditional Korean parenting and childrearing are based on the idea of collectivism drawn from Confucian values, Taoism, and Buddhism. Specifically, Confucianism focuses on filial piety, Taoism emphasizes harmony, and Buddhism stresses compassion and family cohesion. Collectivistic parenting among Korean parents emphasizes interdependence and cooperation. In turn, children are expected to be compliant and obedient to their parents. Individualistic parenting is emphasized in American culture, in which parents encourage children to be independent and autonomous. Such different parenting approaches may pose particular challenges for Korean immigrant parents and children. For example, studies show that Korean parents tend to use harsh discipline because they traditionally consider corporal punishment necessary. However, after immigration, harsh discipline is viewed negatively in the United States, and it is associated with more child behavior problems in Korean immigrant families than among children reared in Korea. Also, Korean parents regard children's educational achievement as their primary responsibility. These parents exert intense pressure and insist on children studying hard, because academic achievement is an indicator of social success and status. In fact, in Korea, parenting efficacy is judged by children's academic achievement. Despite the difficulties, many Korean immigrant parents are involved in their children's school activities and provide various types of support, such as assigning extra homework, teaching readiness skills and math after school, and enrolling children in private tutoring, as means to ensure greater academic success and accomplishment.



Behavioral and Emotional Adjustment Among Korean Immigrant Children
Intergenerational differences are likely to lead to parent–child conflict in these immigrant families. More such conflict, in turn, may negatively affect the behavioral and emotional adjustment of Korean immigrant children. In particular, studies show that Korean American adolescents experiencing unstable and conflicted relationships with their parents also tend to report depression and poor self-esteem. Stress associated with adapting to the new culture, including dealing with issues about one's ethnic identity, may also be associated with greater risk for psychological distress in these adolescents. Specifically, Korean immigrant children who have been in the United States for a shorter time are more likely to show depression, and girls are more vulnerable than boys. Because Korean immigrant parents are not accustomed to expressing their feelings and thoughts, expressions of parental warmth and affection are less likely, and when attempted they may not reduce negative outcomes (e.g., delinquency).

The Korean Immigrant Community
Because immigration often means the loss of social support from one's extended family, Korean immigrant churches play a significant role in the adjustment of Korean immigrant families. Over 70 percent of Korean immigrants regularly attend Korean churches or church-affiliated organizations, even though some hold different religious beliefs (e.g., Buddhism); by comparison, about 25 percent of Koreans participate in a church in Korea. Korean churches provide spiritual support as well as social activities and services that form the Korean immigrant community. Such activities include weddings and funerals, language classes, and celebrations of Korean holidays. Churches provide a safe place to meet people with the same ethnic background and similar experiences. Studies show that Korean immigrants who attended Korean churches reported less depression and that the social nature of religious activities was helpful in reducing depressive symptoms.

Distinct Characteristics of Korean Immigrant Families Among Asian Immigrant Groups
Although historical roots and core values are similar among Asian countries, there are differences in marital relationships and parenting behaviors. For example, marital relationships in East Asia (Korea, China, and Japan) emphasize men's authority, whereas Filipino husbands and wives usually share family finances and decision making. Mothers in East Asia also tend to have less power in managing family finances than do Vietnamese mothers. Regarding intergenerational conflict, Korean American adolescents experience more such conflicts than do Chinese and Japanese adolescents. Also, compared with Chinese, Japanese, and Filipino immigrants, Korean immigrant parents are more likely to use less directive strategies and prefer to model behavior for their children.

Conclusion
Many Korean immigrant families experience stress in adjusting to new cultural values and behavioral norms in the United States. During this process, it is inevitable that they face many challenges, including language barriers, identity confusion, marital stress, intergenerational conflict, and parenting issues. Stable and strong social support from the church and affiliated community can be crucial in determining their success and quality of life. Despite the challenges, for most Korean immigrant families, immigration provides an opportunity for personal growth and enhanced education for children. Understanding the challenges faced by these families can help program administrators, educators, and counselors provide appropriate and meaningful social services.

Hye-Jung Yun
Ming Cui
Florida State University

See Also: Generation Gap; Immigrant Families; Immigration Policy; Parenting.

Further Readings
Chao, Ruth and Vivian Tseng. "Parenting of Asians." Handbook of Parenting, v.4 (2002).
Chun, Jongserl and Joohee Lee. "Intergenerational Solidarity in Korean Immigrant Families." Journal of Intergenerational Relationships, v.4/2 (2006).
Hoeffel, Elizabeth M., Sonya Rastogi, Myoung-Ouk Kim, and Hasan Shahid. "The Asian Population: 2010." In 2010 Census Briefs. Washington, DC: U.S. Census Bureau, 2012.

Kim, Eunjung. "Korean American Adolescent Depression and Parenting." Journal of Child and Adolescent Psychiatric Nursing, v.21/2 (2008).
Kim, Eunjung. "The Relationship Between Parental Involvement and Children's Educational Achievement in the Korean Immigrant Family." Journal of Comparative Family Studies, v.33/4 (2002).
Lee, Eunju. "Marital Conflict and Social Support of Korean Immigrants in the United States." International Social Work, v.48/3 (2005).
Park, So-Youn and Kunsook-Song Bernstein. "Depression and Korean American Immigrants." Archives of Psychiatric Nursing, v.22/1 (2008).
Yeh, Christine J. "Age, Acculturation, Cultural Adjustment, and Mental Health Symptoms of Chinese, Korean, and Japanese Immigrant Youths." Cultural Diversity and Ethnic Minority Psychology, v.9/1 (2003).

Kwanzaa
Kwanzaa, summarized as a celebration of family, community, and culture, is an African American holiday tradition celebrated from December 26 to January 1 each year. Kwanzaa began in the late 1960s during the civil rights era. The holiday tradition is an African American and Pan-African holiday celebrated in the United States and in other countries across the globe by people of African ancestry. Kwanzaa is the first holiday established to celebrate the culture of people of African descent. Kwanzaa's founder is Dr. Maulana Karenga, professor of Africana Studies at California State University, Long Beach. The purpose of Kwanzaa at its founding in 1966 was to fight against the oppression, discrimination, and poverty of that time. Kwanzaa is said to have been celebrated particularly by middle-class African American families, with the women in the family placing special emphasis on the holiday. Karenga's hope in creating Kwanzaa was to provide people of African descent a time to celebrate their heritage and race together. Kwanzaa is a time of harvest celebration. The holiday tradition takes its name from the Swahili phrase matunda ya kwanza, which means "first fruits of the harvest."

Nguzo saba, the seven principles of Kwanzaa, are umoja, unity; kujichagulia, self-determination; ujima, collective work and responsibility; ujamaa, cooperative economics; nia, purpose; kuumba, creativity; and imani, faith. Each of these principles has a specific meaning. Umoja is to strive for and maintain unity in the family, community, nation, and race. Kujichagulia is to define ourselves, name ourselves, create for ourselves, and speak for ourselves. Ujima is to build and maintain the African American community together, taking on the problems of sisters and brothers and solving them together. Ujamaa is for African Americans to build and maintain their own stores, shops, and other businesses and to profit from them together. Nia is to make African Americans' collective vocation the building and developing of their community in order to restore their people to their traditional greatness. Kuumba is to do always as much as one can, in the way one can, to leave one's community more beautiful and beneficial than one inherited it. Imani is to believe with all one's heart in one's people, parents, teachers, and leaders and in the righteousness and victory of the struggle. As Kwanzaa was created at a difficult time in American history, the principles reflect a need to uplift African Americans. The holiday has been critiqued by some as having an antiwhite sentiment.

The seven symbols of Kwanzaa represent values and concepts reflected in African culture. Mazao, the crops, are symbolic of African harvest celebrations and the rewards of productive and collective labor. Mkeka, the mat, is symbolic of African American tradition and history and, therefore, the foundation on which African Americans build. Kinara, the candle holder, is symbolic of African American roots, their parent people, continental Africans. Muhindi, the corn, is symbolic of African American children and the future they embody. Mishumaa saba, the seven candles, are symbolic of the nguzo saba, the matrix and minimum set of values by which African people are urged to live in order to rescue and reconstruct their lives in their own image and according to their own needs. Kikombe cha umoja, the unity cup, is symbolic of the foundational principle and practice of unity, which makes all else possible. Zawadi, the gifts, are symbolic of the labor and love of parents and the commitments made and kept by children. The two supplemental symbols of Kwanzaa are the bendera, the flag, and the nguzo saba poster, a poster of the seven principles.

The colors of Kwanzaa are black, red, and green. Black is for the African people, red for their struggle, and green for the future and the hope that comes from their struggle. Holiday decorations should include these colors as well as traditional African items, including African baskets and art objects. Traditionally, gifts are given to children and must always include a book and a heritage symbol. During Kwanzaa, participants greet one another with a Swahili greeting, a reflection of African Americans' commitment to the whole of African culture and a way to reinforce awareness of and commitment to the nguzo saba. The greeting is "Habari gani?" The response is aligned with the day of Kwanzaa; for example, on the first day the response would be "Umoja."

Martha L. Morgan
Alliant International University

See Also: African American Families; Slave Families; Social History of American Families 1961 to 1980.

Further Readings
Hamer, L., W. Chen, K. Plasman, S. Sheth, and K. Yamazaki. "Kwanzaa Park: Discerning Principles of Kwanzaa Through Participatory Action Research as a Basis for Culturally Relevant Teaching." Journal of Ethnographic & Qualitative Research, v.7 (2013).
Karenga, M. and T. Karenga. "The Nguzo Saba and the Black Family: Principles and Practices of Well-Being and Flourishing." In Black Families, 4th ed., H. P. McAdoo, ed. Thousand Oaks, CA: Sage, 2007.
Kwanzaa Official Web Site. http://www.officialkwanzaawebsite.org (Accessed November 2013).
McGill, S. "Kwanzaa." Kwanzaa, v.1 (2009).
Pleck, E. "Kwanzaa: The Making of a Black Nationalist Tradition, 1966–1990." Journal of American Ethnic History (2001).

L

Language Brokers
Language brokers are children, adolescents, and youth who act as translators for parents and other adults. Because children often acculturate at a faster rate than most immigrant adults, children are solicited to act as language brokers. They may translate face-to-face interactions in school, at a parent's workplace, or in a doctor's office; conversations on the phone; notes from school; rental agreements, immigration papers, and utility bills; and other items and situations, despite potentially lacking the linguistic or cognitive capacity to convey the intended meaning accurately. Generally, this practice occurs in immigrant families, for individuals who have not mastered the host language or who want to reinforce their understanding of communication in a language in which they lack sufficient proficiency. These youth act as informal translators and are not trained professionals.

Language brokers do more than translate communication verbatim. Because they are untrained as formal translators, they often translate the meaning of the communication rather than just its content. For example, a language broker translating a note from school may say that the teacher wants to meet about a sibling's progress rather than relate literally what the note says. In addition, language brokering may be closer to the notion of "interpreting," as used in the formal translating and interpreting field, in which one strives to convey the structural meaning of the communication rather than a direct translation of the words. However, many researchers would argue that language brokering extends beyond interpretation. For children, language brokering also includes understanding and adhering to typical adult–child power structures, cultural information, and explanations of cultural practices. Some researchers use the notion of "cultural brokering" to define more broadly the tasks in which children are engaged and to reduce the focus on just the translation from one language to another. Other researchers have discussed the importance of language brokering as an avenue for children to provide immigrant parents with the education needed to engage in the tasks necessary to survive in the host country, which has been called procedural brokering. Still others have used terms such as natural translation or para-phrasing (a play on words on the Spanish para, which means "for," as in "for another person"). Regardless, the common term is language brokering, with the understanding that brokering includes conveyance of more than just communication.

On average, language brokering begins in childhood, with some studies indicating that it starts around age 10; some qualitative studies, however, indicate that it begins even earlier. There is mixed evidence that girls and the eldest

may engage in more language brokering than boys or later-borns. Cultural norms and situation-specific circumstances may determine who acts as a language broker within a family. For example, a boy may be brought to the father's workplace, whereas a girl may language broker in a doctor's office. Culturally, the eldest child may have certain household responsibilities that include language brokering, or the child most proficient in English or with a compatible personality may be asked to language broker.

Research has indicated that the experience of language brokering can be positive, negative, or neutral for child language brokers. Greater frequency of language brokering has been associated with greater ethnic identity, heritage value retention, and higher self-esteem and self-efficacy. Frequency of language brokering has also been associated with feeling burdened, feeling stressed, greater acculturative stress, "parentification" or role reversal (i.e., where the youth assumes an adult role in the family), and more problematic relationships with parents. Some evidence also indicates that language brokering may simply be considered an aspect of being and becoming bilingual in a family.

For language brokers, the impact of the experience may differ by family dynamics. In families where parents have strong relationships with youth, children perceive language brokering positively and report that it is beneficial to their well-being. When parents have poorer relationships with their children or there is a large acculturation gap between them, children may view language brokering as burdensome, negative, and stressful. There is growing evidence that how parents frame the language brokering experience shapes how children experience the ongoing task and how it influences their well-being in the future. Cultural aspects may also influence the language brokering experience for children.
First, engaging in language brokering may be an opportunity to practice and reinforce heritage language skills as well as heritage cultural knowledge, given the focused time spent with a parent. Moreover, research indicates that frequency of language brokering is associated with greater family ethnic socialization, wherein parents teach the child about the heritage culture, traditions, and practices. Children who retain more heritage values tend to view language brokering more positively. That is, many non-Western cultures value practices such as helping parents, placing the family above one's own needs, and deferring to authority figures. Language brokers who are more culturally consistent with their parents see language brokering as wholly syntonic with the expected values of the immigrant group. At the same time, child language brokers who are more oriented toward the host culture (e.g., the United States, Israel, or Germany) tend to report language brokering as burdensome, disruptive, or uncomfortable. Language brokers with greater heritage values may also retain proficiency in the heritage language, which makes interacting with parents and others easier and may parallel similar value structures, minimizing the potential for conflict.

Robert S. Weisskirch
California State University, Monterey Bay

See Also: Acculturation; Assimilation; Immigrant Families; Multilingualism.

Further Readings
Morales, Alejandro and William Hanson. "Language Brokering: An Integrative Review of the Literature." Hispanic Journal of Behavioral Sciences, v.27/4 (2005).
Tse, Lucy. "Language Brokering Among Latino Adolescents: Prevalence, Attitudes, and School Performance." Hispanic Journal of Behavioral Sciences, v.17/2 (1995).
Weisskirch, Robert and Sylvia Alva. "Language Brokering and the Acculturation of Latino Children." Hispanic Journal of Behavioral Sciences, v.24/3 (2002).
Weisskirch, Robert, et al. "Cultural Influences on College Student Language Brokers." Cultural Diversity and Ethnic Minority Psychology, v.17/1 (2011).

Later-Life Families
There are two trends underlying population aging. First, people are enjoying longer life spans: gains in medical technology and healthier lifestyles are adding years to human lives. Second, fertility rates have sharply declined, especially in wealthy countries with higher literacy rates, access to birth control, and women's participation in the labor force. The result is what Vern Bengtson and colleagues referred to as the emergence of the beanpole family structure. In contrast to pyramidal intergenerational family structures, which develop when a small number of older persons have many children and grandchildren, families are increasingly likely to have fewer children and grandchildren. Consequently, there are few younger family members to form the bottom of the pyramid.

The United States provides a case study for understanding the impact of increasing life spans and declining fertility rates. There were 40.2 million Americans over the age of 65 in the 2010 census, comprising about 13 percent of the total population. However, the oldest of the baby boomers began celebrating their 65th birthdays in 2011. By 2050, there will be more than 83.7 million Americans over 65 years old, comprising over 22 percent of the total population. Population aging is even more dramatic among the oldest old, the result of increasing life spans from gains in medical technology and healthier lifestyles. Whereas there were slightly more than 5.5 million Americans 85 years old and older in 2010, it is projected that there will be nearly 18 million in 2050. This represents more than a doubling of the percentage of the population comprised of the oldest old, from 1.9 percent in 2010 to 4.5 percent in 2050. Fertility rates in the United States have also declined. According to the U.S. Census, the total fertility rate reached nearly four children per woman at the height of the baby boom in 1957 and has steadily declined to about two today. The result of these trends is that the population as a whole is growing older, requiring increased focus on aging in families and policies.

Spending time with grandchildren is something many people experience in later life. Even those segments of the aging population who do not experience declines in health are affected by changes in their family lives, roles, and obligations.

Late-Life Intimate Partners
Good marriages seem to be beneficial for adults of all ages. People who are married live longer, healthier, and happier lives compared to adults who are divorced or in poor marriages. Older adults also seem to have happier marriages than young and middle-aged adults. That is likely to be partially a selection effect; in other words, those in happier marriages stay in those marriages until late life. People today are marrying later in life, are more likely to forgo marriage altogether, and divorce at higher rates than a few decades ago. These trends suggest that late-life marriages may be happier than in the past because those who are married are married by choice rather than being trapped in unhappy marriages by social norms or by wives' economic dependence on husbands. It may also be that life-course changes such as retirement, launching children, and declines in social networks contribute to the increased importance of and focus on intimate ties with spouses.

Marital satisfaction increases over the life course for both men and women, but men report consistently higher marital satisfaction. This advantage may be attributable to traditional gender roles that favor men. Although women are participating in the labor market at higher rates than in the past, many intimate relationships continue to be inequitable concerning household labor. These roles place women at a disadvantage, as they experience inequities in reciprocal exchange relationships. It is common for those exchanges to become more equitable as couples become interdependent for their well-being, which may be linked to higher marital satisfaction with age.

Retirement is a critical transition for married couples. For most, retirement is a positive transition, allowing couples to spend more time with one another and emphasize their relationship. For some, however, retirement upsets the equilibrium in family systems. For example, some wives who have never been employed report that their

husband's retirement resulted in territorial issues with regard to household tasks. Dual-earner couples may not report gains in marital satisfaction until both spouses have retired.

The transition to widowhood has also been a central focus of study in later-life families. Women are more likely to experience widowhood than men because women live longer than men and are likely to marry men older than themselves. At the same time, adjustment to widowhood can be especially difficult for men: whereas women are likely to have broad social networks to provide them with bereavement support, wives are likely to be the main source of men's emotional support.

It is unlikely that there are more gay and lesbian adults today than in the past few decades. Changing social norms and increased acceptance of gays and lesbians, however, have brought attention to how they negotiate late life. Contrary to negative stereotypes associated with homosexuality, gay and lesbian partners are not at a disadvantage relative to married couples in regard to loneliness or life satisfaction. Gay and lesbian couples do have concerns about how discriminatory social policies limit their ability to participate in spousal benefits available to heterosexual couples. Also, many have hidden their sexuality from friends and family members because of fears associated with homophobia. They are significantly less likely to have children than heterosexual couples. Consequently, they may have smaller social networks than heterosexual couples, and those networks are more likely to consist of same-aged peers. They may also experience both financial and social disadvantages as they grow older. Increasing acceptance of homosexuality and recent changes in laws regarding same-sex marriage are likely to reduce these concerns, but changes in those policies have been slow and uneven.
Moreover, older gay and lesbian couples continue to be affected by the sociohistorical context they experienced earlier in their lives.

Aging and the Need for Family Caregiving
As people grow older, many experience substantive declines in physical and cognitive well-being. Activities of daily living (ADLs) are one way to measure physical functioning and well-being. According to a report by the Centers for Disease Control and Prevention, less than 1 percent of Americans younger than 65 have three or more limitations in ADLs, but 3.2 percent of those older than 65 and nearly 10 percent of those older than 85 do. Dementia is not exclusively a disease of older adults, but it affects older adults at substantively higher rates than younger adults. Brenda Plassman and colleagues estimated that the prevalence of all forms of dementia rises from about 4.7 percent for those 71 to 79 years old to 24.19 percent for those between 80 and 89 years old, and up to 37.36 percent for those older than 90.

As people experience physical and cognitive declines, they need assistance to negotiate the daily needs of life. It is often assumed that younger family members will provide for the needs of their older family members, and they often do. The Centers for Disease Control and Prevention has stated that about 21 percent of households provide caregiving to an adult. Caregiving affects the financial and physical well-being of the caregivers. The value of the free care provided by family caregivers is about $375 billion per year, according to the National Alliance for Caregiving and Evercare. Caregiving is related to increased stress, declines in immune system functioning, and increased risk for depression. It also has significant financial impacts on caregivers: not only do they incur increased expenses in providing care, but 37 percent of them have reduced their hours or quit jobs to fulfill caregiving duties.

The assumption that younger family members are likely to be caregiving providers is often based on the intergenerational solidarity model. This model posits that there are strong social connections between older and younger family members; these cohesive connections provide the basis by which younger family members are motivated to provide caregiving. There is some debate on how these connections form.
One model suggests that intergenerational solidarity is based on altruism: people provide assistance to older kin out of attachment, love, and concern for the well-being of their older family members. Another, the reciprocity or exchange model, holds that caregiving for older family members is provided as payback for assistance the caregiver received earlier in life. A third model suggests that social norms—generalized rules about behavior—motivate caregiving behaviors. Each of these models has varying levels of empirical support. At the same time, each is challenged by trends in modern families.

Families are changing. As already noted, they are often smaller than in the past. They are also increasingly separated by geography, and family members are more likely than in the past to experience divorce and remarriage. All of these family transformations have led some scholars to predict declines in intergenerational exchanges and family caregiving. The empirical evidence has, thus far, belied those expectations. Family members continue to provide caregiving at high rates; they exchange monetary resources and provide social support as family members age. Unfortunately, they sometimes provide these services at the expense of their own physical and mental well-being. Moreover, as people age they develop health needs that younger family members may not have the skills to meet. In those cases, younger family members may seek assistance to provide the care that their older family members need.

Policy Issues Affecting Aging Families
There is little doubt that social policies will be evaluated and realigned to meet the demands of an aging population. This is already evident in debates about funding for Social Security, long-term care, and health care. In an environment of government contraction, debates about funding priorities become particularly salient as older persons and their families face uncertain futures regarding the programs they rely on. Although some suggest shifting responsibilities to the adult children of older persons, studies have generally found that adult children do not displace the assistance of government programs. Adult children often do not have the financial resources to provide for their older family members' financial and health care needs, and they may not have the expertise to provide for health care needs. Also, as noted previously, increasingly large numbers of older persons remain childless and are not able to draw assistance from adult children.
The result is that government programs and policies will continue to play a large role in supporting the basic needs of older persons. Financial security is one of the largest concerns for aging families. Most retirees in the United States currently receive the majority of their income from Social Security. Given the aging of the population and recent economic downturns, concerns have been raised about the financial solvency of Social Security and the ability of the program to provide for the financial well-being of older persons and families in late life. Some suggest that benefits must be cut or new revenues injected into Social Security to ensure the long-term viability of the program. Meanwhile, retirees and near-retirees are anxiously watching these debates, as the outcome will have real consequences for their late-life financial well-being.

Similar debates are occurring with regard to Medicare and Medicaid. As people age, the costs of providing them with the health care they need rise substantially. Persons over 65 years old in the United States are eligible to receive Medicare benefits. Health care costs in general are rising much faster than inflation, and this rise coupled with population aging will substantially increase the burden of health care on families.

Long-term care is expensive, and most of those expenses are paid through Medicaid. One way to save substantial costs in late life is to age in place. Aging in place refers to older persons' ability to maintain independence so that they can stay in their own homes as they age. When provided with supports and social networks, adults are often able to age in place instead of transitioning to long-term care, such as assisted living facilities or nursing homes. Most people will never need long-term care, and fewer people need it today than in recent decades. At the same time, when older persons experience significant declines in physical and cognitive health, staying in their own homes or living with younger family members becomes untenable, and they make the transition to long-term care. Although the percentage of older persons needing long-term care is declining, the increase in the number of older persons more than makes up for that decline, so the absolute number of people needing long-term care is expected to increase. Those who need long-term care are older and sicker than in recent decades.

Conclusion
Populations around the world are aging.
This demographic transition is a dramatic change that will affect aging individuals and their families. It will also require a realignment of social policies to meet their particular needs. Aging is also taking place in a context in which the meaning of families is changing. There is increased emphasis on individualism in families. Gender roles are becoming more equitable, and acceptance of gay and lesbian individuals is increasing. At the same time, those changes have been slow and uneven, and discriminatory attitudes and policies persist. The age of marriage is rising; people are increasingly likely to remain single and more likely to divorce if their marriages are unsatisfactory. The changing sociohistorical context of families, coupled with a rapidly aging population, provides rich fodder for scholarship on aging families.

Timothy S. Killian
University of Arkansas

See Also: Baby Boom Generation; Demographic Changes: Age at First Marriage; Demographic Changes: Divorce Rates.

Further Readings
Bengtson, Vern L. and Ariela Lowenstein. Global Aging and Challenges to Families. Hawthorne, NY: Aldine De Gruyter, 2003.
Coleman, Marilyn and Larry Ganong. Handbook of Contemporary Families: Considering the Past, Contemplating the Future. Thousand Oaks, CA: Sage, 2004.
Connidis, Ingrid Arnet. Family Ties and Aging. Thousand Oaks, CA: Sage, 2001.
U.S. Census Bureau. “2012 National Population Projections: Summary Tables (Table 2).” http://www.census.gov/population/projections/data/national/2012/summarytables.html (Accessed May 29, 2013).

Latino Families
Families are fundamental social structures that shape societies. In the United States, recent decades have witnessed dramatic changes in the structure of the family, including later marriage and childbearing, decreased fertility, increased divorce rates, and, more generally, a shift from traditional families to single-headed families. These changes are paralleled by Latino families as they transition from their places of origin to the United States, as well as through the acculturation of successive generations. Latino families tend to be younger, larger, and poorer than the average American family, with which they share only some cultural similarities. Although Latino families differ from non-Hispanic white families, they also differ among Latino groups in a number of spheres, including demographic, socioeconomic, and cultural aspects. Because of the numerical importance of the Latino population, Latino families are likely to alter or redefine family norms, structures, and relations within the United States.

Historical Context
The term Latino refers to a person of Mexican, Puerto Rican, Cuban, Central or South American, or other Spanish culture or origin. Thus, Latinos may be immigrants or native-born U.S. citizens; they may be of any race and any socioeconomic background. The terminology used to refer to people of Latin American origin or descent has evolved over time. For instance, the U.S. Census Bureau uses the category “Hispanic, Latino, or Spanish origin,” which evolved from an original “Spanish origin” category: the term Hispanic was introduced in the 1980 census, and the term Latino was added in the 2000 census. Further, in spite of the broader classification, many Latinos still identify in terms of nationality, such as Colombian, Dominican, or Salvadoran, and do not necessarily embrace this pan-ethnic identity.

Latino families are socially diverse, as their members differ across countries of origin as well as within the same country, notably by generational status and length of presence in the United States. An overview of Latino families must disaggregate the various national groups to take into account the diversity existing among and within the subgroups, while trying to stress commonalities and differences with the general population. As of 2010, three national-origin groups account for 75.7 percent of the total Latino population in the United States.
The largest group is Mexicans, who represent 63 percent of the total Latino population. Primarily concentrated in the southwest, they had a strong presence in the region even before its annexation to the United States in 1848 and played an instrumental role in establishing and maintaining Latino traditions in the southern states. Puerto Ricans, who mainly settled in the northeast, are the second-largest group and represent 9.2 percent of the Latino population. Their presence in the United States dates back to the early 1900s, and their migration was facilitated by the Jones Act of 1917. However, it was not until the 1950s, when the island became the Commonwealth of Puerto Rico, that the Puerto Rican population in the United States started to grow rapidly as a result of mass emigration driven by difficult economic conditions on the island. Finally, the third-largest group is Cubans, who represent 3.5 percent of the Latino population. Prior to 1958, the number of Cubans living in the United States was limited. After the Cuban revolution in 1959, large numbers of Cubans left the country and immigrated to the United States, especially to the southeast. While the first waves of migration brought predominantly upper-class migrants, subsequent waves, typified by the Mariel Boatlift of 1980, brought mainly economic migrants.

The rapid growth of the Latino population in the United States has increased interest in the group as well as the need to gain a better understanding of Latino families as they adapt to life in America. Although the total Latino population in 1960 was relatively small—representing only 3.5 percent of the total U.S. population, according to the U.S. Census Bureau—it has quadrupled in the last 50 years, making Latinos the fastest-growing group in the United States. With a population of 50.5 million people in 2010, Latinos represent 16 percent of the total population and are expected to reach 30 percent by 2050, making it essential to understand the mechanisms behind the functioning of the 10.3 million Latino families in the United States and their impact on American society.

Demographic Profile
A close examination of the structure of Latino families reveals not only intergroup dissimilarities, notably with non-Hispanic whites, but also great intragroup variation.
Latino families in the United States have distinct features compared to non-Hispanic white families. Overall, they tend to be younger and have a higher fertility rate. Latinos marry at rates as high as non-Hispanic whites but have a lower divorce rate, although more of their households are headed by single mothers. Significant variations can also be found within the subgroups, notably between Cuban and Puerto Rican families, who are often at opposite ends of the spectrum.

With a median age of 27.2 according to the 2010 census, Latinos tend to be much younger than the rest of the U.S. population, whose median age is 37.2. One notable dissimilarity pertains to the difference between the median age of most Latino groups and that of Cubans: while the median age of Mexicans and Puerto Ricans is 25.5 and 27.9 years, respectively, that of Cubans is 40 years. This is due to the particularities of Cuban migration to the United States, which brought older exiles following the Cuban revolution of 1959. The 10-year difference in median age between Latinos and the rest of the population has a great impact on family life, as these younger Latino adults are in their prime childbearing years. This partly explains why Latinos on average have a higher fertility rate (76 births per 1,000 women) than non-Hispanic whites (53 per 1,000). Within the subgroups, this figure is higher for Mexicans (83 per 1,000) than for Puerto Ricans (65 per 1,000) and Cubans (48 per 1,000), who have the lowest fertility rate of all Latino groups. As a corollary, the average family size for Latinos is 3.86, compared to 3.06 for non-Hispanic whites. Similarly, Mexicans have a larger average family size (4.06) than Puerto Ricans (3.39) and Cubans (3.31).

Generally, Latinos are more likely than the rest of the population to live in a family household (78 percent and 66.4 percent, respectively). However, Latino families are slightly less likely than non-Hispanic white families to be headed by a married couple (48.5 percent and 51.6 percent, respectively) and more likely to be headed by a single parent. In 2010, 19.3 percent of Latino family households were headed by a single mother, compared to 12.6 percent of non-Hispanic white households.
This figure is higher for Puerto Ricans (26.1 percent) but lower for Cubans (13.9 percent). In contrast, Latino families seem more stable than their non-Hispanic white counterparts, as the proportion of divorce among Latino families (8.2 percent) is lower than that among non-Hispanic white families (11.3 percent). Within the subgroups, Cuban families have the highest proportion of divorce (13.2 percent), followed by Puerto Rican families (11.2 percent) and finally Mexican families (7.1 percent). However, if married couples who no longer live together and are therefore separated are included, the differences between Latinos and non-Hispanic whites are somewhat reduced: only 1.8 percent of non-Hispanic whites are separated, compared to 3.5 percent of Hispanics.

Socioeconomic Attributes of Latino Families
Latino families are at a socioeconomic disadvantage in comparison with non-Hispanic white families, as they tend to have a lower median income and suffer from a higher rate of poverty. Their family members also tend to have a lower level of education and are less likely to be professionals or entrepreneurs.

On average, Latino children are born into families with a lower level of education than non-Hispanic white families. Indeed, 43.4 percent of Latinos did not graduate from high school, compared to 12.3 percent of non-Hispanic whites. Similarly, fewer Latinos obtained a bachelor's degree (8.9 percent versus 17.7 percent for non-Hispanic whites) or a graduate or professional degree (4.1 percent versus 10.4 percent, respectively). Among the Latino subgroups, Cubans have the highest level of education, while Mexicans have the lowest. Lower educational attainment for Latinos compared to non-Hispanic whites may partly explain the lower income of Latino families.

Although Latinos and non-Hispanic whites are similarly represented in the labor force, they tend to occupy different types of jobs.
Latinos are more likely to work in blue-collar occupations such as construction and maintenance (15.5 percent versus 9.6 percent for non-Hispanic whites) and less likely to be white-collar workers in management (19 percent versus 37.6 percent), although notable variations occur among subgroups. Cubans, whose migration patterns diverge from those of other Latino groups, generally tend to be professionals, as they brought with them social and economic capital that helped ease their economic integration into U.S. society. At the other end, Mexicans tend to do less well than the average Latino, as a majority of them are economic migrants seeking better opportunities in the United States. As a consequence, the median annual income of a Latino family is lower than that of non-Hispanic whites ($41,102 versus $64,818). As anticipated, Cuban families do better financially than other Latino families ($47,929), and Mexican families do slightly less well than the average ($39,264).

The poverty rate is another indicator that illustrates the differences between Latino families and Anglo families. In 2010, 22.2 percent of Latino families lived below the poverty level of $22,314 for a family of four, compared with 8.7 percent of non-Hispanic white families. Among the subgroups, Mexicans and Puerto Ricans are the worst off (24.2 percent), while Cubans do better (13.7 percent).

The Role of Culture in Family Structure
Although Latinos are a heterogeneous group, they share a broadly similar cultural background, which reinforces their sense of commonality and identity and partly explains the above-mentioned dissimilarities between Latino families and non-Hispanic white families. The interest of the Latino family often predominates over the individual interests of its members, a social pattern referred to as “familism.” Latino families rely heavily on extended networks of family members, with whom they interact frequently to seek support and assistance; this is particularly true for newly arrived and undocumented immigrants. Even as Latino families adapt to U.S. society, they remain active in this kin network and still hold a strong desire for geographical closeness with extended family members. Demonstrating attachment to traditions from their country of origin allows these families to maintain a strong sense of collective identity, which they pass on to the next generation. This is reinforced by links with the country of origin through continued immigration to the United States. As of 2010, 37 percent of the Latino population was foreign born (35.5 percent of Mexicans and 58.7 percent of Cubans; Puerto Ricans have a special status, as Puerto Rico is a commonwealth and they have American citizenship). This is exemplified by the maintained use of Spanish within Latino families. According to the 2010 U.S.
census, 75 percent of Latinos reported speaking Spanish at home, with slight variations within the subgroups (73.3 percent of Mexicans, 64.1 percent of Puerto Ricans, and 81.9 percent of Cubans).

Photo: A Latino family reads together.

The patriarchal model is often the norm in Latin America, and it is frequently replicated in Latino families. Gender relations are often determined according to a form of machismo wherein a woman's role and appropriate behavior are clearly defined and usually involve looking after the children and the house while the husband assumes the role of breadwinner for the family. However, constraints imposed by external factors such as financial necessity or other environmental conditions sometimes introduce more flexibility and variance into the relationship between husband and wife and the roles they assume within the family unit; women may have to step in and fulfill duties generally performed by men. Yet processes of acculturation, by which a minority group progressively acquires the values and behavior of the dominant group, increasingly blur the boundaries between Latino family values and mainstream American values and induce cultural change and social adaptation. For the Latino family, this sometimes involves letting go of traditional,

rigid gender-role expectations in favor of more flexible and sometimes egalitarian models wherein male dominance may no longer be the culturally preferred mode. This process is further reinforced through successive generations as they adopt American family values and models. It is especially evident in the second generation within a Latino family, who are usually bilingual and bicultural and provide a bridge between their foreign-born immigrant parents, grandparents, or great-grandparents and American society.

Conclusion
The Latino family is a family in transition, adapting, from one generation to the next, to life in the United States. Although the acculturation process and successive generations are generally redefining gender roles and making some families transition to American family models, Latino families commonly maintain much tighter intra-family links than Anglo families. However, the dissimilarities between Latino families and non-Hispanic white families cannot be exclusively explained by cultural differences. Structural factors, such as economic constraints and the fact that a large proportion of Latinos are immigrants, also explain the differences, as these are immigrant families undergoing a process of adaptation to their host society. The impact of immigration on the family should not be underestimated: it sometimes separates families through voluntary migration and deportation, and it sometimes reunifies families within the United States. More generally, it is difficult to refer to the Latino family as a monolithic construct, as Latino families display great diversity in socioeconomic status, demographic profile, immigration history, and cultural specificities. It is therefore important, when referring to Latino families, to take into account both the unique history and situation of this group as a whole and the subgroup differences that may be hidden by the pan-ethnic “Latino” identity. The Latino family, as a construct, should be taken as a reflection of the diverse human stories that result from varied waves of immigration and the trials of acculturation into U.S. society.

Marie L. Mallet
Harvard University

See Also: Immigrant Families; Immigration Policy; Primary Documents 1960s.

Further Readings
Sabogal, Fabio, et al. “Hispanic Familism and Acculturation: What Changes and What Doesn't.” Hispanic Journal of Behavioral Sciences, v.9/4 (December 1, 1987).
Sarkisian, Natalia, Mariana Gerena, and Naomi Gerstel. “Extended Family Ties Among Mexicans, Puerto Ricans, and Whites: Superintegration or Disintegration?” Family Relations, v.55/3 (July 2006).
Smokowski, Paul R., Roderick Rose, and Martica L. Bacallao.
“Acculturation and Latino Family Processes: How Cultural Involvement, Biculturalism, and Acculturation Gaps Influence Family Dynamics.” Family Relations, v.57/3 (2008).
Zambrana, Ruth E., ed. Understanding Latino Families: Scholarship, Policy, and Practice. Thousand Oaks, CA: Sage, 1995.

Learning Disorders
Specific learning disorder (formerly known as learning disorder or academic skills disorder) is a neurodevelopmental disorder that affects the brain's ability to receive or process information. It features impairments in the academic domains of reading, written expression, and/or mathematics. Individuals with specific learning disorder have average or above-average intelligence. Their learning difficulties are persistent, meaning that progress in learning remains restricted for at least six months despite extra help. Because the skills encompassed in these academic domains are essential to success in other academic subjects and activities, individuals with specific learning disorder often have poor grades, exhibit lower self-esteem, and can be targeted for bullying. Adults with specific learning disorder may avoid activities requiring academic skills or have difficulties in their occupations. Specific learning disorder often becomes apparent in the early years of schooling, as academic demands reveal the individual's impairments.

History
In 1963, Dr. Samuel Kirk defined children with “learning disabilities” as those having difficulty developing skills for social interaction rather than reading, writing, or mathematics difficulties. By 1975, the Education for All Handicapped Children Act used the term specific learning disabilities to encompass difficulties with listening, thinking, speaking, reading, writing, spelling, or doing mathematical calculations. These individuals were identified by a perceived gap between ability and performance that could not be explained by another disability. By the 1980s, educators focused on the effectiveness of reading instruction for all students, especially those with reading impairments. By the 1990s, the Response to Intervention (RtI) movement became popular as a proactive method for identifying students at risk for learning disability.
Once concerns develop about a young child, RtI can be used to monitor the child's progress under different interventions and thereby track the areas of concern and the methods that are most effective. The 2000s ushered in trends such as universal design for learning, in which curricula are made flexible enough that techniques developed for special education are used in the general classroom, benefiting both populations of students. For example, the teacher might
accept various forms of media for an assignment, such as essays, presentations, or graphics. Efforts continue to work toward providing students with specific learning disorder with the least restrictive environment, highly individualized treatment plans, and less marginalization within society.

Diagnostic Criteria
The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) lists four diagnostic criteria for specific learning disorder. First, the individual must have difficulty learning and using academic skills, as indicated by at least one of the following: (1) inaccurate or slow and effortful word reading, (2) difficulty understanding the meaning of what is read, (3) difficulty with spelling, (4) difficulty with written expression, (5) difficulty mastering number sense, number facts, or calculation, or (6) difficulty with mathematical reasoning. Second, the affected academic skills must be substantially and quantifiably below those expected for the individual's chronological age and must significantly interfere with academic or occupational performance or with activities of daily living. Third, the learning difficulties must begin during the school-age years, although they may not fully manifest until the demands for those skills exceed the individual's limited capacities. Last, the learning difficulties must not be better accounted for by other impairments, such as intellectual disability or inadequate educational instruction.

Unexpected academic underachievement is often crucial to first identifying specific learning disorder. There is, however, no natural cutoff at which academic achievement becomes significantly low; any threshold is largely arbitrary. For the greatest diagnostic certainty, the DSM-5 recommends that students score at least 1.5 standard deviations below the population mean (below the seventh percentile) on one or more standardized tests within the academic domains, but it notes that a more lenient threshold may be used when there is other evidence of the disability.
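The equivalence between the 1.5-standard-deviation guideline and the "below the seventh percentile" restatement follows from the normal distribution to which standardized achievement tests are normed. A quick check (an illustrative sketch only; the mean-100, SD-15 scale is a common norming convention assumed here, not part of the DSM-5 text):

```python
from statistics import NormalDist

# Many standardized achievement tests are normed to a mean of 100 and a
# standard deviation of 15 (an assumption for illustration).
z = -1.5                                # 1.5 SDs below the mean
percentile = NormalDist().cdf(z) * 100  # cumulative probability below z, as a percentage
cutoff_score = 100 + z * 15             # equivalent standard score on a 100/15 scale

print(f"z = {z} falls at about the {percentile:.1f}th percentile")
print(f"equivalent standard score: {cutoff_score}")
```

Since the standard normal cumulative probability at z = -1.5 is roughly 0.067, a score 1.5 standard deviations below the mean indeed sits just under the seventh percentile, consistent with the DSM-5's phrasing.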
Therefore, it is necessary that the diagnostic process be comprehensive and include medical, developmental, educational, and family history, as well as a history of the difficulties, the impact of the difficulties, and samples of the individual's academic work. Several psychometric tests have also been developed that can assist with a diagnosis. Commonly used tests include the Wechsler Intelligence Scale for Children (WISC-III), the Woodcock-Johnson Psychoeducational
Battery, the Peabody Individual Achievement Test-Revised (PIAT-R), the California Verbal Learning Test (CVLT), and the Kaufman Test of Educational Achievement (K-TEA). It is also necessary to ensure that the difficulties are not attributable to intellectual disability, global developmental delay, hearing or vision disorders, neurological or motor disorders, or external factors such as economic or environmental disadvantage, absenteeism, or cultural or language differences.

Specific learning disorder varies in severity. Individuals with mild specific learning disorder have difficulties with skills in one or two academic domains but are able to compensate when provided with appropriate accommodations or support services. Individuals with moderate specific learning disorder have marked difficulties learning skills in one or more academic domains and are unlikely to become proficient without intervals of intensive and specialized teaching. Individuals with severe specific learning disorder have severe difficulties learning skills in several academic domains and are unlikely to learn these skills without ongoing intensive, individualized, and specialized teaching, ongoing tutoring, and a classroom aide.

Individuals with impairment in reading, such as dyslexia, may have difficulty with word reading accuracy, reading rate or fluency, or reading comprehension. This is the most common manifestation of specific learning disorder. Individuals with impairment in reading primarily have difficulty with phonological processing, which involves identifying and manipulating phonemes, the individual sounds within morphemes (the smallest meaningful units of language). They may also have difficulty translating visual symbols into the corresponding sounds. Some individuals may also have visualization-comprehension impairments that lead to difficulty visualizing what is read.
For individuals with reading impairments, the increased concentration required during reading often impairs attention, causing mental fatigue and difficulty comprehending or remembering the reading material. Given the importance of reading to success in other school subjects, this difficulty affects a student’s overall academic success and often leads to grade retention, frustration, and overall disillusionment with school. In addition, the orthographies of the individual’s language may pose greater or fewer demands
on the individual. Alphabetic orthographies, in which symbols represent sounds, can be shallow, with a one-to-one ratio of symbols to sounds, or deep, with multiple sounds corresponding to each symbol. Deep orthographies such as English are more difficult to master than shallow orthographies such as Spanish. In turn, logographic orthographies, in which symbols represent morphemes, pose other challenges. Individuals with dyslexia using deep alphabets tend to have slow and inaccurate reading, while those using shallow alphabets or logographs tend to have slow but accurate reading.

Reading impairment has been linked to underactivity in the left superior posterior temporal lobe, specifically the planum temporale, a region important for phonological processing. Heritability of reading impairment may be more than 50 percent, and some genetic investigations have identified possible genes on chromosomes 6 and 15 linked to reading impairment. There are several treatment approaches, such as the Gillingham-Stillman approach, the Fernald-Keller approach, and the Lindamood-Bell reading program, that focus on phonic practice and phonic association with sensory integration or mnemonic strategies.

Individuals with impairment in written expression, such as dysgraphia, may have difficulty with spelling accuracy, grammar and punctuation accuracy, or clarity or organization of written expression. Most impairments in written expression occur because of difficulty translating information from an auditory-oral modality to a visual-written modality. Writing samples of individuals with impairment in written expression often display errors in spelling, punctuation, grammar, and the development of ideas. In some cases these symptoms can be better explained by a motor skills disorder. In addition, cultural differences in storytelling might explain perceived difficulty with the development of ideas. 
There are several treatment plans, including writing in more natural environments, such as a diary, and talking-to-writing progressions, in which the individual first dictates while another person writes, then dictates and writes, and finally thinks and writes independently.

Individuals with impairment in mathematics, such as dyscalculia, may have difficulty with number
sense, memorization of arithmetic facts, accurate or fluent calculation, or accurate math reasoning. Mathematical impairment may be due to deficits in visually organizing mathematical concepts and manipulations or to working memory impairments that can interfere with processing calculations. Individuals with dyscalculia may over-rely on memory and aids, and many treatment plans include rote memorization and practice drills. Treatment plans for impairments in mathematics are often highly individualized because the impairment can have a variety of presentations and learning best occurs when focusing on the individual’s strengths.

Prevalence, Outcomes, and Accommodation

The prevalence of specific learning disorder ranges between 5 percent and 15 percent among school-age children across different cultures and languages. It is more common in males than females by a ratio of 2:1 to 3:1. There is evidence of heritability of specific learning disorder, particularly for dyslexia, as previously mentioned. Prematurity, very low birth weight, and prenatal exposure to nicotine increase the risk for specific learning disorder.

Specific learning disorder commonly co-occurs with neurodevelopmental or other mental disorders, including ADHD, communication disorders, developmental coordination disorder, autism spectrum disorder, anxiety disorders, and depressive and bipolar disorders. These comorbidities can complicate the diagnostic process because these disorders can themselves impair learning and activities of daily living. Clinical judgment is required to determine that the difficulties the individual presents are in fact due to specific learning disorder.

Individuals with specific learning disorder can have a variety of negative outcomes related to school, including poor grades, bullying, and lower rates of postsecondary education. There can also be a risk for behavioral problems as a result of frustration in school. 
The high school dropout rate for children with specific learning disorder is nearly 40 percent. There is also concern that children who are undiagnosed or inadequately treated may not attain functional literacy. In addition to these potential issues, individuals with specific learning disorders have high levels of psychological distress, poorer overall mental health, higher rates of unemployment and underemployment, and lower incomes, as well as an increased risk for suicidal ideation and suicide attempts in children, adolescents, and adults. High
levels of social or emotional support, however, predict better mental health outcomes.

Similar to the DSM-5, the Individuals with Disabilities Education Act (IDEA) of 2004 defines specific learning disability as a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, which may manifest itself in the imperfect ability to listen, think, speak, read, write, spell, or do mathematical calculations. The rules and related laws of IDEA require that schools provide individuals with specific learning disorder with free education, special services, and an individualized education program (IEP). The IEP is developed by special education teachers, psychologists, parents or guardians, others who can provide expertise, and sometimes the individual in question. The IEP must address the academic, developmental, and functional needs of the child. It details the individual’s strengths and weaknesses and determines which treatment plans could be best for the individual. This team meets annually to set goals for the next school year and to assess progress. The IEP determines which services or accommodations the individual receives. Most students with learning disabilities receive special instruction within a general education class or attend a special education class for a portion of the day.

It is important to remember that, as with any disability, there are variations in abilities and functioning among people. Factors such as socioeconomic status, geographical location, and acceptance will greatly affect the services that a person with specific learning disorder receives. Unfortunately, these services are not standardized throughout the United States. People of higher socioeconomic status may receive more or different services (such as tutoring, private lessons, individualized help, or a classroom aide) than someone in a more impoverished situation. 
Similarly, geographical location will affect services in that school districts receive different funding for services and have different philosophies about services for people with disabilities.

Effects of Specific Learning Disorder on the Family

Specific learning disorder introduces additional stress to any family. It can affect parent interactions with the child, marital relationships, sibling relationships, and relationships with extended family. The family unit can be the greatest source
of support for an individual with specific learning disorder, but the challenges associated with a child with specific learning disorder alter the family roles and dynamics. Parents of children with any disability go through processes of recognizing the problem, dealing with a diagnosis, making decisions about treatment, working with the professionals in the child’s life, and becoming an advocate. Having a child receive a diagnosis introduces new demands to marriages as parents confront the aforementioned challenges of specific learning disorder and navigate the available services. Additionally, parents often have different ideas about approaches, treatment, and goals for their child, introducing new sources of conflict and dissonance into the relationship. Several studies have indicated that parents of children with specific learning disorder are more anxious and report more stress than parents of children without the disorder. Finally, the focus the family places on the child’s impairments could inadvertently lead to the child acting as a scapegoat for other family problems and create additional stress for the child.

Sibling relationships are also affected by specific learning disorder. Conflicts can arise from siblings vying for attention as parents focus on the needs of the child with specific learning disorder. Siblings could also react to their sibling’s needs in a variety of ways, including frustration, embarrassment, and guilt.

Rachel T. Beldner
University of Wisconsin–Madison
Janice Elizabeth Jones
Cardinal Stritch University

See Also: Education, Elementary; Problem Child; Society for Research in Child Development.

Further Readings
American Psychiatric Association. DSM-5. http://www.appi.org (Accessed November 2013).
National Center for Learning Disabilities. “What Are Learning Disabilities?” http://www.ncld.org/types-learning-disabilities/what-is-ld/what-are-learning-disabilities (Accessed November 2013).
U.S. Department of Education. 
“Building the Legacy: IDEA 2004.” http://idea.ed.gov (Accessed December 2013).


Leisure Electronics

The average American home contains more than 25 electronic devices. Those devices include televisions, desktops, laptops, tablet computers, iPhones and Android phones, iPods and other music players, e-readers, gaming consoles and handheld gaming systems, and streaming devices. Modern technologies have led to a major transformation in how leisure time is spent. In some households, each family member spends time with his or her own devices, resulting in less leisure time spent in family units. While families still come together for family nights, they are more likely to stream a movie from Netflix or Amazon, watch a Blu-ray on the Sony PlayStation, or play family-oriented games on Nintendo’s Wii than to play board games, engage in a family baseball game, or watch family-friendly television shows on network television. In 2014, Netflix, the most popular source of streamed movies and television shows, reported that it had amassed more than 50 million subscribers worldwide. In 2010, 6 billion mobile phones were in use around the world, and by March 2014, Apple had sold 500 million iPhones.

Some experts have expressed concern about the psychological, social, and physical impact of the ubiquitous nature of leisure electronics as opposed to action-oriented leisure, which has been demonstrated to improve interpersonal relations, reduce stress, and make individuals and families more fit. Other experts insist that the interaction found in social media and other online and gaming communities serves to connect individuals to a world of like-minded individuals without regard to gender, class, or culture. E-mail, social networking, text messaging, and video phones all allow families to stay in close contact, even when members are geographically separated.

Computers and E-Readers

For many American families, the computer is considered a necessity. In 1984, the Census Bureau reported that only 8.2 percent of American households owned a computer. 
By 1998, that number had grown to 42 percent. Two years later, it had climbed to 51 percent. By 2014, 70 percent of American households owned at least one computer. While computer keyboarding was initially a separate class in American schools, it has now been integrated
across disciplines. Parents and grandparents are likely to use computers at work, and senior centers teach computer classes for the elderly. By 2014, more than 1 billion computers around the world were using some version of Microsoft Windows.

Apple also had a major impact on computer use through the traditional Mac and the newer iPad and iPad mini. First introduced in April 2010, the iPad had sold 170 million units by October 2013. iPads are used for business and to access the Internet, watch videos, browse photos, and play games. Despite the iPad’s popularity, its sales accounted for only 36 percent of the total market sale of tablet computers in 2013. Android tablets, which are generally more affordable, accounted for another 62 percent of the 195 million tablets sold that year.

Apple’s iPod has also become a necessity for many American families. Ten million were sold in 2004. Within eight years, that number had jumped to 350 million. The iPod does much more than the MP3 players that had previously revolutionized the music scene, allowing users to play games, read books and magazines, and pursue activities such as learning foreign languages or studying philosophy by downloading apps from Apple’s iTunes store.

E-readers also have had a major impact on leisure activities. While most e-reading is done on devices such as Amazon’s Kindle or Kindle Fire or Barnes and Noble’s Nook, reading apps are also available for computers and a wide range of mobile and handheld devices. Children are attracted to the interactive nature of e-readers, which may allow stories to include sounds, narration, and animation. Apps are also used to teach basic skills to young children. Mothers of small children find that they can entertain cranky children while waiting in line, making a long trip, or visiting a doctor’s office by pulling out a tablet computer or a mobile phone. 
Tablet computers and mobile phones also offer instant access to e-mail, the Internet, games, and video streaming services.

Gaming

By the late 1980s, gaming had become a regular part of leisure time for many American households. Children born during the decade grew up with parents who had grown up playing on early gaming consoles. Instead of putting aside their interests in gaming, those parents introduced their children to video gaming. During the following decade, the
popularity of home computers provided access to the first online gaming sites, and the introduction of Sony’s PlayStation and Microsoft’s Xbox added a new sense of reality to gaming and offered a wider selection of titles designed to appeal to different age groups. Males continued to dominate the gaming world because females, particularly those with families, found that they had less time to spend on gaming. Once females were able to play games on handheld devices such as the iPhone or the Kindle Fire, developers learned that females were more likely to play games that could be played in short spurts. Thus, new generations of children began growing up exposed to a wide range of leisure electronics that allowed them to play games, listen to music, or watch movies at any time.

The major gaming systems owned by American families are the PlayStation, the Xbox, and the Nintendo, and many families own all three. By June 2012, 157.5 million PlayStation 2s and 66.5 million PlayStation 3s had been sold. After the PlayStation 4 launched in November 2013, Sony sold 20.5 million units by April 2014. Microsoft’s Xbox was introduced in 2001, and by 2006, 24 million units had been sold. The Xbox 360 was released in 2005, and 77.2 million units had been sold by April 2013. The Xbox One was introduced in 2013, and within a month, Microsoft had sold 2 million units. While the original Nintendo Wii, introduced in 2006, was considered to be the most family-friendly gaming system of all time, its successor, the Wii U, proved to be a major disappointment. Nintendo announced that it planned to sell 9 million units by the end of 2014, but by March of that year, fewer than 4 million units had been sold.

Elizabeth Rholetter Purdy
Independent Scholar

See Also: Cell Phones; Skype; Technology; Video Games.

Further Readings
Arora, Payal. “Online Social Sites as Virtual Parks: An Investigation Into Leisure Online and Offline.” Information Society, v.27/2 (March/April 2011). 
Elkington, Sam and Sean Gammon. Contemporary Perspectives in Leisure: Meanings, Motives, and Lifelong Learning. Hoboken, NJ: Taylor and Francis, 2013.
Winn, Jillian and Carrie Heeter. “Gaming, Gender, and Time: Who Makes Time to Play?” http://gel.msu.edu/carrie/publications/sex_roles_jillian.pdf (Accessed July 2014).

Leisure Time

Leisure, broadly defined as time not spent asleep, at work, or in pursuit of religious, family, or civic duties, is a value-laden concept. That is, people have definite ethical opinions about its existence and how it should be put into practice. A strong puritanical heritage, for example, meant that any activity not connected directly with work or religious service was frowned on, and even Sabbath times could be restricted to periods of joyless inactivity. Sunday, in fact, was the only day apart from official holidays on which most people could hope to have leisure time of any sort; it was not until well into the 20th century that labor’s struggle for control of the working day eventually brought two-day weekends and a reduction from 12 or 14 hours of work per day to the now-standard eight hours. Children were not exempted from these conditions, and work was so exhausting that the ability to enjoy postwork activities was limited. Even one day off a week was more than could be expected for the majority of slaves who toiled across the country and those employed as domestic labor (servants).

The situation was different for the rich and privileged, who could spend much of their time visiting and entertaining, especially if they were male. Women were likely to have their mobility limited by social pressures and their range of leisure activities limited to music or the practice of handicrafts, which would have the benefit of improving their prospects when it came to finding a marriage partner.

The modern (and recognizable) world of leisure began in the 20th century and was made possible by labor’s wresting of some control over the working day and by the concomitant growth in personal transportation, symbolized in the United States by the Ford Model T automobile. For the first time, members of the working classes could aspire to owning a means of traveling for pleasure individually or as part of a small, self-chosen group. 
This enabled visits to tourist sites and sporting contests, which were emerging in importance, in addition to family and
friends. The ability of women to participate in such activities increased as labor-saving devices such as vacuum cleaners and washing machines began to be widely available. However, while leisure activities became much more widespread, they were not available to everyone, as people in rural areas and the poor were mostly unable to take advantage of these opportunities.

Connectivity with the outside world as a means of creating personal leisure began with newspapers and the radio, which were followed by movie theaters (including drive-in theaters), television, and eventually the Internet. Newspapers and radio offered a limited amount of interactivity in that it was possible to send letters that might be read or to purchase advertised products by mail. The creation of fan clubs, in different forms, encouraged people to feel part of a linked but distanced community. Meanwhile, locally produced theater was supplemented by traveling shows facilitated by the growth in road and rail infrastructure, including circuses, wild west performers, and new generations of snake oil salespeople—that is, itinerant individuals peddling products of dubious efficacy and relying on the anonymity of distance to prevent being uncovered.

The spread of entertainment media to the home via radio, television, and the Internet has caused some people to become couch potatoes—passive recipients of entertainment whose intake of snacks and carbonated beverages (or alcoholic drinks) has contributed significantly to the obesity epidemic threatening society. However, another set of people, presented with entertainment, was inspired to go out into the world and create something new of their own. This is particularly evident in the case of the Internet, in which a substantial proportion of people have become willing to provide content through updating status, writing blogs, and posting photos and the like. 
A smaller subset has been inspired to write their own programs and set up their own companies, just as some members of previous generations were inspired by sports on television to try to become athletes themselves.

Leisure activities vary according to demographic and ethnic characteristics of the population, and, as technology has developed, the ability of people to find and pursue the specific types of leisure activity that they particularly enjoy has increased considerably. Some groups might take their leisure with other members of the same group for cultural reasons,
while others might do so because, like South Asian and Caribbean Americans, they find their interest in cricket is not widely shared in society. When the borders between groups are porous, this can lead to opportunities for fusion and experimentation. An obvious example of this is food. As incomes have risen, eating has for many people become not just a way to spend leisure time but also a means of exploration and self-realization. As a result, food items that once would have been considered exotic and undesirable have now become part of the regular cuisine, and this has led to more interesting lifestyles and a more rewarding standard of living.

With a greater ability for people with specific interests to connect with each other, economies of scale make it possible for commercial industries to support leisure activities, ranging from quilting to raising bonsai trees to tabletop war games. In some cases, what used to be part of the hard daily work of people’s lives has been reinvented as an optional leisure activity for which fees can be charged, including fitness, fishing, and growing vegetables. The monetization of hobby and leisure activities has accompanied the change in the structure of the economy, which now emphasizes services over manufacturing to a much greater extent than ever before.

John Walsh
Shinawatra University

See Also: Games and Play; Internet; Sports; Television.

Further Readings
Braden, Donna R. Leisure and Entertainment in America. Dearborn, MI: Henry Ford Museum and Greenfield Village, 1988.
Cross, Gary S. A Social History of Leisure Since 1600. State College, PA: Venture, 1999.
De Grazia, Sebastian. Of Time, Work and Leisure. New York: Vintage, 1994.

Levittown

Levittown is generally considered the prototypical American suburb, the pioneering postwar development that led the way for all the other huge subdivisions filled with mass-produced tract homes
that would soon encircle cities across the country. In American culture, Levittown is often conceptualized as a single place. However, the Levitt firm actually created three Levittowns: first a New York City suburb on Long Island, then two Philadelphia suburbs—one in Pennsylvania, the other in New Jersey. To the thousands of generally blue-collar new residents who flocked there, the Levittowns represented upward mobility, an affordable opportunity to achieve the American Dream of homeownership and to raise their families in a safe, new community removed from urban problems. For others, from architectural critics and city planners to environmentalists and excluded African Americans, “Levittown” has had a different meaning; it came to symbolize suburban mass conformity, cultural isolation, racism, environmental degradation, and automobile dependency. Regardless of perspective, the impact of the Levittown concept was undeniable.

The Levittowns’ creation occurred in response to new conditions in the United States after World War II. During the Great Depression and the war, residential construction had mostly ceased. With 16 million returning service members, a steeply rising marriage rate, and the resultant baby boom, the country faced a severe housing shortage. The government responded to this urgent need by providing billions of dollars’ worth of mortgage insurance through the Federal Housing Administration (which allowed developers to get payment advances for construction from lenders) and by passing the G.I. Bill. Officially the Servicemen’s Readjustment Act of 1944, it guaranteed service members’ home loans through the Veterans Administration and allowed veterans to make no down payment. Home building and home buying became much easier and safer. Residential construction, previously a small-scale (often individual) enterprise, became big business. 
During the war, Levitt and Sons had become one of the country’s largest building companies through receiving major government contracts for temporary, war-worker housing—experiences that helped patriarch Abraham and sons William and Alfred gain valuable knowledge about how to build basic housing quickly and economically. From 1947 to 1951, they put their skills to use by creating America’s largest privately created housing project on 4,000 acres of Long Island farmland. They used assembly-line methods,
prefabricated and standardized materials, vertical supplier integration, and nonunion labor (performing highly segmented, repetitive tasks) to build 17,500 single-family houses. The company offered two small models at incredibly low prices with several exterior designs. The 750-square-foot, $6,990 Cape Cod (originally intended for rental) had two bedrooms and one bath. The slightly larger Ranch House, starting at $7,990, added amenities such as a carport, fireplace, and built-in television. Eager buyers sometimes waited in line for days to purchase a Levittown house, with 1,400 contracts signed in one day alone. However, that opportunity extended only to whites (at first only white veterans), with racial covenants excluding blacks.

In 1952, the Levitts began building a suburb on farmland in Bucks County, Pennsylvania, near a large, new U.S. Steel factory, where many of its residents would work. (As of 1960, 49 percent of the residents were working class.) Consuming 5,500 acres, it became the biggest single-builder community in the United States. Its 17,300 houses came in seven models (segregated by neighborhood), with class connotations apparent in names such as Budgeteer and Country Clubber. Like its fellow Levittown, it was entirely white.

The Levitt firm soon headed to Burlington County, New Jersey, to create yet another suburban Philadelphia development on agricultural land. On 4,900 acres, they built 11,000 houses between 1958 and 1972. The three- and four-bedroom homes, ranging from $8,900 to $14,500 at the start, came in three models. These types were integrated throughout for diversity of style and income level, but the diversity did not extend to race. One of the development’s first residents was University of Pennsylvania sociologist Herbert Gans, who spent two years conducting participant-observation research. His resultant book, The Levittowners: Ways of Life and Politics in a New Suburban Community, became a landmark work in urban sociology. 
For the Levittowns’ new residents, a house purchase bought a new way of life. For working men, residing in suburbia generally meant new commutes by car. For housewives (as the women typically were), the Levittowns all included retail developments. For children, the family-friendly neighborhoods offered playgrounds, ball fields, pools, and parks, plus new


Levittown Center shopping center in New York’s Long Island, 1957, in the first of three Levittowns created by the Levitt firm. To the thousands of mostly blue-collar new residents who flocked to Levittowns, these communities represented upward mobility and an affordable opportunity to achieve the American Dream and raise their families in a safe, new community removed from urban problems.

schools (often not open at the start). As Gans’s research demonstrated, the Levittowns’ lack of typical institutions provided residents with an opportunity to form new ones based on their desires and lifestyles and to become politically and socially active. New churches sprang up and filled with parishioners, an array of new voluntary organizations flourished, and community-wide activities and events were frequent.

Seeing the Levittowns’ extraordinary success, developers used the Levitts’ methods to create new communities of varying sizes on urban fringes across America. By 1955, over three-fourths of the nation’s new homes were in subdivisions. Central cities quickly began losing population as (usually white) residents fled for a new future in suburbia.

Transition

However, the Levittowns themselves soon faced a period of transition. As families earned more and outgrew their homes’ small, basic floor plans, they expanded and remodeled in large numbers, giving individuality to once identical properties. Their suburbs also became increasingly middle-class. In 1963, partly due to frequent confusion among the three suburbs, New Jersey Levittowners voted to change their community’s name to Willingboro (the rural area’s original name).

By that time, the Levittowns had become a battleground for civil rights. In 1957, Pennsylvania

open-housing advocates helped a white family secretly sell their Levittown house to a black family, leading to violent protests and retributions against them (followed by criminal court convictions of the harassment instigators). In 1959 in New Jersey, two black veterans’ lawsuit against Levitt succeeded, despite multiple appeals, with the state Supreme Court ruling for integration in Levittown. Then, President John F. Kennedy’s 1962 executive order prohibited racial discrimination in new houses created, bought, or financed with federal help.

Even afterward, however, two of the Levittowns remained overwhelmingly white. Per the 2010 census, the 32,841 people in Pennsylvania’s Levittown were 88.7 percent white, 4.5 percent Latino, and 4 percent black. New York’s 51,881 Levittowners were 81.2 percent white, 12.1 percent Latino, 5.4 percent Asian, and 0.4 percent black. However, African Americans did come in large numbers to the former Levittown, Willingboro, seeking to escape increasing drug, crime, and gang problems in Philadelphia and in New Jersey’s nearby urban areas of Camden and Trenton. In 2010, Willingboro Township’s population of 32,841 was 65.5 percent black, 22.9 percent white, and 6.1 percent Latino.

Outside attention came to the three Levittowns again upon their 50th and 60th anniversaries in the 1990s and 2000s. Their history, significance,
and impact were celebrated, analyzed, and critiqued through historical exhibits, documentaries, reunions, oral history efforts, photograph books, and scholarly lectures and panel discussions. As those projects showed, Levittown holds an important place in history—having started development trends that would shape American life in profound ways and change the U.S. landscape forever.

Kelli Shapiro
Texas State University

See Also: Automobiles; Gated Communities; Suburban Families; Urban Families; White Flight.

Further Readings
Ferrer, Margaret Lundrigan. Levittown: The First 50 Years (Images of America). San Francisco: Arcadia Publishing, 1997.
Gans, Herbert J. The Levittowners: Ways of Life and Politics in a New Suburban Community. New York: Columbia University Press, 1982.
Harris, Diane, ed. Second Suburb: Levittown, Pennsylvania. Pittsburgh, PA: University of Pittsburgh Press, 2010.

Life Course Perspective

The life course perspective is a way of looking at family and individual phenomena and transitions that has its origins in the fields of history and sociology, though it has been applied to useful effect in developmental psychology, gerontology, health, and other fields related to family studies. The prime element of the perspective is how it considers multiple dimensions of time. Time can be considered in terms of a person's age (chronological or ontogenetic time); family (family or generational time); and sociohistorical events (historical time). The life course perspective emphasizes that each of these elements of time varies and is important for every event or phenomenon to be studied. Family life course scholars examine the trajectories of lives in family and larger contexts, over time, with an emphasis on how transitions are experienced. Multiple temporal dimensions are the heart of the perspective.


Additionally, the life course perspective acknowledges the importance of several other elements: social structure, process, and, fundamentally, the dynamic interplay of structure, process, and individual and family meanings of events, along with the elements of time, according to V. Bengtson and K. Allen. Given these emphases, it is not surprising that development is viewed in this perspective as heterogeneous—interest lies in the diversity and variation around “normative” events. Several debates concerning this perspective merit attention. The first is whether the life course perspective is a theory or merely a perspective. Depending on how theory is defined, the answer varies. Generally, it is regarded as a perspective because the life course perspective as a whole (rather than specific applications of the perspective to a particular issue, problem, or population) tends not to yield prediction, though it does generate explanations. Another issue is how the life course perspective in family studies is different from, yet related to, the life span perspective. The latter emanates from life span developmental psychology: while it acknowledges individual time and, to some extent, sociohistorical change, it does not attend to generational time. It also tends to de-emphasize the larger social structure, culture, and the meaning of events. Finally, it is important to distinguish the life course perspective from the family development perspective. While the life course perspective does include the concept of family or generational time, its focus is on specific elements of family life, and it typically applies family time to the diverse ways families encounter timing. In contrast, the family development perspective focuses on attempting to identify normative family transitions and emphasizes the homogeneity of particular family stages.
The family development perspective is thus commonly regarded as dated and ill suited to families in all their diversity (e.g., stepfamilies, families with children of divergent ages, multigenerational households), particularly in light of sociostructural location, historical variation, and the tension between individual agency and constraint. The life course perspective, by contrast, has been applied extensively to topics in family studies. Two different research paradigms have been employed in studies



that use the life course perspective: secondary analyses of predominantly large, quantitative data sets, and qualitative interviews gathering narrative life histories. That both paradigms can employ the life course perspective is rare among theories or perspectives in family studies. It speaks to the power of the framework, but also to the difficulty of incorporating all the dimensions of the perspective in a single research study or even a research career.

Analysis Examples
An example of the secondary analysis of data is J. Modell, F. Furstenberg, and T. Hershberg's classic analysis of adolescents and youth across different historical epochs in the United States. Their conclusion contradicted common wisdom: they documented that youth's transitions to adulthood had become easier, in the sense that transitions into adult roles and responsibilities occurred in a more orderly fashion at the end of the 20th century than in previous times (mostly due to schooling). The life course perspective suggests that recent historical events such as the Great Recession and the jobless recovery may likewise shape the life courses of today's youth and emerging adults. One of the first, and still most valuable, life course studies was undertaken by Glen H. Elder, Jr., and reported in his classic book Children of the Great Depression (1974). He used archival data that had been collected over time from children in two birth cohorts who grew up in either a predominantly working-class or a predominantly middle-class city. These children were slightly different ages when the Great Depression started, and then, as adolescents or young adults during World War II, were in different positions to benefit from the G.I. Bill and the expanding postwar labor market. Elder's body of work demonstrated how sociohistorical events contribute to shaping opportunities, in interplay with other systems such as social class and gender.
His work includes multiple generations, gathering data from the children and grandchildren of the Great Depression’s children. In subsequent journal articles, he presented multiple generational influences of the experiences of deprivation, opportunity, family caregiving, and many other aspects of individual and family life and well-being.

Another example of using historical data and illustrating a life course perspective comes from the work of Tamara Hareven, a family historian who combined detailed employment records from the Amoskeag Mill in Manchester, New Hampshire (spanning the beginnings of the Industrial Revolution to the mill's closing) with birth, marriage, and death records, as well as other historical documentation, to trace individual and family (including extended kin) transitions. One notable finding was that, contrary to assumptions about how industrialization broke kinship ties, families were in fact instrumental and active agents in facilitating industrialization. Kin recruited one another for the mill and worked alongside each other, often mentoring each other and housing each other. Relatedly, in times of labor oversupply, such as when mill production decreased, the kin in the rural areas from which workers came absorbed millworkers back into rural life; these rural kin also functioned as supports should unanticipated family events (such as out-of-wedlock pregnancies) occur in the city. Her rich data provide vivid examples of how the life course perspective enriches historical analysis. A prime example of narrative life histories employing the life course perspective comes from the early work of Katherine Allen, who studied never-married adult women's life courses. She found, for example, that these women would have been seen as without families or as “phantoms” using other family theories with narrow definitions of family (especially those that conflate family with household); however, these women's life histories typically included deep obligations to family caregiving occurring at the time in their life courses when their age peers were making romantic and marital commitments.
Her current work on elderly women with histories of gynecological cancers similarly illustrates women embedded in family ties of care and reciprocity, shaped by historical forces, and shows how events and transitions forgone earlier in life influence later life. The life course concepts of linked lives and of family ties beyond biological children are central to these inquiries. The work of two other life course scholars veers more toward the sociological. Phyllis Moen has focused on the connections between paid work and family life. Her Cornell Couples Study illuminated the forces of gender and of paid and unpaid work, and how couples make decisions that change over




time, foregrounding the notions of trajectories and careers. Much of her work focuses on couples. Vern Bengtson, via his USC3G study, examined multiple generations of individuals, with the first generation being subscribers to a health care plan in California. His work demonstrates the creative interplay of secondary records collected over time with interviewing, history, and trajectories, illustrating agency and change in social norms across generations. Life course perspectives thus demonstrate the interactions of age, period, and cohort. Anisa Zvonkovic Virginia Polytechnic Institute and State University See Also: Ecological Theory; Family Development Theory; Generation Gap; Industrial Revolution Families; Middle Class Families; Symbolic Interaction Theory; Systems Theory. Further Readings Allen, K. R. Single Women, Family Ties. Thousand Oaks, CA: Sage, 1989. Bengtson, V. L. “The Problems of Generations: Age Group Contrasts, Continuities, and Change.” In The Course of Later Life: Research and Reflections, V. L. Bengtson and K. W. Schaie, eds. New York: Springer, 1989. Bengtson, V. L. and K. R. Allen. “The Life Course Perspective Applied to Families Over Time.” In Sourcebook of Family Theories and Methods: A Contextual Approach, P. G. Boss, W. J. Doherty, R. LaRossa, W. R. Schumm, and S. K. Steinmetz, eds. New York: Plenum, 1993. Elder, G. H., Jr. Children of the Great Depression. Chicago: University of Chicago Press, 1974. Hareven, T. Family Time and Industrial Time. Cambridge: Cambridge University Press, 1982. Modell, J., F. Furstenberg, and T. Hershberg. “Social Change and the Transition to Adulthood in Historical Perspective.” Journal of Family History, v.1 (1976). Moen, P., ed. It's About Time: Couples and Careers. Ithaca, NY: Cornell University Press, 2003. Zvonkovic, A. M., M. L. Notter, and C. L. Peters. “Family Studies: Situating Everyday Family Life at Work, in Time and Across Contexts.” In The Work and Family Handbook, E.
Kossek, S. Sweet, and M. Pitt-Catsouphes, eds. New York: Erlbaum, 2006.


Living Apart Together

Modern family life continues to evolve. Families are more often being negotiated across households and outside the confines of legality. For couples, new ways of pair bonding advance the understanding of what it means to be a couple, how couple relationships are constructed and maintained, and how new ways of partnering affect overall family functioning. Although a cohabiting relationship—married or otherwise—is the most common living arrangement for American couples in committed long-term relationships, another approach to “doing” relationships is gaining more visibility—living apart together (LAT). In the United States, approximately 6 percent of men and 7 percent of women age 23 and older reported in the 1996 and 1998 General Social Surveys that they were involved in an LAT relationship. In these relationships, couples live separately but define themselves as committed couples. Percentages of LAT relationships are higher in several European countries, where LAT is a well-established concept and perceptions of relationship formation are less conventional than in the United States. The term LAT was first used in 1978 to describe couples living apart together in the Netherlands—the word lat in the Dutch language means “stick.” In France, cohabitation intermittente is used to describe LAT couples, and in the Scandinavian countries the term sarbo is used (sar means apart and bo means live). In the United States there are no agreed-on terms to label or describe this relationship structure, and registration of LATs in any official statistics is limited.

Measurement and Rationale for Living Apart Together
The measurement and conceptualization of LAT varies, with some researchers defining LAT as exclusive to unmarried partners, whereas others include both married and unmarried partners so long as they reside in separate households and view themselves as a committed couple.
Much of this debate centers on the reasons why partners are in an LAT relationship, whether or not the relationship is constraint motivated or choice motivated. Some researchers see LAT relationships as a new family form in which couples live apart together by choice as a way of experiencing the intimacy and satisfaction associated with being in a romantic



relationship while retaining their autonomy and independence. This is often referred to as a “both/and solution” to partnering. Others posit that living apart together is pursued by individuals based on a range of constraints that prevent (or interrupt) the couple from cohabiting. These constraints most often involve the job market (e.g., commuter marriages), educational needs, or parenting/caregiving responsibilities. For unmarried partners, constraints against living together may also be due to religious convictions against cohabitation outside marriage. In this sense, LAT represents a cautious or conservative approach to partnership and is just another stage along the partnering continuum that begins with courtship and ends with cohabitation or marriage. In reality, LAT is a relationship couples pursue for a multitude of reasons, including both constraints and personal preferences. In addition, LAT partners vary in their degree of certainty and ambivalence about being in an LAT relationship; while some couples are resolute about their choice, many others are ambivalent or view their LAT relationship as temporary. Those who cite ideological reasons (e.g., independence and autonomy) as their primary reasons for LAT represent a minority of those living apart, compared to those who cite necessity or convenience as the main reasons for being in an LAT partnership.

Age Differences in Living Apart Together
Although reasons for living apart together are complex, the data on LAT couples to date do suggest that these reasons vary by age. When older adults are included in this research, the desire to live apart together to maintain independence or autonomy is more often noted than are reasons of necessity.
Younger couples tend to report that they are LAT because it is “too early” in their relationship to cohabit or marry, or because of constraints, whether financial (e.g., feared loss of benefits, costs associated with moving) or situational (e.g., employment, institutionalization). Nonetheless, these age differences reflect trends rather than absolute categorizations.

Gender Differences in Living Apart Together
Some scholars argue that gender differences may also exist regarding the rationale for LAT. These differences appear more pronounced among older LAT couples. In qualitative research, previously

married LAT women said they live apart from their romantic partners because they wish to eschew the demands of traditional marriage, such as the obligation to provide care for a spouse or gender inequality in the division of household work. Researchers argue that these findings suggest that for older LAT couples, women more so than men may be the ones driving the decision-making process surrounding the establishment of such unions.

Union Status Differences
In addition to understanding what factors motivate couples to live apart together and identifying within-group differences regarding these motivations, researchers have also compared LAT couples to those married or cohabiting to understand group differences in age, education, race, and attitudes about work, individualism, and gender roles. Both in the United States and abroad, LAT couples tend to be younger than married couples and are more likely to have a college degree compared to cohabiting couples (for men and women) and married couples (for women). Both men and women in LAT relationships are more ethnically diverse than individuals in married relationships. In sum, individuals in LAT relationships more closely resemble never-married single people in terms of age, education, and racial composition. In terms of attitudinal differences, LAT partners are more work oriented, individualistic, and egalitarian than their married counterparts. Overall, life stage and the nature of the relationship influence people's reasons for LAT. Primary reasons for LAT may run contrary to secondary reasons, as few LAT partners have an unambiguous preference for LAT over cohabitation or marriage. Although LAT partners tend to express more liberal views about work and family life, without proper longitudinal data there is no way to ascertain whether these views are cause or consequence of being in an LAT relationship.
The study of LAT relationships is in its infancy and much understanding remains to be discovered regarding the meaning LAT has on the individual, family, and society as a whole. Jacquelyn J. Benson University of Missouri See Also: Cohabitation; Dating; Individualism; Living Together Apart.

Further Readings Benson, Jacquelyn. From Living Apart, To Living Apart Together: Older Adults Developing a Preference for LAT. Ph.D. dissertation, University of Missouri, 2013. De Jong Gierveld, Jenny. “Remarriage, Unmarried Cohabitation, Living Apart Together: Partner Relationships Following Bereavement or Divorce.” Journal of Marriage and Family, v.66 (2004). Duncan, Simon, Julia Carter, Miranda Phillips, Sasha Roseneil, and Mariya Stoilova. “Why Do People Live Apart Together?” Families, Relationships, and Societies, v.2 (2013). Haskey, John and Jane Lewis. “Living Apart Together in Britain: Context and Meaning.” International Journal of Law and Context, v.2 (2006). Karlsson, Sofie and Klas Borell. “Intimacy and Autonomy, Gender and Ageing: Living Apart Together.” Ageing International, v.27 (2002). Levin, Irene. “Living Apart Together: A New Family Form.” Current Sociology, v.52 (2004). Strohm, Charles, Judith Seltzer, Susan Cochran, and Vickie Mays. “‘Living Apart Together’ Relationships in the United States.” Demographic Research, v.21 (2009). Upton-Davis, Karen. “Subverting Gendered Norms of Cohabitation: Living Apart Together for Women Over 45.” Journal of Gender Studies (November 21, 2013). http://www.tandfonline.com/eprint/NbpaRga ZXx72MIU7QDVI/full (Accessed April 2014).

Living Together Apart In recent decades demographic shifts related to family formation—declining marriage rates, increasing cohabitation, and increasing numbers of nonmarital births—have led many family scholars to extend a greater research focus on the unconventionality of family life. As a result, new ways of forming conjugal relationships and constructing families have come to light. For example, it has recently been revealed that some disillusioned couples are electing to live together apart (LTA) as a way of organizing and negotiating their family lives, primarily in the context of parenting. LTA describes partners—married or unmarried—who continue to live together while considering themselves to be separated. Though most LTA families



include children, some do not. In addition, some LTA partners may remain legally married after forming an LTA family, others may divorce, and still others may have never married but maintain their status as a cohabiting couple, albeit estranged. Although the empirical research available on this topic is limited and the rate of LTA families is currently unquantifiable, early qualitative analysis suggests LTA families are a heterogeneous group regarding the reasons why they live together apart and the ways this arrangement is created.

Reasons to Live Together Apart
Within the United States, the LTA family phenomenon has been examined only among low-income, unmarried parents of childbearing age. The media, conversely, has more extensively documented how couples are living together apart as a solution to the various dilemmas that couples face after relationship dissolution. Although U.S. researchers have focused explicitly on studying low-income families who are LTA, journalists and researchers from other countries (e.g., France, Canada) have explored this atypical family situation with a stronger focus on middle-class families. Taken together, the findings from these qualitative studies and journalistic enterprises uncover several reasons why couples decide to live together apart, including concerns that one or both partners may have regarding housing needs and financial stability, child rearing and maintaining parent-child relationships, maintaining social legitimacy, and fear of loneliness.

Housing Needs and Financial Stability
Much of the media's attention on LTA families stems from the 2008 economic crisis, which had a significant impact on the housing and job markets. According to several news outlets across the country, the financial crisis caused many “involuntary or forced cohabitation” arrangements between estranged couples who could not afford a divorce and/or could not manage their household bills alone.
Research comparing LTA families in the United States and France demonstrated that fear of financial hardship was a strong factor in the decision to live together apart, and this finding was prevalent among low-income and middle-class couples alike. Based on reports from women in low-income LTA families, without the combined financial contributions from both



partners to maintain the household, the prospect of homelessness was a real concern, primarily for their male ex-partners. Most of these women instituted a “pay and stay” rule for their former partners—even modest contributions were acceptable, particularly because these women were resolute about wanting to keep their ex-partners involved as fathers.

Parenting Dimension
Aside from financial concerns, partners' primary reason for living together apart is to maintain the parenting role. Across the socioeconomic spectrum it appears that LTA families are constructed as a way to share parenting responsibilities and optimally maintain parent–child relationships within the context of an estranged partnership. For some LTA families, coparenting needs and financial insecurity are linked. The research to date on low-income LTA families suggests that due to severe financial constraints, mothers may allow the estranged ex-partner to remain in the home so long as the ex-partner performs housework and provides child care that would be otherwise unaffordable for these working mothers. Thus, in this context the provision of parenting is also part of the “pay and stay” rule. For resource-secure LTA families, the parenting dimension as it relates to reasons for living together apart is less about sharing child care responsibilities and more about maintaining parent–child bonds. This is not to say parenting ties are unimportant to low-income mothers; however, the fact that low-income women in LTA families are struggling to remain financially afloat and provide for their children may preclude them from being able to completely disentangle themselves from their partner out of economic need, even if the relationship is riddled with violence and abuse. In these situations, the negative repercussions of continuing to co-reside with one's ex-partner are noteworthy, demonstrating that motivations to live together apart are often driven by the fear of suffering a worse economic fate.
Fear related in other ways to the parenting dimension may be a motivator as well. Low-income mothers have reported that living together apart with an abusive ex-partner was necessary not only due to fear of not being able to provide financially for their children but also because they feared their children would suffer negative consequences growing up fatherless. Some LTA fathers echoed this concern. Interviews with middle-income LTA fathers

on the brink of a contentious divorce or split from their partners reveal that they are scared of leaving and having their children “taken away” from them in retaliation. These LTA fathers describe ongoing tension and turmoil within the family despite having stayed—noting retrospectively that although their children were able to grow up with both their parents present in the home, they did so in the midst of a “marital battlefield,” which these fathers perceived caused considerable damage to their children. They believed the LTA family configuration they created with their ex-partners was a mistake, and their children would have been better off if they had only had the courage to break up. In sum, constructing an LTA family for reasons related to parenting is multifaceted. Paired with the divorce literature, the interpretation gleaned from the initial research on LTA families suggests that when fear motivates partners in abusive or high-conflict homes to stay together for the sake of the children, the consequences may be dire. Conversely, when partners in noncontentious relationships—capable of mutual respect and cooperative coparenting—construct an LTA family, the results may have benefits over the alternative of establishing two separate homes and shuffling children back and forth.

Social Legitimacy
Some couples in LTA families have also cited the desire to maintain social legitimacy as a rationale for constructing an LTA family. In these cases, LTA partners wished to be viewed by other families, friends, and neighbors as a conventional family. These LTA partners believe that living together presents an image of conjugal life and respectability to the outside world. Preliminary research on LTA families suggests that living together apart may be a strategy pursued by individuals who place considerable value on traditional family life as a way of avoiding social disapproval.
Fear of Loneliness
Although fear of adversely affecting children and fear of financial hardship are prominent motivators for living together apart, the fear of loneliness is also relevant. When children have left the home, coparenting needs, maintaining parent–child bonds, and the financial constraints associated with the parenting role become less germane to decision making about




LTA. For LTA empty nesters, the status of their relationship and the decision to remain LTA are ambiguous, and feelings of ambivalence about being alone arise as their LTA configuration is reconsidered. The idea of discontinuing the LTA arrangement after so many years of cohabitation—although veiled in estrangement—can seem like a radical departure from the life they have grown accustomed to, no matter their feelings of dissatisfaction.

Ways of Living Together Apart: Physical Boundaries
LTA families configure their living arrangements in several ways. Financial resources have much to do with the level of separation a couple is able to achieve within the same housing structure. Some LTA couples continue to share the same home they lived in prior to the breakup, inhabiting separate bedrooms and sharing the remaining living quarters. To afford more physical separation, other LTA families have reported remodeling their current homes by adding additional kitchens, bathrooms, and/or exit doors so as to create an upstairs or downstairs apartment independent of the remainder of the house. Still other LTA partners have reported selling their original family home to jointly purchase a multifamily home that is already configured into separate living spaces.

Ways of Living Together Apart: Emotional Boundaries
LTA partners vary in their level of emotional separation, and some continue to have a sexual relationship. Among those who are married, legal divorce is not pursued in every circumstance. Similar to postdivorce families, some LTA partners interact and communicate on a daily basis and do so with respect and accord. They may include one another in family meals, go on vacation together as a family, and spend holidays together. These couples still see one another as friends, and they rely on each other for emotional support from time to time.
Conversely, other LTA partners may choose to communicate with one another as little as possible, sometimes communicating through their children or almost exclusively in writing. These relationships appear to be more volatile, and partners try to avoid one another as much as possible. Regardless of their differences, they have strong convictions about keeping their family together, whether for the perceived sake of the children, because of financial


constraints, to maintain social legitimacy, or because they fear suffering a worse fate upon separation. Jacquelyn J. Benson University of Missouri See Also: Cohabitation; Coparenting; Divorce and Separation; Stepfamilies; Stepparenting. Further Readings Cochran, Cate. Reconcilable Differences: Marriages End, Families Don’t. Toronto: Second Story Press, 2008. Cross-Barnet, Caitlin, Andrew Cherlin, and Linda Burton. “Bound by Children: Intermittent Cohabitation and Living Together Apart.” Family Relations, v.60 (2011). Martin, Claude, Andrew Cherlin, and Caitlin CrossBarnet. “Living Together Apart in France and the United States.” Population-E, v.66/3–4 (2011). Roy, Kevin M., Nicolle Buckmiller, and April McDowell. “Together but Not ‘Together’: Trajectories of Relationship Suspension for Low-Income Unmarried Parents.” Family Relations, v.57 (2008).

Living Wage A “living wage” is the concept that working people should earn an income that provides them and their families with a basic standard of living, a decent level of dignity, and opportunities for selfsufficiency and participation in the civic life of their society. The living wage concept was the initial justification for the passage of minimum wage laws in the 1930s when Franklin Delano Roosevelt (FDR) was leading the country. Since 1968, when the minimum wage was at its highest (adjusting for inflation), U.S. workers have endured a greater than 40 percent decline in the absolute value of the minimum wage. Even more striking is that a minimum-wage worker with a family of three in 1968 earned 20 percent above the federal poverty line. Today, that same worker’s earnings are roughly 30 percent below the federal poverty line. These workers, who number today more than 10 million, can be classified as the “working poor.” The reductions in the value of minimum wage work in the United States have occurred in the



context of increasing worker productivity. During a time when productivity has soared, low-wage workers’ economic dignity and buying power have been diminished. As such, modern living wage movements seek to raise minimum wages, usually at a local level, as a rationalized response to a free market economy. Since 1994, approximately 140 living wage ordinances have been passed by cities, counties, townships, and universities. The first federal minimum wage was passed in 1938 as part of the Fair Labor Standards Act (FLSA). The most recent raise to the federal minimum wage was in 2009, making it $7.25 per hour for employees covered by the FLSA, although state-level minimum wages vary. About 30 years after the first federal minimum wage was passed, the first official poverty “thresholds” were adopted, calculated by estimating the cost of an “economy” supply of food multiplied by three. This is the same procedure used today, with adjustments only for the consumer price index. Noting the insufficiency of the poverty threshold calculation, numerous agencies and organizations have developed instruments to calculate the real wage rate needed to afford a basic standard of living, including the Self-Sufficiency Standard (SSS) developed by the Wider Opportunities for Women (WOW) workforce advocacy organization; the Basic Needs Budget Calculator (BNBC), developed by the National Center for Children in Poverty; the Basic Family Budget Calculator, developed by the Economic Policy Institute; and the National Low Income Housing Coalition’s (NLIHC) yearly Housing Wage. For example, the NLIHC’s Housing Wage is calculated so that housing costs do not exceed 30 percent of a family’s income. In 2013, the NLIHC estimated that, to rent a unit costing $900 per month, a working person would need to earn $17.31 per hour, or $36,000 annually. At the current minimum wage of $7.25, a person would need to work 95 hours per week to afford this modest rental unit. 
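The NLIHC Housing Wage arithmetic described above can be sketched as a short calculation. This is a minimal illustration using only the 2013 figures cited in the text; the function names and the full-time assumption of 40 hours per week for 52 weeks are ours, not NLIHC's published methodology.

```python
# Sketch of the Housing Wage arithmetic: rent should not exceed
# 30 percent of gross income, assuming full-time work of
# 40 hours/week for 52 weeks/year (2,080 hours).
FULL_TIME_HOURS_PER_YEAR = 40 * 52
HOUSING_SHARE = 0.30

def housing_wage(monthly_rent):
    """Hourly wage needed so rent stays within 30% of income."""
    required_annual_income = (monthly_rent * 12) / HOUSING_SHARE
    return required_annual_income / FULL_TIME_HOURS_PER_YEAR

def hours_needed_per_week(monthly_rent, hourly_wage):
    """Weekly hours needed at a given wage to afford the rent."""
    required_weekly_income = (monthly_rent * 12) / HOUSING_SHARE / 52
    return required_weekly_income / hourly_wage

# The $900/month unit from the 2013 example in the text:
print(round(housing_wage(900), 2))              # 17.31 per hour
print(round(hours_needed_per_week(900, 7.25)))  # 95 hours/week at $7.25
```

Working the numbers backward reproduces the figures in the text: $900 × 12 = $10,800 in annual rent, which is 30 percent of a $36,000 income, or $17.31 per hour over a full-time year.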
Resistance
Despite the commonsense logic of living wage movements—that working people deserve, and have earned the right, to be able to house, feed, and clothe their families and provide them with child care, transportation, and medical care—there are several arguments against passing such legislation.

First, some believe that instead of raising the minimum wage, it should be eliminated. The idea here is
that government intervention is against the precepts of the free market economy. The argument against this is that, historically, government intervention has been necessary to eliminate business practices such as slavery and child labor. It is further argued that today the free market economy drives wages down to subsistence levels and fosters monopolies that primarily serve the wealthy and their stakeholders. This view holds that it is no accident that shares of income and wealth in the United States continuously grow more disproportionate, favoring the upper classes. Proponents of living wage legislation are not suggesting that wealth and income become equally distributed. Rather, given a living wage floor, the free market economy should continue to determine the distribution of wealth and income.

Other common arguments against passing living wage legislation rest on the claim that living wage laws create negative unintended consequences that ultimately harm the working poor they aim to assist. These arguments generally revolve around the idea that living wage laws reduce the number of jobs available to low-wage workers and hinder the competitiveness of businesses. In essence, those opposed to living wage laws suggest they are not viable in the United States over the long term. The counterargument is that since most living wage legislation deals with wages and benefits paid to workers employed under contract to provide goods and services to the government, the cost of the living wage rate is diffused among all taxpayers.
Furthermore, proponents of living wage ordinances point to social and economic research showing that living wage laws are effective at reducing poverty among the working poor, that their costs do not seriously burden business, and that they do not have the negative employment effects critics predict; indeed, they are predicted to spur greater efficiency and technological change, increasing the nation’s wealth. In short, this research suggests that there is a wage rate that allows a decent livelihood for all working people without creating excessive cost burdens for businesses. For example, in a study published in 2001 estimating the magnitude of health improvements resulting from a proposed living wage ordinance in San Francisco, R. Bhatia and M. Katz found that adopting the living wage proposed at that time ($11 per hour) would decrease rates of premature death, days sick in bed, limitations in work and daily living, and depressive symptoms. For the children of
full-time workers affected by the legislation, the living wage increase predicted an increase in the number of years of completed education, increased odds of completing high school, and a reduced risk of early childbirth. Living wage proponents point out how such a small economic initiative could have such positive and wide-ranging effects in the real lives of working people.

Costs
Rates of labor force participation in the United States have continued to fall since the start of the recession in 2008. Although a number of factors are thought to drive this trend, many believe it is primarily due to the lack of higher-paying jobs in the economy. For example, the Occupational Outlook Handbook, published by the U.S. Bureau of Labor Statistics in 2012, states that “personal care aides” and “home health aides” are the two fastest-growing occupations for the period 2010 to 2020. The median pay for these occupations is $19,640 and $20,560, respectively. Furthermore, since the recession, the greatest increases in employment opportunities have been in low-paying service-sector jobs, such as those in restaurants and retail. In many cases, these jobs do not compensate for even the cost of child care, leaving many without adequate motivation to join, or rejoin, the workforce.

Individuals working in these low-paying service-sector jobs are the most likely to be classified by the U.S. Bureau of Labor Statistics as the working poor. Many, even noneconomists, find this troubling, because the economy is based on consumerism, and people cannot consume if they do not have the financial resources to do so. In this view, the future of the nation’s economic health depends on protecting and fostering an environment in which working- and middle-class buying power is reestablished without the need for wage earners to go into debt to support a basic standard of living. Living wage proponents argue that to do this, the country needs to pass living wage legislation.
Typically, living wage legislation is successfully passed in communities where poverty levels are high, there is little slack in the local labor market, local union and religious groups are strong, and community members are Democratic or socially oriented. In any community seeking to develop living wage legislation, two major issues need to be addressed: (1) What wage rate provides workers
and their families with a basic standard of living, a decent level of dignity, and opportunities for self-sufficiency and participation in the civic life of their society? (2) How high can a minimum wage rate be set before it creates excessive cost burdens for businesses? Living wage scholars have suggested that living wages should rest on an evolving definition of what having a decent livelihood means, and that raises should be provided to working people in line with average U.S. productivity and inflation, after taking account of employment effects.

In 1995, the National Academy of Sciences’ (NAS) Panel on Poverty released a report summarizing what official measures of poverty should include. The panel determined that these measures needed to include basic budgets for food, clothing, shelter, utilities, household supplies, and personal care. It stated that reference data should be based on a family of four—two adults and two children—which should be adjusted for geographic area, work-related expenses, child care, transportation, and out-of-pocket medical expenses. As many have argued, if it is possible to develop a poverty threshold, it should also be possible to agree on a living wage benchmark. The calculations from the NAS Panel on Poverty are suggested as a starting point in determining what wage rate constitutes a living wage.

Allyson Drinkard
Kent State University at Stark
Leonard N. Drinkard II
U.S. Department of Labor

See Also: Fair Labor Standards Act; Minimum Wage; Poverty and Poor Families; Poverty Line; Standard of Living; Working-Class Families/Working Poor.

Further Readings
Adams, S. and D. Neumark. “Living Wage Effects: New and Improved Evidence.” Economic Development Quarterly, v.19/1 (2005).
Altman, M. “The Living Wage, Economic Efficiency, and Socio-Economic Wellbeing in a Competitive Market Economy.” Forum for Social Economics, v.41/2–3 (2012).
Bhatia, R. and M. Katz.
“Estimation of Health Benefits From a Local Living Wage Ordinance.” American Journal of Public Health, v.91/9 (2001).
Casselman, B. “Five Takeaways From August Jobs Report.” Wall Street Journal. http://www.blogs.wsj.com/economics (Accessed September 2013).
Cronin, B. and B. Casselman. “Labor Recovery Leaves More Workers Behind.” http://www.online.wsj.com/article (Accessed September 6, 2013).
Glickman, Lawrence B. A Living Wage: American Workers and the Making of Consumer Society. Ithaca, NY: Cornell University Press, 2009.
Heller Clain, S. “Explaining the Passage of Living Wage Legislation in the U.S.” Atlantic Economic Journal, v.40 (2012).
Izzo, P. “Unemployment Rate Drops for Wrong Reasons.” http://www.blogs.wsj.com/economics (Accessed September 2013).
Morath, E. “Why Is U.S. Work Force Shrinking?” Wall Street Journal. http://www.blogs.wsj.com/economics (Accessed September 2013).
Pollin, R., M. Brenner, J. Wicks-Lim, and S. Luce. A Measure of Fairness: The Economics of Living Wages and Minimum Wages in the United States. Ithaca, NY: Cornell University Press, 2008.
Rossi, M. M. and K. A. Curtis. “Aiming at Half the Target: An Argument to Replace Poverty Thresholds With Self-Sufficiency, or ‘Living Wage’ Standards.” Journal of Poverty, v.17 (2013).
Solman, P. “Is Structural Unemployment ‘Humbug’ or Are Krugman and Baker Biased?” http://www.pbs.org/newshour/businessdesk (Accessed August 2013).
Solman, P. “Structural Unemployment? Why Not Throw Money at the Problem?” http://www.pbs.org/newshour/businessdesk (Accessed August 2013).
U.S. Bureau of Labor Statistics. “Fast-Growing Occupations.” In Occupational Outlook Handbook. http://www.bls.gov/ooh/fastest-growing.htm (Accessed March 2012).
U.S. Department of Labor. “Changes in Basic Minimum Wages in Non-Farm Employment Under State Law: Selected Years 1968 to 2013.” http://www.dol.gov/state/stateMinWageHis.htm (Accessed April 2013).

Love, Types of

Love, characterized by caring, intimacy, and commitment, is essential for the physical and emotional well-being of humans. Usually people begin to experience love in families, in which they learn

A young couple holds hands as a display of affection. Because of its complexity and elusiveness, the concept of love has been explored, discussed, and studied by scholars from many disciplines, including philosophy, psychology, and sociology.

what love is and how to love. Still, people experience love differently in what they think, feel, and do. Because of its complexity and elusiveness, the concept of love has been explored, discussed, and studied by scholars from many disciplines, including philosophy, psychology, and sociology.

Love is a multifaceted concept. It varies in degree, in intensity, and across different social contexts. To understand the concept of love, typologies of love have been developed across time. In Plato’s Symposium from ancient Greece, Pausanias noted a distinction between common love and heavenly love. In more contemporary times, early scholarly work on love was undertaken by Canadian sociologist John Lee, who identified six forms of love (eros, ludus, storge, pragma, mania, and agape). The best-known approach to classifying love was proposed by psychologist Robert Sternberg. Sternberg’s triangular theory posits that love is a compound of
three psychological elements: intimacy, passion, and commitment. By combining these three elements in various ways, eight types of love were derived (non-love, liking, infatuation, romantic love, conjugal love, fatuous love, empty love, and consummate love). Other taxonomies of love include the distinction between “passionate love” and “companionate love,” the distinction between “masculine love” and “feminine love,” and the distinction between “freely chosen love” and “socially controlled love.”

Distinction of Love in Plato’s Symposium
Pausanias claimed that love could be broken into two types: common love and heavenly love. The former was about sensual desire, while the latter was noble—a kind of love seeking virtue rather than physical charm. Heavenly love regarded the soul of the beloved. The Symposium presented the idea that common love happened only between a man and a woman, while heavenly love could exist between a man and a man.

John Lee’s Six Love Styles
Lee, the Canadian sociologist, developed one of the most cited theories of love. According to Lee, there are six styles of love: eros, ludus, storge, pragma, mania, and agape. People may view love in different ways, or in more than one way.

• Eros (romantic). The eros love style is characterized by physical attraction. This is the kind of love shown in romantic movies and is commonly visible in young adults. Erotic lovers may fall in love at first sight and report passionate experiences.
• Ludus (playful). Ludus love is considered a game of fun. These lovers do not seek relationship commitment and are not dependent on others, nor do they allow others to be dependent on them. Ludus lovers often have several sexual partners at one time and do not develop deep relationships with their lovers.
• Storge (friendly). Storge love is also known as companionate love.
Friendship, respect, feelings of tenderness, togetherness, commitment, deep affection, and support are characteristics that define this type of love relationship. It is a kind of love in which affection develops over years, and
that is more likely to endure compared to eros (romance).
• Pragma (rational). Pragmatic love happens when a person sees compatibility between their own characteristics and those of their partner. These characteristics include financial ability, education, occupation, religious views, and recreational interests. Pragma lovers do not get involved in relationships that are not logical and practical, such as long-distance relationships.
• Mania (manic). Manic lovers are characterized by obsessive jealousy, extreme control, and intense dependency. They must possess their partner and are consumed by thoughts of them.
• Agape (selfless). Agape love is altruistic. It is characterized by a focus on the welfare of the beloved, without thinking of reciprocation. Agape lovers are compassionate and undemanding.

Clyde Hendrick and Susan Hendrick expanded on Lee’s theory. They have found that men tend to be more ludic, whereas women tend to be more storgic and pragmatic.

The Triangular Theory of Love
Robert Sternberg, a Yale psychologist, proposed that love has three important components: intimacy (closeness), passion (sexual attraction), and commitment. The mix of these three elements varies from one relationship to another. By combining these three elements in various ways, eight types of love experienced between individuals were identified:

• Non-love. This is the absence of all three elements. It is when two strangers look at each other.
• Liking. A liking relationship is based on intimacy alone, often described as friendship.
• Infatuation. This is when both parties are drawn to each other only physically. It is a form of love associated with obsession, without knowing each other well.
• Romantic love. This is when passion and intimacy come together. This might be at the early stage of a relationship or a kind of relationship in which both parties
recognize that a lasting commitment is impossible.
• Conjugal love. This is when intimacy and commitment come together. It is the kind of love seen when a couple has been married for many years and there is deep friendship between them.
• Fatuous love. This is love characterized by the connection of passion and commitment, but the couple may not like each other or may hardly know each other. When passion fades over time, they may find only obligation left in the relationship.
• Empty love. This is when only commitment exists in the relationship. It happens in some arranged marriages. The relationship stays intact because of social, legal, or religious reasons.
• Consummate love. This is a kind of ultimate and all-consuming love that combines intimacy, passion, and commitment. It is associated with relationship stability and satisfaction.
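Because Sternberg's eight types are simply the on/off combinations of his three elements (2³ = 8), the taxonomy can be written out exhaustively. The following sketch is only an illustration of that combinatorial structure; the dictionary and its labels follow the article's list and are not code from Sternberg's work.

```python
from itertools import product

# Sternberg's eight types as the 2^3 on/off combinations of
# (intimacy, passion, commitment). Labels follow the article's
# list; the mapping itself is illustrative, not Sternberg's own.
LOVE_TYPES = {
    (False, False, False): "non-love",
    (True,  False, False): "liking",
    (False, True,  False): "infatuation",
    (False, False, True):  "empty love",
    (True,  True,  False): "romantic love",
    (True,  False, True):  "conjugal love",
    (False, True,  True):  "fatuous love",
    (True,  True,  True):  "consummate love",
}

# Every possible combination of the three elements maps to one type.
assert all(combo in LOVE_TYPES for combo in product([False, True], repeat=3))
```

For instance, intimacy plus passion without commitment, the pair `(True, True, False)`, looks up "romantic love," matching the description above.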

Sternberg’s triangular theory of love allows lovers to identify the degree to which they are matched in terms of intimacy, passion, and commitment in their relationships. Although the love types may vary depending on many factors such as marital status, this theory is useful for counseling purposes.

Passionate Versus Companionate
Although many ways to classify types of love exist, perhaps the most accessible explanation involves two fundamental forms of love: passionate (romantic) love and companionate (conjugal) love. In 1978, psychologists Elaine Hatfield and William Walster defined passionate love as a state that involves intense feelings and sexual attraction; companionate love involves feelings of mutual respect, trust, and affection. Passionate love more often exists at the beginning of a relationship, and it tends to transform into companionate love with time. Passionate love may quickly fade, while companionate love endures.

Masculine Versus Feminine
Although gender differences in views of love exist, both women and men in intimate relationships
value friendship, passion, companionship, and self-sacrifice, as characterized by Lee’s typology.

Freely Chosen Versus Socially Controlled
The meaning and expression of love may vary widely from one culture to another. The United States and other Western societies emphasize individualism. People have the freedom to love and to choose whether to marry. In collectivist cultures, arrangements for whom one marries, and perhaps loves, may be made by families. Typically, romantic relationships are said to start from feelings of attraction between two people, especially among young people. Commitment comes later, when intimacy arises after knowing each other well and both parties decide they want a long-lasting committed relationship. In other cultures and societies, respect for parents’ wishes is more important than romantic love, and the harmony of the whole family is highly valued compared to individual feelings. A person may marry a partner chosen by a matchmaker and approved by parents or relatives without knowing or even seeing the partner. It is expected that their love will grow after marriage.

Whether it is freely chosen or socially controlled, love will continue to be one of the most treasured experiences in life. Love provokes positive emotions that enhance physical and mental well-being. As society becomes more diverse, more types of love may be identified and characterized. Society’s understanding of love will deepen as people continue to experience and study love.

Xiaohui Li
Catherine Solheim
University of Minnesota

See Also: Arranged Marriage; Attachment Theories; Christianity; Cohabitation; Companionate Marriage; Courtship; Date Nights; Dating; Emerging Adulthood; Engagement Rings; Family Development Theory; Living Apart Together; Living Together Apart; Midlife Crisis; Open Marriages; Polyamory; Polygamy; Promise Keepers; Promise Rings; Rational Choice Theory; Speed Dating; Valentine’s Day.

Further Readings
Gray, John.
Men Are From Mars, Women Are From Venus: A Practical Guide for Improving
Communication and Getting What You Want in Your Relationships. New York: HarperCollins, 1992.
Hatfield, Elaine and William Walster. A New Look at Love. Lanham, MD: University Press of America, 1978.
Hendrick, Clyde and Susan Hendrick. “A Theory and Method of Love.” Journal of Personality and Social Psychology, v.50 (1986).
Hendrick, Susan and Clyde Hendrick. “Linking Romantic Love With Sex: Development of the Perceptions of Love and Sex Scale.” Journal of Personality and Social Psychology, v.19 (2002).
Lee, John. The Colors of Love: An Exploration of the Ways of Loving. Don Mills, Canada: New Press, 1973.
Lee, John. “The Styles of Loving.” Psychology Today (October 1974).
Sternberg, Robert. “A Triangular Theory of Love.” Psychological Review, v.93/2 (1986).
Sternberg, Robert. The Triangle of Love. New York: Basic Books, 1988.
Sternberg, Robert and Michael Barnes, eds. The Psychology of Love. New Haven, CT: Yale University Press, 1988.

M

MADD
After the Twenty-First Amendment repealed Prohibition in 1933, alcohol consumption increased in America. By 1980, more than 25,000 people died annually in drunk driving crashes, yet the public was largely apathetic about this random threat that could destroy a life. Mothers Against Drunk Driving (MADD) is one of the earliest and most successful grassroots activist groups in America. One woman channeled her anger over the criminal justice system’s ineffectual handling of drunk drivers into creating an organization to protect families from being victimized by substance-impaired drivers.

The Personal Is Political
When Candy Lightner’s daughter was killed in California in 1980 by a repeat-offender drunk driver, she was shocked by the criminal justice system’s lenience toward driving under the influence (DUI). Enraged by plea bargaining, minimal penalties, and the barriers DUI victims faced in criminal courts, Lightner sought others who had lost relatives in DUI crashes and were interested in changing the situation. They named their group MADD, reflecting their anger at the status quo. By 1981, MADD had 11 chapters in four states. Lightner was so effective at using the media to raise public awareness about DUI that the term designated driver entered the English language in 1982, and politicians had to pay attention to MADD.

In 1982, California added a victims’ rights amendment to its constitution allowing crime victims to attend sentencing and parole proceedings, establishing their entitlement to financial restitution, making earlier felony convictions admissible in court, and directing judges to consider public safety when setting bail. President Ronald Reagan appointed Lightner to the 1982 Presidential Commission on Drunk Driving. In 1984, Reagan, an ardent states’ rights advocate, signed the National Minimum Drinking Age Act, which would reduce federal highway grant funding to states failing to raise their minimum drinking age to 21. All 50 states were in compliance by 1988. From 1981 to 1990, more than 1,250 new state and local DUI-related laws were passed. Some banned “happy hours” and established server liability penalties.

Victims’ Assistance
While the U.S. Constitution addresses the rights of people accused of crimes, it is silent concerning the rights of crime victims. The American crime victims’ movement was under way in the 1970s. Providing help to victims of DUI-related crimes was part of MADD’s charter mission statement, making it one of the pioneer groups for victims’ advocacy. MADD changed the public dialogue about drunk driving, characterizing it as a deliberate decision by irresponsible individuals who should be held accountable for the harm they cause.
DUI victims may experience personal injury or loss of life and face funeral, legal, medical, and property damage expenses. Catastrophic injuries and loss of loved ones can cause long-term anger, anxiety, and grief, which can result in divorce, job loss, and suicide. In 2010, the National Highway Traffic Safety Administration reported annual DUI-related costs of $132 billion.

MADD provides relatives and friends of DUI victims, as well as surviving victims, a way to give meaning to their loss. It provides guides on navigating the criminal justice system, one-on-one support, referrals to regional resources, and volunteer participants on victim impact panels at sentencing hearings. MADD promotes outreach to criminal justice judges and government officials concerning deficient DUI laws, enforcement of existing laws, and the deterrence value of quick, guaranteed punishment of DUI offenders. MADD maintains a national database of DUI legislative efforts and statistics. It develops educational resources for families and schools, along with death notification training for first responders who communicate with families of DUI victims. MADD is now one of the largest victim service organizations in America.

Underage Drinking
In the early 1980s, car crashes accounted for 50 percent of deaths among Americans ages 16–19. Prevention of underage drinking has always been one of MADD’s goals. Citing data linking disproportionate DUI fatalities to young drivers, MADD sought to raise the national minimum drinking age to 21. Despite achieving that goal in 1988, underage DUI offenders still kill about 6,000 people annually. The life expectancy of 15- to 24-year-old Americans did not improve from 1970 to 1990 due to DUI fatalities. The average age for first drinking alcohol in America is 13. While 20 percent of teenagers binge drink, 99 percent of parents do not believe their children are binge drinkers.
Despite a national initiative promoting designated drivers, peer pressure leads many young people to accept rides from drinking drivers. MADD encourages parental involvement in guiding children toward responsible alcohol use. The Parent Pledge commits parents to not serving alcohol to anyone under 21, keeping illegal drugs
out of their homes, and having responsible adults present at young people’s parties. MADD supports stronger laws against the use of fake IDs, the loss of licenses for underage possession of alcohol, and criminal penalties for adults who provide alcohol to anyone under 21. MADD’s Strengthening Families program addresses children’s social skills, helping them resist peer pressure concerning illegal substances. Recognizing parents’ importance in influencing children’s attitudes about alcohol, MADD provides parental guides about responsible drinking.

Impact
Today the rate of DUI-related deaths is half what it was when MADD began in 1980, but someone is still injured in a DUI crash every 90 seconds. Detractors claim MADD has become a neotemperance group whose “nanny state” goals erode personal liberties. Improvements in vehicle safety design and enforcement of seat belt laws complicate statistical interpretation, but MADD is credited with reducing fatal crashes by teenagers. MADD has influenced significant social change by lobbying for passage of DUI legislation, raising awareness about the harm impaired drivers cause their victims, and enabling ordinary citizens to transform personal tragedy into effective activism for the public good.

Betty J. Glass
University of Nevada, Reno

See Also: Alcoholism and Addiction; Child Safety; Education, High School; Teen Alcohol and Drug Abuse; Texting.

Further Readings
Brown, Katherine A. “A National Study of the Association Between Mothers Against Drunk Driving and Drunk-Driving Laws, Driving-Under-the-Influence Arrests and Alcohol-Related Traffic Fatalities.” Ph.D. diss., Ohio State University, 2002.
MADD. http://www.madd.org (Accessed May 2013).
Zeman, Laura D. “Mothers Against Drunk Driving: How Two Mothers’ Personal Pain Birthed a Social Movement.” In The 21st Century Motherhood Movement, Andrea O’Reilly, ed. Bradford, Canada: Demeter Press, 2011.

Magazines, Children’s

Children’s magazines serve as a way to entertain, enlighten, and educate young people in a manner that is both appropriate and enjoyable. While a wide variety of children’s magazines exists, each targets a certain audience. There are magazines intended for boys, girls, teenagers, those with specialized interests, and other groups. For much of the 20th century, magazines were one of the most significant entertainment options for children. Many depictions of the American family have appeared in children’s magazines, and changing social conditions have been reflected in their pages. Many have argued that children’s magazines have in turn shaped conceptions of the American family.

Mainstream Children’s Magazines
Pulp magazines, dime novels, and other publications intended for the lower end of the market contrasted with those intended for the upper classes. One of the earliest upper-class magazines, The Youth’s Companion, was first published in 1827. Early issues of The Youth’s Companion centered on religious themes, and the magazine was well received by many clergymen and civic reformers. In keeping with its stellar reputation, The Youth’s Companion included articles and stories by such leading American authors as Emily Dickinson, Jack London, Harriet Beecher Stowe, Mark Twain, and Booker T. Washington. The Pledge of Allegiance first appeared in The Youth’s Companion in 1892, the work of staff writer Francis Bellamy. Other children’s magazines sprang up at this time, forcing The Youth’s Companion to focus more on entertainment after 1890. The magazine merged with The American Boy, a similar publication, in 1929.

Many children’s magazines joined the market forged by The Youth’s Companion. St. Nicholas Magazine, for example, was founded by Scribner’s in 1873 and was first edited by Mary Mapes Dodge, who continued in this role until 1905. Although never a circulation juggernaut, St.
Nicholas Magazine was known for its contests for the best drawings, essays, photographs, poems, and stories submitted by children. Over the years such luminaries as Bennett Cerf, William Faulkner, F. Scott Fitzgerald, Edna St. Vincent Millay, and E. B. White won prizes for their entries in these contests. Such contests affected American families insofar as they

A page from the 1922 issue of The Youth’s Companion, one of the earliest magazines targeted toward children. Founded in 1827 and intended for the upper classes, early issues centered on religious themes; adventure and entertainment themes also appeared.

made participating in artistic endeavors acceptable. St. Nicholas Magazine ceased publication in 1940.

Popular Press
Throughout the 19th century, literacy rates in the United States increased. As a result, a market developed for reading material that was entertaining to children, especially boys, and inexpensive. “Dime novels” emerged as a generic term for several distinct but related forms, including story papers, thick-book reprints, 5- and 10-cent weekly libraries, dime novels proper, and early pulp magazines. Although the last true dime novels were published during the 1920s and pulp magazines ceased publication during the 1950s, descendants of these forms exist today, including comic books, mass-market paperback novels, and television programs and films based on popular genres first developed decades ago.


The publishing house Beadle & Adams inaugurated Beadle’s Dime Novel Series in 1860. The first of this series was Ann S. Stephens’s Malaeska, the Indian Wife of the White Hunter. This book, a reprint of an earlier serial that appeared in the Ladies’ Companion, is generally regarded as the first dime novel. The Beadle & Adams dime novels varied in size, although many measured approximately 6.5 by 4.25 inches, and most were limited to 100 pages in length. After 28 “books” were published with a plain salmon wrapper, Beadle & Adams added an illustration to the covers, all of which sold for 10 cents. The series was immediately popular and ran to 321 issues, many of which were reprinted through the 1920s. Many of the Beadle & Adams books focused on themes from the frontier and the American West, and initially reprints of serials and other novels were used exclusively.

Beadle & Adams’s success led to many competitors. Although all magazines of this genre were referred to as dime novels, actual prices ranged from 10 to 15 cents. Although there was a certain “look” to the genre, the formats of dime novels varied over time and from publisher to publisher. In the interest of cutting costs, some publishers produced dime novels as short as 32 pages, although readers at first resisted these. Beginning in the 1880s, weekly dime “libraries” became increasingly popular. These publications were essentially tabloids in form and varied in size from 7 by 10 inches to 8.5 by 12 inches. Dime novels tended to feature a single story, unlike story papers and other similar genres. Fierce competition between publishers generated colorful covers, which attracted readers’ attention and increased sales. In addition to Beadle & Adams, major publishers of dime novels included Street & Smith and Frank Tousey. Even after competition forced the price of many of these publications down to five cents, the general public continued to refer to them as dime novels.
In the United States, pulp magazines, also known as “pulps,” enjoyed great popularity with children, especially boys. The pulps often featured stories that focused on crime. At the height of their popularity during the 1920s and 1930s, many pulp magazines sold up to a million copies per issue. Popular titles included Adventure, Amazing Stories, Black Mask, Dime Detective, Flying Aces, Horror Stories, Marvel Tales, Oriental Stories, Planet Stories, Spicy Detective, Startling Stories, Thrilling Wonder

Stories, Unknown, and Weird Tales. Pulp magazines remained popular through the 1950s, when paperback books reduced their popularity. Pulps allowed many new voices to gain a following, and their lack of respectability freed writers to break many conventions. The pulps’ name derives from the cheap wood pulp paper on which the publications were printed. Unlike magazines printed on more expensive paper (known as “glossies” or “slicks”), pulps featured lurid and exploitative stories and sensational cover art. The pulps took advantage of new high-speed presses, low payments to authors, and inexpensive paper to reduce the price of the magazine to 10 cents per issue, as opposed to glossies, which generally sold for a quarter. Although the pulps were held in low regard at the time, authors such as Raymond Chandler, Dashiell Hammett, Erle Stanley Gardner, and Rex Stout all began by writing for them.

Teen Magazines
Teen magazines represent a category of children’s magazines that are geared toward teenaged girls. In the United States, teen magazines became popular after World War II, as the affluence that swept the era translated into more disposable income for girls. Popular teen magazines included CosmoGirl, Sassy, Seventeen, Teen, and YM. While the various magazines had slightly different target audiences and varied in their emphases, all shared certain features. All featured many advertisements from cosmetic firms, clothing companies, shampoo manufacturers, and the like. The teenage girls who were the target audience of teen magazines were treated as independent and knowledgeable, which recognized their important role as consumers. Seventeen was the first, and in many ways the most influential, of the teen magazines. Founded in 1944, Seventeen was aimed at teenaged girls between the ages of 12 and 19. Containing a variety of articles and features, Seventeen consistently focused on fashion and romance, topics that were of great interest to its readers. 
Seventeen was an early user of consumer polls to determine the interests and concerns of its readers, which permitted it to consistently act as a source of guidance and counsel to generations of girls. Influential as a source of inspiration and encouragement to its readers, Seventeen mixed serious articles with literature, cartoons, columns, and photographs in a blend that was highly successful.



School-Age Children
As the United States grew more prosperous during the 20th century, many children had increased amounts of free time. Child labor was mostly abolished, and children remained in school longer, often until at least the completion of eighth grade. As a way of embracing this new degree of leisure enjoyed by many children, a host of organizations sprang up to provide them with constructive activities. Many affinity groups were begun, permitting children of similar backgrounds or with like interests to join organizations devoted to horses, sports, farming, camping, and a variety of other interests. Of these, the Boy Scouts and the Girl Scouts were perhaps the best known. Boys’ Life was founded in 1911 and became the official publication of the Boy Scouts of America two years later. Intended for boys between the ages of 11 and 18, Boys’ Life skews toward the older end of that spectrum. While the magazine contained content related to camping and buying guides rating various products, it also featured Bible stories and guides related to proper family behavior. Boys’ Life continues to publish today, with a monthly circulation of more than 1 million. The Girl Scouts of the United States of America also published a magazine for its members. First named The Rally, the magazine was renamed The American Girl in 1920. The American Girl was similar to Boys’ Life insofar as it contained articles, columns, cartoons, and other features. These articles and other features supported the programming of the Girl Scouts, which encouraged young women to take a more active role in a variety of fields than traditional family expectations did. Other children’s magazines also appeared during the first half of the 20th century. My Weekly Reader, first appearing in 1928, was provided to students at school and was geared to a variety of age groups. 
A success from the start, My Weekly Reader had a circulation of more than 1 million by 1931 and was often paid for by school districts interested in exposing students to current affairs. While My Weekly Reader was intended for children at school, a host of other magazines appeared that were intended for children at home. These magazines were purchased by a new, postwar generation of parents who sought additional ways to enrich their children’s education and entertainment outside the school. Some of the more prominent of the children’s magazines aimed at the home market included Jack


and Jill, Highlights for Children, Humpty Dumpty, Cricket, and Ladybug. Highlights for Children, first published in 1946, has appeared monthly since that time and contains a variety of features intended to improve children’s play and social skills. The magazine includes sections where children can submit art, poems, and jokes, as well as cartoons, craft activities, advice columns, and more. Containing no paid advertisements, Highlights for Children differs from more commercially oriented children’s magazines such as Humpty Dumpty or Jack and Jill, which do include ads, although they also heavily feature children’s art and writing. Jack and Jill, intended for readers between ages 7 and 10, was founded in 1938 and has featured a variety of well-known authors, including David A. Adler, Pearl S. Buck, Charles Ghigna, and Ben H. Winters. Jack and Jill’s sister publication, Humpty Dumpty, uses largely the same format for younger children. Cricket, first published in 1973, is more literary in its aspirations than the other publications. Cricket’s founder intended it to be similar to The New Yorker in format, only geared for children. A variety of children’s magazines also exist that focus on a specific interest or demographic group. For example, Ebony, Jr.!, published by Johnson Publishing Company from 1973 until 1985, sought to augment the education of African American children by providing literature, black history, popular culture, and other features of interest to that group. Other children’s magazines catered to specific interest groups, such as Cobblestone (American history), Dig (archaeology), and Ranger Rick (the environment). Additionally, adult magazines such as Time and National Geographic publish editions intended for children, with content roughly approximating that of the adult editions. Since the advent of the Internet, children’s magazines have seen serious competition from other sources of information. 
The circulation of many of these has declined, and some have ceased publication. The slashing of library and public education budgets has also reduced the market for children’s magazines in many public venues. Although a market for quality children’s magazines exists, their future is uncertain.

Stephen T. Schroth
Jason A. Helfer
Knox College

See Also: Books, Children’s; Magazines, Women’s; Toys; Video Games.

Further Readings
Henderson, L. “Ebony, Jr.! The Rise and Demise of an African American Children’s Magazine.” Journal of Negro Education, v.75/4 (2006).
Lerer, S. Children’s Literature: A Readers’ History From Aesop to Harry Potter. Chicago: University of Chicago Press, 2008.
Swain, C. “‘It Looked Like One Thing but When We Went in More Depth, It Turned Out to Be Completely Different’: Reflections on the Discourse of Guided Reading and Its Role in Fostering Critical Response to Magazines.” Literacy, v.44/3 (2010).

Magazines, Women’s

Magazines for women are among the most popular magazine genres, offering content that is generally directed toward women as an audience. The content of magazines for women is mainly related to femininity, featuring style news, shopping guides, information on beauty products, diet and exercise, and relationship advice. Americans learn basic lessons about social life from the mass media. Magazines for women reflect the transformations in social conditions and expectations for American women and American family life. Magazines act as instruction manuals that speak authoritatively about everyday understandings of what it means to be a woman. At present, magazines offer women a broad range of subjects related to all aspects of their lives, including love and romance, family and children, household management, and leisure and professional life. Through consumption, magazines sell women the idea that they can have it all.

The History of Women’s Magazines
The emergence of contemporary genres of magazines for women is connected to the emergence of consumer culture at the end of the 19th century. The genre of women’s magazines transformed in this period from offering romantic fiction to including aspects of domesticity in its content. In this period of transformation, magazines for women also included issues related to women’s suffrage, reflecting an increased interest in the tastes and values of a predominantly middle-class readership. An interest in the tastes and desires of readers led to the construction of women as being both readers and consumers, fundamentally transforming women’s magazines and locking femininity together with consumption. By the end of the 19th century, women had become the key consumers of household goods, drawing women into the public sphere. As many magazines were pro–women’s suffrage, they offered women some of the first socially accepted professional opportunities. Magazines became an important site for debates about working- and middle-class women entering the labor force. The period was also marked by a democratization in the practices of consumption, so the increasing numbers of periodicals for women reflected an increase in readership. Advertising content increased, allowing publishers to reduce the cover price. The Industrial Revolution, also dubbed the Consumer Revolution, meant that ordinary women could aspire to consume the goods advertised in magazines and identify with this broader American dream. The domestic space was increasingly idealized and established as a feminine space, or the ideal woman’s place. By the early 20th century, conservative attitudes with regard to gender meant that consumer magazines privileged the ideal of a passive and dependent woman. More specifically, following World War II, magazines more intensely prioritized the role of the happy housewife-mother for women. Women’s magazines were in high circulation by this time, and the repetition of this image is argued to have promoted self-denial and the denigration of self-realization for women. The spread of suburban life and the emergence of the nuclear family as a discrete unit all worked to further isolate women in the home.

Becoming Women
Magazines act as instruction manuals for women, leading toward an idealized femininity. The visual and editorial content of most magazines for women emphasizes that the work of femininity is never done. Magazines have always offered women visual representations of idealized femininity. In the United States, these images have privileged white, middle-class femininity, represented by the slender body, for example. Advertising and editorial images repeat and normalize this limited view, framing the dominant and popular constructions of beauty within this language. Magazines not only repeat the images of feminine perfection, they act as instruction manuals for individual women to become this ideal. However, the quest for femininity can never be fulfilled, as there is always a new dress, a new beauty product, a new treatment, or a new aspect of one’s body or self that can be improved through consumption. The constant transitions with regard to fashion or trends reflect the never-ending work that femininity requires. The new genre of women’s “health” magazines reflects the strength of dominant femininity. Following the critique of impossible beauty standards set by women’s magazines and other media charged with normalizing unhealthy body images for women in the past, the new drive toward health and fitness re-presents the feminine body as “new,” less frail, healthy, and toned. Women are encouraged to read the healthy body as different from the thin body, yet both bodies require strict self-surveillance and discipline to maintain. The work of femininity also requires that women be desirable to men, and the quest that women are faced with according to these texts is to seek heterosexual romance. Magazines work as an intimate public space where the writers and editors act as “friends,” advising readers as part of a community of women on how to succeed in love. The stories repeat the desire for the quest and the constant disappointment necessary on the road to true love. This aspect of women’s magazines reflects the history of their emergence, as the original periodicals for women were short pieces of romantic fiction that sold the dream of Cinderella-like transformation. Marriage, celebrated with a big white wedding, is presented as the main prize for the hard work of femininity. 
It also happens that all of the activities associated with achieving this idealized femininity require consumption of many goods and services. Young women who are in the liminal space of actually becoming women are targeted as readers of teen magazines. These publications offer adolescents a space to construct subjectivities while training them to become adult women. These magazines, like those directed at adult women, offer an intimate space for teenage girls to identify with one


another, particularly through their shared personal problems and experiences. These magazines play on the anxieties of teenagers, engaging with the aspiration to be physically attractive and to succeed in heterosexual romance, reinforcing dominant gender and sexual norms. Discussions of sex and sexuality in teen magazines present messages that are complex and at times contradictory, both perpetuating young women’s subordinate status in heterosexual romance and emphasizing the opportunities for pleasure. Through fashion and sex, magazines offer young women a subcultural space through which they can negotiate their status in becoming women.

From Domesticity to Consumerism
Magazines for women emerged with an interest in women as consumers of domestic goods during the period of the Consumer Revolution. The writers and editors of these magazines were increasingly aware that the tastes and values of their readers mattered, and a reciprocal relationship between publishers and readers led to transformations in the content of the magazines. As they increased their advertising space, the magazines added material on other goods, such as pages on fashion and etiquette, that set the foundation for the kinds of editorial content seen in consumer magazines today. These early magazines did reflect complexity and contradiction with regard to representations of women: for example, depictions of the all-American girl established standards of beauty for women around young white femininity. This “girl” was playful and had the freedom to be bold and confident, and through consumption she could express herself through style. The new consuming woman was also represented as a “vamp,” or party girl, who was constructed as dangerous, perhaps even a gold digger, and reinforced the need to keep women in the private sphere. 
The figure of the flapper worked similarly, emphasizing that beauty and consumption could be fun, but women portrayed in this way were depicted as frivolous, immature, and self-absorbed, reflecting the dangers that consumption holds for women. These magazines idealized the modern American family presided over by a domestic goddess, emphasizing and normalizing the white, middle-class, nuclear family. This ideal family life has undergone transformations as women’s work in


the public sphere has become the norm. The space for the ideal homemaker has been remade, while the woman’s role as consumer has expanded as magazine publishers responded, reorienting themselves toward the “new” or “modern” woman. The range of women’s magazines includes interests in the consumption of goods related to fashion, home, cooking, fitness, leisure, technology, sex and sexuality, music, travel, business, and cars. However, these magazines still emphasize the work of femininity and act as sites for intimate friendship and romantic mentorship, as women are advised of their responsibility in improving and maintaining romantic heterosexual relationships. The greatest transformation of women’s magazines in the 21st century has been the intensification of advertising, as magazines have become imaginary shopping worlds, like department stores, where women can browse and then shop.

Postfeminism
The emergence of women’s magazines as a genre has always been connected to feminist debates, as many of the first magazines for women were spaces where women’s emancipation could be discussed. As magazines increasingly idealized marriage, motherhood, and the home for women, they became sites for critique and debate, as women sought opportunities for self-actualization outside the home. Once women moved outside the home, liberal feminism led to other kinds of transformations, as writers and editors reframed the work of femininity through the language of equality between men and women. Women’s responsibilities in sex and romance come to signal agency and equality rather than dependence and passivity, despite the continued investment in women being beautiful and appealing for men. Contemporary magazines emphasize the degree of choice that individual women have, as they can reflect their personal choice and agency through consumption. 
The lifestyles sold to women in magazines become signs of women’s liberation, as the choices women make about clothes or shoes or cars reflect their increased financial independence. Sex, like consumption, becomes another kind of choice that women have, so many new magazines for women maintain a sex-positive perspective. This has been described as the pornification of mass media, as women are encouraged to take control of

their lives and their pleasure as an extension of the broadening of women’s roles. Postfeminism describes a set of approaches to culture aimed at examining the practices of femininity as they relate to the constitution of other categories of difference, such as race, class, age, ethnicity, and/or gender. Magazines for women reflect many postfeminist themes, for example, the idea that femininity as a process does not objectify women but instead offers women the ability to possess themselves. Through self-motivated acts, women have the power to change their lives, their bodies, and themselves through hard work and consumption. Magazines for women are replete with the “makeover” narrative, demonstrating the power women have to overcome the “before” and be the beautiful, sexy, and successful “after” represented in the images in magazines. Offering representations of the new and empowered, women are no longer merely the objects of the male gaze, as sexuality becomes a new source of feminine power. Magazines for women are therefore no longer mere instruction manuals for women but further serve as platforms for women to become independent and empowered individuals who can have it all.

Danai S. Mupotsa
University of the Witwatersrand

See Also: Adolescence; Advertising and Commercials, Families in; Advice Columnists; Courtship; Cult of Domesticity; Cultural Stereotypes in Media; Dating; Domestic Ideology; Family Consumption; Feminism; Gender Roles; Gender Roles in Mass Media; Homemaker; Household Appliances; Individualism; Information Age; Leisure Time; Love, Types of; Magazines, Children’s; Middle-Class Families; Myth of Motherhood; Nuclear Family; Self-Help, Culture of; Third Wave Feminism.

Further Readings
Currie, Dawn H. Girl Talk: Adolescent Magazines and Their Readers. Toronto: University of Toronto Press, 1999.
Ferguson, Marjorie. Forever Feminine: Women’s Magazines and the Cult of Femininity. Portsmouth, NH: Heinemann, 1983.
Friedan, Betty. The Feminine Mystique. New York: Norton, 1963.

Gill, Rosalind. “Postfeminist Media Culture: Elements of a Sensibility.” European Journal of Cultural Studies, v.10/2 (2007).
Rooks, Noliwe M. Ladies’ Pages: African American Women’s Magazines and the Culture That Made Them. New Brunswick, NJ: Rutgers University Press, 2004.
Tuchman, Gaye, Arlene Kaplan Daniels, and James Benét, eds. Hearth and Home: Images of Women in the Mass Media. New York: Oxford University Press, 1978.
Walker, Nancy A. Shaping Our Mothers’ World: American Women’s Magazines. Jackson: University Press of Mississippi, 2000.
Zuckerman, Mary Ellen. A History of Popular Women’s Magazines in the United States, 1792–1995. Westport, CT: Greenwood Press, 1998.

“Mama’s Boy” and “Daddy’s Girl”

The terms mama’s boy and daddy’s girl reference specific familial relationships between mothers and sons or fathers and daughters, but the terms encompass varied meanings. Although both labels describe parent–child bonds, the ways these relationships are perceived in the culture vastly differ, because the culture holds contrasting expectations for the roles mothers and fathers should embrace and for the gender roles expected of girls and boys.

History
Perceptions of parental roles and responsibilities varied throughout the family’s social history in the United States. In the 17th century, the concept of paternal dominance and an authoritative father as head of the family ruled theories of proper family roles. Children were to be “broken” into submission by their parents, a practice tied to the religious belief that “sinful” traits needed to be eliminated through physical punishment. Prescribed parental roles shifted in the 18th century, emphasizing mothers’ primary responsibility for raising children and focusing on the expression of affection and emotion. This shift ostracized


fathers and focused on women’s ability and competence in child rearing. This change continued into the Industrial Revolution era with the removal of fathers from homes into factories. Fathers did not have time to focus on child rearing. However, this changed again in the early 1920s with the advent of new fatherhood, which proposed the concept of masculine domesticity and the idea that normal, well-adjusted children required their fathers’ involvement in their lives. Concerns about “absent fathers” were raised when men left for World War II, but upon men’s return from war, the heterosexual, gender-segregated roles were again idealized. Successful fathers financially provided for their families. Fathers were required to be physically absent during the day as breadwinners, but they were to maintain some degree of emotional connection with their children and to serve as disciplinarians. The effects of mothers’ parenting on their children’s development became a central focus of parent–child relationships during the mid- to late 20th century. Socialization taught children to rely on financial and physical security from their fathers and emotional security and affection from their mothers. Thus, the labels daddy’s girl and mama’s boy describe relationships that diverge in perception and practice.

Daddy’s Girl
The term daddy’s girl describes various potential components of father/daughter relationships, but most often references a protective father who dotes on his “little girl” and a daughter who enjoys being spoiled by her father, no matter her age. The term can also reference a daughter’s strong emotional attachment to or preference for her father. The concept of new fatherhood in the early to mid-20th century introduced the rise of the “daddy’s girl,” wherein it is a father’s responsibility to protect, emotionally connect with, and indulge his daughter. This concept has endured through time. 
The most basic premise of this relationship is that a father will protect his daughter in all ways. Girls and women who are “daddy’s girls” are often portrayed as pure, innocent, and childlike, regardless of their ages. Females’ purity and value in society are largely determined by their sexual choices and experiences. It is implied that daddy’s responsibility is to protect his little girl’s virginity or sexual purity, and thus her morality and social


standing. A daddy’s girl obeys her father’s rules and abides by his standards. While prescribed social norms for masculinity promote toughness rather than emotion and vulnerability, daddies’ bonds with their daughters are an exception to this rule. Verbal and physical expressions of love and affection between fathers and daughters emphasize the “daddy’s girl” relationship. The perpetuated stereotype that men are breadwinners continues to teach girls to rely on daddies to provide for them financially. A daddy’s girl may be one who wins over her daddy’s indulgence as he caters to fulfilling her wants and needs by buying all that she desires. Terms related to daddy’s girl are the spoiled girl or princess, both of which carry a negative or derogatory connotation. Daddy’s girls overwhelmingly embrace the feminine expectations of being compliant, dependent, affectionate, and emotional. The “daddy’s girl” relationship is predominantly culturally accepted and embraced as a positive bond.

Mama’s Boy
A boy or man who has not emotionally or physically separated from his mother or does not establish his independence can be labeled a “mama’s boy.” The reasons for labeling males with this title and perceptions of whether these are positive or negative relationship bonds shift throughout different developmental stages for boys and men. It is culturally accepted for boys to have a strong emotional attachment to or preference for their mothers when they are very young and when they are within the home. However, as boys develop and have access to the public sphere, they are expected to assert their masculinity and independence by physically and emotionally separating from their mothers. Those who do not are often punitively labeled “mama’s boys” and may be bullied, taunted, or teased as a means of social control to punish boys who deviate from these masculine norms or boy culture. 
A father gives his daughter a piggyback ride. “Daddy’s girl” most often references a daughter who, no matter her age, enjoys being spoiled by her protective father who dotes on his “little girl.”

Masculinity in late childhood, adolescence, and into adulthood is often defined through toughness, lack of emotional vulnerability, and rejection of all that is feminine. “Mama’s boys” in these age categories are males who continue an emotional relationship with their mothers and who turn to their mothers for support. Historically, and still in some cultures, there is a myth that mothers who are too affectionate and overbearing with their sons, particularly adolescent boys, will influence their sons’ sexual orientation, “turning” them gay. The term momism is used to describe this myth. Adult males who live at home (especially if by choice) and who are still cared for and coddled by their mothers are often identified as mama’s boys. Peers subject them to negative judgments and social stigma. Normally, males internalize these negative judgments; however, men who exhibit hypermasculinity as evidenced through their physicality, careers, hobbies, or the embodiment of other “manly” characteristics are more willing to embrace the mama’s boy label without succumbing to internalizing pejorative connotations. Their hypermasculinity cancels out the negative stigma of “mama’s boy,” making the label a positive one. These men compensate for their strong relationships with their mothers by expressing their masculinity in other ways. Examples of these




exceptions are evident in widespread media stories that emphasize professional male athletes who self-identify as mama’s boys.

Conclusion
While daddy’s girl and mama’s boy are terms describing parent–child relationships, the definition of these relationships, the cultural expectations, the social acceptance, and the value placed on these relationships vary significantly, with daddy’s girls perceived far more favorably than mama’s boys.

Marta McClintock-Comeaux
Rebecca L. Geiger
California University of Pennsylvania

See Also: Domestic Masculinity; Fatherhood, Responsible; Myth of Motherhood; New Fatherhood; Overmothering; Social Fatherhood.

Further Readings
Fahs, B. “Daddy’s Little Girls: On the Perils of Chastity Clubs, Purity Balls, and Ritualized Abstinence.” Frontiers: A Journal of Women Studies, v.3/116 (2010).
Griswold, R. L. Fatherhood in America: A History. New York: Basic Books, 1993.
Pollack, W. Real Boys: Rescuing Our Sons From the Myths of Boyhood. New York: Henry Holt, 1999.
Rotundo, E. A. American Manhood: Transformations in Masculinity From the Revolution to the Modern Era. New York: Basic Books, 1993.

Marital Division of Labor

The marital division of labor generally refers to the gender specialization of work between spouses both within and outside the household. The traditional image of an American family is that of wives being responsible for the household labor and husbands serving as the primary family breadwinners in the labor market. The marital division of labor references the separate spheres that men and women occupy: women’s private sphere and men’s public sphere. This split means that women are


primarily confined to unpaid tasks such as reproduction, motherhood, and housekeeping chores within the household while men are engaged in paid employment outside the home. Scholars and politicians argue that this gendered division of labor leaves unemployed wives economically dependent on their husbands. In comparison to their husbands, employed wives are more likely to encounter the second shift—an additional amount of unpaid domestic work that awaits them at home. The pattern of marital division of labor depends on many factors. For example, the traditional division of labor is more prevalent in middle-class families, while each spouse in working-class and racial-ethnic minority families has to work outside the home to generate income. Moreover, distinct gender roles of middle-class families were challenged in the latter half of the 20th century when more women began to join men in the workforce. The traditional division of labor is increasingly changing over time as women pursue education, employment, and breadwinning; families of single parents, cohabiters, and same-sex couples become more prominent; and men shift their values and perceptions about performing housework and child care duties. It is progressively more common for men to perform housework and for fathers to invest more time in child care. Husbands’ greater contributions to household labor help couples reach a more equal balance. On the other hand, in the context of rapid marriage and family changes (i.e., increases in divorce and unwed births), many working mothers struggle to complete their unpaid domestic tasks in addition to their paid jobs, and this leads to a variety of consequences, including depression and decreased marital satisfaction.

History and Current Trends of the Division of Labor in the United States
During the colonial era of the United States, the family functioned as one economic unit. 
Although labor was divided by gender, with husbands working in the fields or the forests and wives laboring in the home as they cooked, cleaned, and cared for children, the work of both men and women was recognized as essential to the successful operation of the household. When industrialization swept over the country, many agrarian families sought work in the city. The division of labor remained gendered, especially among middle- and upper-class families, with
women responsible for household chores while men worked in the public labor force. During this time the marital division of labor was redefined so that men were described as “breadwinners,” and women’s unpaid household labor was treated as care rather than work. However, the financial strains on working-class families and many immigrant families required that all household members work. The surge of husbands who left home to serve in World War II provided job vacancies for wives to fill. At the government’s behest, wives completed paid work in conjunction with their housework. The return of husbands after the war pushed women back to homemaking roles, but the experience of earning their own income caused women’s labor force participation rates to rise during the 1950s. The subsequent women’s liberation movement further encouraged women to pursue jobs in the paid workforce. The latter half of the 20th century witnessed rapid changes in the traditional division of labor as more women pursued higher education and full-time careers. Women today perform fewer household tasks than they did in the past (but still more than their husbands). Various technological advances alleviate women’s workload, and many jobs that were once the responsibility of the homemaking wife have become purchasable commodities in America’s rising service economy. Wealthier families are able to afford domestic household cleaning services and hire nannies for their children to reduce the amount of housework. Notably, it is largely middle-class white women who pay working-class immigrants and racial-ethnic minority women for their domestic services. The traditional labor division is further challenged by current rapid changes in American families that have shifted social norms and gender roles. For example, nontraditional families, such as cohabiting heterosexual and same-sex couples, do not fully embrace traditional labor roles.
These couples have greater egalitarianism in their relationships than married different-sex couples, which results in a more equal performance of domestic tasks between the partners. Additionally, the increase of men suffering unemployment has caused some fathers to embrace an at-home lifestyle that involves more housework and child care while their wives serve as the family breadwinner. There is also growing demand for women’s emotional labor to be considered work. Emotional labor represents
demonstrative caregiving acts, such as consolation and encouragement, that are often considered feminine and largely the responsibility of women to provide to all members of their family. Explanations for the Marital Division of Labor Several theoretical frameworks are proposed to explain the marital division of labor. First, according to the relative resource perspective, the spouse with the most resources, either from their education, income, or job prestige, can negotiate the amount of housework he or she performs. When husbands work in the public sphere, they hold more of these resources. This makes women dependent on their husbands and responsible for the housework and child care. This argument is criticized for assuming that doing housework is a negative task. Additionally, family economists argue that specialized homemaker and breadwinner functions in marriage are most efficient. They argue that maintaining assigned roles will maximize the economic productivity of the family and promote marital stability. This notion, however, has been challenged with the rise of successful dual-earner couples who share household chores and enjoy stable marriages. The third argument is the time availability perspective, which explains that the marital division of labor results from time commitments. Women who stay at home are responsible for the housework and child rearing because men are engaged in employment outside the home. This perspective is limited, as many studies have found that even when husbands and wives work outside the home, women remain responsible for a majority of the household chores. Another explanation is the gender role ideology perspective, which states that individuals who perceive men and women as equal embrace a more balanced division of labor. This perspective challenges the previous theories because it demonstrates how certain tasks are not related to time or resources but rather are connected to beliefs about gender roles. 
Greater changes in sharing work are most often driven by husbands’ egalitarian beliefs because wives are unable to influence them to support a balanced division of labor. Finally, the gender construct theory focuses on the roles that society assigns to different genders to explain the division of labor. Wives are expected to be mothers and perform housework while husbands should successfully fulfill the role of family provider
and protector. This theory contends that household chores are assigned based on these gender expectations, and that by performing them as such, men and women perpetuate the division of labor and limit men’s greater participation in housework.

Factors That Affect the Marital Division of Labor
The customary marital division of labor depends on many factors, such as education, employment, the presence of children, race, and socioeconomic status. Patterns in the division of labor are significantly related to husbands’ and wives’ education. Wives with higher levels of education tend to do less domestic work than wives with lower levels of education, while husbands with higher levels of education tend to do more domestic work than husbands with lower levels of education. The marital division of labor also changes with employment. Employed husbands and wives perform fewer household tasks than those who are unemployed. As wives’ employment hours increase, they devote fewer hours to housework. Conversely, husbands with jobs that do not demand long working hours are more likely to help with housework. It is noteworthy that the amount of housework husbands do is increasing, but wives still complete more household chores even when both spouses are working. Women’s contribution to household income also influences domestic work, as wives who earn greater salaries due to higher levels of employment experience a more equal division of labor than do wives with lower salaries. The presence of children further affects the division of labor. When couples have children, the amount of time spent on domestic work increases for both husbands and wives. However, mothers still do more housework than fathers, which results in wives performing more domestic work as the family expands.
Wives are more likely than husbands to interrupt their paid employment when they begin having children, and mothers who do remain in the labor force decrease only the amount of time they spend doing housework, not child care. Due to the imbalance in care, mothers have greater difficulty simultaneously managing their public and private roles. A great amount of variation exists in the marital division of labor across racial and ethnic groups. Black couples in both working-class and middle-class
families embrace a more equal division of labor, perhaps because black women often experience greater power in their marriages than white women. Some evidence also suggests that black husbands complete a greater amount of housework than white husbands, but black wives still perform almost double the amount of housework as black husbands. Black women have the highest labor force participation rates, as they are most likely to be unmarried and must independently provide an income. Conversely, Hispanic couples embrace more traditional divisions of labor than white and black couples because Hispanic husbands largely perceive men as the family breadwinner. Hispanic husbands perform less housework than Hispanic wives, although they still spend more time doing domestic labor than white husbands. Hispanic women have lower labor force participation rates and spend more time doing household labor compared to white and black women. Finally, the division of labor depends on socioeconomic status, which has a greater influence on wives’ domestic work than husbands’. Working-class wives often complete a greater proportion of the housework than do middle- and upper-class wives. As working-class husbands’ dependence on their wives’ income increases, the amount of domestic work the husband completes decreases. The service economy in the United States lightens middle-class white women’s housework because they pay for the domestic services of minority, working-class women. Compared to families with lower socioeconomic status, families with higher socioeconomic status generally embrace more equal gender roles, which results in husbands’ and wives’ greater sharing of household tasks. At the same time, while the amount of domestic work that upper-class wives complete differs, research demonstrates that upper-class husbands do smaller portions of household tasks than lower-class husbands.
Consequences of the Division of Labor The marital division of labor continues to be unbalanced between husbands and wives, and this has different health consequences. For instance, although domestic labor is negatively related to the health of both men and women, this association is stronger for women than for men. Wives who engage in more repetitive household tasks experience higher levels of psychological stress, which can lead to depression and poor physical health. Studies show
that couples who share an equal balance of paid and unpaid work report fewer signs of depression. It is wives’ (but not husbands’) perception of household labor as fairly and equally divided that is associated with their greater psychological well-being and thus better physical health. The greater hours that wives spend on housework may increase their rates of mental and physical illness more than their husbands’. Marital quality is also affected by the division of labor. Wives who feel that housework is unfairly balanced report lower marital quality, while husbands who believe that paid employment distribution is unfair experience reduced marital quality. When husbands perform more domestic tasks, wives’ marital quality increases and husbands’ decreases. At the same time, ideas about gender roles also affect husbands’ marital quality because men who believe in more equal sharing of housework enjoy higher-quality marriages. Overall, dividing the household labor in a way that is perceived by each spouse as equal results in greater marital quality for both husbands and wives.

Shannon Brenneman
Hui Liu
Michigan State University

See Also: Breadwinner-Homemaker Families; Dual-Income Couples/Dual-Earner Families; Gender Roles; Homemaker; Mothers in the Workforce; Stay-at-Home Fathers; Working-Class Families/Working Poor.

Further Readings
Bianchi, Suzanne M., Melissa A. Milkie, Liana C. Sayer, and John P. Robinson. “Is Anyone Doing the Housework? Trends in the Gender Division of Household Labor.” Social Forces, v.79 (2000).
Coltrane, Scott. “Research on Household Labor: Modeling and Measuring the Social Embeddedness of Routine Family Work.” Journal of Marriage and the Family, v.62 (2000).
Shelton, Beth Ann and Daphne John. “The Division of Household Labor.” Annual Review of Sociology, v.22 (1996).
Wight, Vanessa R., Suzanne M. Bianchi, and Bijou R. Hunt.
“Explaining Racial/Ethnic Variation in Partnered Women’s and Men’s Housework: Does One Size Fit All?” Journal of Family Issues, v.34 (2013).

Marketing to and Data Collection on Families/Children

Marketing to, advertising to, and data collection on families and children are widely discussed topics. Marketing is the process of strategically adding value and designing services and goods for a specific target audience. Advertising, on the other hand, is the process of creating and communicating the value of products and services to a target audience. This persuasive and appealing message is disseminated to the target audience as ads and commercials via television, radio, magazines, newspapers, mobile phones, the Internet, and many other channels. To better create these messages, marketing and commercial researchers invest resources annually to collect data and analyze the behaviors and attitudes of families and children. Marketing and commercial researchers are particularly interested in the attitudes, behaviors, and consumption-related knowledge of children, who were first identified as a target market in the 1960s and have grown to become a significant and profitable segment. Children now have their own purchasing power and discretionary income, can influence their parents’ buying decisions, and eventually become adult consumers. Another factor that has led marketers to pay special attention to children is the increase of cable and satellite television channels with specific, narrow audiences. Marketing to children is a lucrative business, and companies spend more than $17 billion annually. Children, in particular, are a captive audience and are exposed to commercial messages and advertising daily. Studies report that an average child watches approximately 25,000 to 40,000 television commercials a year in the United States. In 2006 alone, the Federal Trade Commission (FTC), an independent agency of the U.S. government that promotes consumer protection, reported that 44 food and beverage companies spent $2.1 billion marketing food to youth.
In 2009, 48 companies spent $1.79 billion on youth marketing, of which $1 billion was directed to children ages 2 to 11, $1 billion to teens ages 12 to 17, and $263 million overlapping the two age groups. Digital technology and the introduction of personal devices, such as tablets and smartphones,
have also opened new ways for marketers to reach out to children and their families. Many children and adolescents own their own devices and spend hours visiting gaming sites, using educational and entertaining applications, some of which are also loaded with advertising, or watching videos, especially if they are free. An FTC report indicated that in 2009 approximately $122.5 million was spent in new media marketing to youth. The top three categories directed to children were breakfast cereals, fast food restaurants, and snacks. The top three categories directed to teens were carbonated beverages, candy and frozen desserts, and snacks. Marketing and advertising to children have included television spots, online mobile advertising featuring food and toys, packaging, and cross-promotions with popular movies and TV characters. The tactic of marketing and advertising to children, in many instances, goes beyond purchasing children’s items; it also includes promotion of goods and services used primarily by adults, as in the case of purchasing a bigger minivan, for example, because children demanded more room. Research conducted by the National Restaurant Association showed that nearly seven out of 10 restaurant consumers take into account a restaurant’s family or child friendliness when choosing dining locations. Restaurant marketers believe that consumers’ positive childhood experiences in dining establishments can lead them to become brand loyal as adults, and even to work for those establishments. Some tactics restaurant marketers use to attract families and their children include designing a family night, targeting “mom” specifically, developing a coupon strategy, engaging young guests with activities, and considering digital entertainment.

Marketing and Advertising and the Rights of Children and Families
Historically, there have been growing movements and policies to protect the rights of children and families against predatory marketing and advertising tactics.
The Federal Communication Commission (FCC) and the FTC provide some protection to children from advertising and marketing practices. For example, the FCC has established that commercial television stations and cable operators should limit the amount of advertising to 10 and one-half minutes per hour on weekends and 12 minutes per hour on weekdays during children’s programming, referred
to as commercial limits. The FTC has a long history of attempting to protect children from predatory advertising practices, including an attempt in 1978 to ban television advertising to children too young to understand persuasive content. Other organizations include the Advertising Self-Regulatory Council (ASRC), which provides policies and procedures for advertising industry self-regulation, including the National Advertising Division (NAD), Children’s Advertising Review Unit (CARU), Online Interest-Based Advertising Accountability Program (Accountability Program), and the Advertising Self-Regulatory Program. The underlying principles that drive these organizations are to ensure that children are not exploited with persuasive advertising, some of which they do not understand. The National Advertising Review Council (now the Advertising Self-Regulatory Program), for example, was created in 1974 to promote responsibility in advertising targeted to children. It recognized that children are vulnerable and lack experience and cognitive skills to sort deceptive, unfair, or inappropriate messages, and one of its core principles states that the advertisement should not stimulate children’s unreasonable expectations about product quality or performance. The scope of the self-regulatory program for children’s advertising applies to national advertising directed to children under the age of 12 in all media, as well as online data collection and other online practices that target children under 13. Besides these policy-driven or regulatory agencies, other organizations have emerged to stop commercial exploitation of children, including the Campaign for a Commercial-Free Childhood, the American Psychological Association, the American Academy of Pediatrics, and the World Health Organization, to name a few. They all have called for measures to restrict marketing to children and their families.
The overall sentiment is that marketing to children and youth has contributed to or created many of the health and social issues young people face today. Critics believe that marketing to children has contributed to the obesity epidemic and eating disorders, through the research, marketing, and promotion of sugary foods; encouraged rigid sex and gender identities, with many products selling gender roles and promoting beauty stereotypes; and fostered violence, through the marketing and promotion of video games with violent content. Commercials appeal to children, but their prevalence did not become as strong until the adoption of
television and then cable, which allowed marketers to better target their goods and services. The opportunities to advertise to children further increased with the explosive growth of the Internet, and thousands of children-oriented Web sites started to emerge.

The Advent of the Internet
Today’s children are increasingly drawn to the Internet, and many critics have raised concerns about online advertising to children, who can be easily misled or confused about the purpose and intent of the advertising message, especially in the online environment. A report released in 2009 revealed that the time children ages 2 to 11 spent online had increased 63 percent between 2004 and 2009 in the United States, outpacing the growth in time spent by the total U.S. population. Advertisers, encouraged by the ability of the Internet to reach children, have devised various tactics to appeal to them, including placing entertaining and interactive promotional content on Web sites and social communities; embedding brands in online games; and exposing children and youth to forms of online promotions that do not look like typical advertising. The fact that children might not perceive these practices as advertising has raised concerns among parents and consumer advocates. Studies have already established that online advertising can influence children’s product consumption. In fact, a study conducted in 2003 concluded that online advertising and e-mails from companies often prompted online impulse purchasing. Research by the American Psychological Association showed that children under 8 years of age were unable to comprehend televised advertising messages as commercial speech; instead, they would accept advertising messages as truthful, accurate, and unbiased. Some observers link such advertising to the obesity epidemic and unhealthy eating habits. Food companies have responded to growing criticism and vowed to voluntarily cut advertising directed to children.
Some even went on to vow not to advertise food and beverages on television and Web programs for which children under 12 could be the target audience, except for products that met nutrition criteria.

Data Collection on Children and Their Families
As children and their role in society have become more prominent, so has their participation in research. Marketers are especially concerned
about the effectiveness of their messages and how to continue to design messages to influence consumers. To obtain these data, marketers conduct research using surveys, observations, focus groups, and other methods. When it comes to understanding children, for many years researchers would rely on parents’ reports to understand their attitudes and behaviors. But they are now moving beyond reliance on parents to include children in the data collection process, having them participate in focus groups, interviews, and other methods. A key challenge for researchers collecting information on children is to strike a balance between children’s right to be heard and their right to be protected. Since the 1980s, children have been regarded as a special population. The United Nations Convention on the Rights of the Child (UNCRC) has established a universal code of rights for children’s participation in research. Research participants, adults and children alike, have human rights that are, for the most part, embedded in the research ethics codes. In the United States, the Department of Health, Education and Welfare (HEW) drafted rules for obtaining data and information from research participants and required a consent form describing procedures. These rules eventually came to be referred to as the “Common Rule,” based on three human rights principles: the well-being of the research participant, voluntary participation and informed consent, and the assurance of privacy and confidentiality of the research participant. Although the Common Rule is primarily concerned with social researchers working for the government, federal agencies, charities, and universities, marketing and commercial researchers still have the responsibility to protect the rights of children and their families. As a general rule, research aimed at children should be written in a way that they can understand.
Because of children’s different levels of understanding and comprehension, research materials geared toward children often include films, cartoons, and other visual stimuli. The advent of new technologies has also offered a host of new methods and techniques to observe and collect information on children, particularly on game and social network sites, which raises concerns in regard to participants’ confidentiality and privacy and the ability to compile a reliable and valid sample.

Juliana Maria D. Trammel
Savannah State University

See Also: Advertising and Commercials, Families in; Information Age; Internet; Primary Documents 1990.

Further Readings
Advertising Self-Regulatory Council. “Self-Regulatory Program for Children’s Advertising” (2009). http://www.asrcreviews.org/wp-content/uploads/2012/04/CARU-GUIDELINES-Revised-ASRC-4-3-122.pdf (Accessed August 2013).
American Psychological Association. “Report of the APA Task Force on Advertising and Children.” http://www.apa.org/pubs/info/reports/advertising-children.aspx?item=7 (Accessed September 2013).
Beales, Howard. “Advertising to Kids and the FTC: A Regulatory Retrospective That Advises the Present.” http://www.ftc.gov/speeches/beales/040802adstokids.pdf (Accessed September 2013).
Campaign for a Commercial-Free Childhood. “Marketing to Children Overview.” http://www.commercialfreechildhood.org/resource/marketing-children-overview (Accessed September 2013).
Council of Public Relations Firms. “Diversity Inclusion Resources” (2011). http://prfirms.org/resources/diversity-inclusion-resources (Accessed August 2013).
Fast Food F.a.c.t.s. Food Advertising to Children and Teens Score. http://www.fastfoodmarketing.org (Accessed September 2013).
Federal Trade Commission. “A Review of Food Marketing to Children and Adolescents” (2012). http://www.ftc.gov/os/2012/12/121221foodmarketingreport.pdf (Accessed 2013).
Grunig, James. “Paradigms of Global Public Relations in an Age of Digitalization.” PRism, v.6/2 (2009).
Holtzhausen, Derina. “Postmodern Values in Public Relations.” Journal of Public Relations Research, v.12/1 (2009).
Huh, J. and R. Faber. “Developmental Antecedents to Children’s Response to Online Advertising.” International Journal of Advertising, v.31/4 (2012).
National Restaurant Association. “Manage My Restaurant.” http://www.restaurant.org/Manage-My-Restaurant/Marketing-Sales/Promotion/Marketing-to-families-Promotions-pricing-present (Accessed September 2013).
Pal, Mahuya and Mohan Dutta.
“Public Relations in a Global Context: The Relevance of Critical Modernism as a Theoretical Lens.” Journal of Public Relations Research, v.20.2 (2008).
Poutasse, John D. and T. Jennifer Miller. “Understanding the FCC’s Revised Children’s Television Rules.” http://www.bcfm.com/docs/030407Understanding%20the%20FCC.pdf (Accessed September 2013).
Public Relations Consultants Association. “Maximizing Opportunities: Broadening Access to the PR Industry.” http://www.prca.org.uk/assets/files/Broadening%20access%20to%20the%20PR%20industry.pdf (Accessed August 2013).
Stimson, Sarah. “Why the PR Industry Lacks Diversity” (February 25, 2013). http://careers.theguardian.com/pr-industry-lack-diversity (Accessed August 2013).
Unite. “‘Unfair’ Working Conditions of Parliamentary Interns to Be Discussed at Speaker’s Summit” (October 7, 2009). http://archive.unitetheunion.org/news__events/archived_news_releases/2009_archived_press_releases/_unfair__working_conditions_of.aspx (Accessed August 2013).

Maslow, Abraham

Abraham Maslow was born in 1908 to poor Russian immigrant parents. The oldest of seven children, Maslow was a good student who seemed destined for an academic career. He started college in New York City but received all of his academic degrees in psychology (B.A., M.A., and Ph.D.) from the University of Wisconsin. He began his career as a psychology professor, author, and researcher in New York but spent many years at Brandeis in Massachusetts. The father of two children, he married his first cousin when he was a young man. Although he started his career as a behavioral psychologist, he is best known for his contributions to humanistic psychology. Abraham Maslow theorized that all people are predisposed, motivated, and capable of fulfilling their potential. Humanistic psychology departed from the prevalent American psychology theories that used a pragmatic approach to explain the ways in which people chose to behave. This approach, Maslow argued, neglects the very core of the human experience, failing to take into account the joy, love, and happiness that motivates people to reach the levels of growth and self-actualization critical to the
human experience. Maslow explored the impact of social and emotional aspects of life, looking closely at the processes necessary for individuals to achieve a comprehensive understanding of both themselves and the environments in which they lived. Although adults and children alike face many challenges in American society to move beyond their pragmatic and everyday concerns, it was Maslow’s view that certain rare individuals could become self-actualized as complete human beings. He further hypothesized that family dynamics and relationships were critically important in facilitating or hindering individuals’ potential.

Hierarchy of Needs
Maslow claimed that fulfilling physiological needs is the foundation necessary for motivation, growth, and self-actualization to exist. Maslow’s Hierarchy of Needs outlines a pyramid of needs, both deficiency (physiological, safety, belongingness, love, esteem) and growth (self-actualization, knowing and understanding, aesthetics). This theory postulated that as the lower needs such as safety, belonging, and esteem are met, there is a tendency

for people to strive for new and more cerebral and complex ways of thinking, which is motivated by the human desire to grow and learn throughout life. Maslow’s pyramid illustrating this theory explains that at the foundation of human need is physiological stability, moving higher through the basic needs to a peak of aesthetics, a place in which a person can appreciate and enjoy the beauty of life without worry for the basic needs of survival.

Parental Needs
Within families, the Hierarchy of Needs plays a pivotal role in the well-being of the family as a whole. If the adults’ needs within a household are unmet, those adults are more likely to struggle to meet the needs of their dependents. Humanistic psychology explains that individuals are greater than the sum of their parts. Adults cannot meet needs and wants piecemeal, and they cannot provide a stable family unit able to reach its full potential if aspects of their lives are lacking or unfulfilled. Although Maslow claimed that humans are goal oriented, able to make choices based on their own needs and the needs of those around them, it is ultimately a balance of how to allocate their resources
to reach that potential of self-actualization and stability that creates an environment of prosperity. As the dominant figures in the household, adults must have their own needs for growth and motivation met for their dependents to thrive and reach their own individual potential.

Needs of Children
Adults in the family face the responsibility of providing children with the supports necessary to grow and develop into adulthood. As children’s lower-level needs are met, they become more likely to be motivated to function at higher levels of creativity and self-actualization. Due to their dependency on the family to provide them with their basic needs, children are vulnerable to the atmosphere created within the home. Much of their growth is highly dependent on the adults in the household. In cases of domestic violence, for instance, children lack the safety and security needed to feel motivated to focus on anything but how to become secure enough to get through the day, let alone focus on individual aspirations and aesthetics. The power of ungratified deficiency needs in children becomes the antithesis of childhood growth, often leading to regressive and delinquent behaviors. The family unit is a critical factor when examining the ability of children to attain their basic needs, as well as the opportunity to grow into well-developed, prosperous adults.

Human Potential
Human potential is fostered in an environment that minimizes danger and enhances creativity. Maslow saw human motivation and free will as positive elements of the human experience. When given the opportunity, in an environment that provides both safety and freedom, people will seek further learning and enhance their growth and reach self-actualization. Within American families, pervasive cultural pragmatism may interfere with reaching one’s full potential. According to the 2010 U.S. Census, 46.2 million people in America are living in poverty.
Environmental, financial, and structural hardships often make it difficult for families to operate with all deficiency needs met, creating struggle and imbalance for adults and children alike. While Maslow implored people to be aware of these deficiencies and rise above them, the reality of American life often prevents families from finding the balance between safety and freedom that allows them to reach the ultimate human experience.

Maslow's influence on family professionals and educators peaked in the 1960s and early 1970s. Self-help groups for adults were formed to help them try to reach their full potential, and self-help books for parents (and other adults) were written to assist in raising self-actualized beings. There is some evidence that Maslow's ideas about acceptance, belonging, and esteem contributed to the development of the movement among parents and educators that began in the late 1970s, and that continues today, in which children's self-esteem is a major focus of child rearing and education.

Whitney Szmodis
Lehigh University

See Also: Family Stress Theories; Food Shortages and Hunger; Poverty and Poor Families; Standard of Living.

Further Readings
Maslow, A. H. "Humanistic Psychology." Journal of Humanistic Psychology, v.19/3 (1979).
Maslow, Abraham Harold. "A Theory of Human Motivation." Psychological Review, v.50/4 (1943).
Maslow, Abraham Harold and Richard Lowry. Toward a Psychology of Being. New York: Van Nostrand, 1968.

Masters and Johnson

William Howell Masters and Virginia Eshelman Johnson are recognized as two of the most influential sex researchers of the 20th century. They were the first to use laboratory-based research to study the anatomy and physiology of the human sexual response, and they also studied other controversial topics of their time, such as sexual dysfunction and disorders, sexual bonds between partners, and homosexuality. Masters and Johnson also developed clinical approaches to treat sexual dysfunction based on their research, published numerous books on the topic of human sexuality, and founded the Reproductive Biology Research Foundation in St. Louis, Missouri, which later became the Masters

and Johnson Institute. Their many contributions to the study of human sexuality were chronicled in the television series Masters of Sex, based on the book Masters of Sex: The Life and Times of William Masters and Virginia Johnson, the Couple Who Taught America How to Love, written by biographer Thomas Maier in 2009.

William Masters was born in Cleveland, Ohio, in 1915 and began his career as a gynecologist and faculty member in the School of Medicine at Washington University in 1957. Virginia Johnson was born in Springfield, Missouri, in 1925. She was a student at Washington University when Masters hired her as a research associate, introducing her to what would become her lifelong career in sex research. In 1964, Masters and Johnson founded the Reproductive Biology Research Foundation in St. Louis, Missouri, where they conducted their research discreetly, provided treatment for sexual dysfunction and disorders, and offered training workshops. The couple married in 1971, and in 1973 they became codirectors of the Masters and Johnson Institute. They divorced in 1993 but continued to work together until 1994, when Masters retired and the institute closed. Johnson continued her work independently through the 1990s; she died in 2013. Masters died in 2001 at the age of 85.

Inspired by Alfred Kinsey's groundbreaking reports on sexual behavior in America, Masters and Johnson extended Kinsey's research with their laboratory-based observations and measurements of human physiological sexual responses. They observed and recorded the sexual responses of roughly 700 male and female research participants as they engaged in intercourse or masturbation. Based on data derived from those laboratory observations, in 1966 Masters and Johnson published Human Sexual Response, their revolutionary and best-selling text on human responses to sexual stimulation.
They detailed the human sexual response, which they found to cycle through four phases: excitement, plateau, orgasm, and resolution. Their text also included details of gender differences in physiological sexual responses, including women’s capacity for multiple orgasms and the indistinguishable physiological responses of clitoral and vaginal orgasms. They were also the first researchers to report on the sexual responsiveness of older adults.

Masters and Johnson's research on human sexual response led to a more open discussion of a range of sexual topics, such as reproduction, contraception, sexual interest, and pleasure, and fueled their in-depth investigations into human sexual inadequacy, culminating in another important work published in 1970, Human Sexual Inadequacy. Masters and Johnson believed that effective treatment for sexual inadequacy should include co-therapists and should treat the relationship through work with both partners, even if only one partner exhibited symptoms of sexual dysfunction.

Masters and Johnson then built on their research findings and clinical experience to address elements of the sexual bond in relationships and published The Pleasure Bond in 1974. The researchers emphasized that many couples experience sexual inadequacy and that a fulfilling sex life can positively affect a couple's relationship. Additionally, Masters and Johnson stressed how nonverbal communication, equality, loyalty, and trust play major roles in intimacy and sexual experiences between partners in a relationship.

Although Masters and Johnson contributed to the existing body of human sexuality research in unique and significant ways, some of their work has been criticized. Their laboratory-based research is considered a methodological strength, but it also presents a limitation because, as with all laboratory research, there is the possibility that behavior measured in the laboratory will not generalize to other settings. Masters and Johnson have also been criticized for their sampling practices, which included more than 145 prostitutes and excluded participants who were attracted to same-sex partners from their sexual response research. Both practices may have limited the generalizability of study results. Other researchers have challenged some of Masters and Johnson's views on the female orgasm, but Thomas Maier has raised the most serious criticism of Masters and Johnson's work to date.
In 1979, Masters and Johnson published Homosexuality in Perspective, in which they addressed clinical sexual dysfunction in homosexuals and argued that their “conversion therapy” successfully converted 12 homosexual men and women to heterosexuality. In Masters of Sex, Maier alleges that Masters fabricated the therapy cases and offers support for his position, though Masters continued to defend his “conversion therapy” data until his death.




Masters and Johnson received several awards in recognition of their contributions to human sexuality research and therapy throughout their careers and, although their work fueled controversy over sensitive sexual issues and has been challenged methodologically and ethically, many of their contributions are still acknowledged as highly influential. Continuing their legacy in human sexuality, Virginia Johnson donated a collection of Masters and Johnson's records to the Kinsey Institute library, which currently holds the Masters and Johnson collection of letters, papers, reports, and correspondence.

Brenda J. Guerrero
Ana G. Flores
Amanda Rivas
Our Lady of the Lake University

See Also: Gay and Lesbian Marriage Laws; Hite Report; Kinsey, Alfred (Kinsey Institute); Open Marriages; Polygamy; Same Sex Marriage; Sex Information and Education Council of the United States.

Further Readings
Maier, T. Masters of Sex: The Life and Times of William Masters and Virginia Johnson, the Couple Who Taught America How to Love. New York: Basic Books, 2009.
Masters, William H. and Virginia E. Johnson. Homosexuality in Perspective. Toronto: Bantam Books, 1979.
Masters, William H. and Virginia E. Johnson. Human Sexual Response. Toronto: Bantam Books, 1966.

Maternity Leaves

Maternity leave refers to time that new mothers take off from employed positions upon the birth or adoption of a child. Maternity leave in the United States has developed amid much controversy and struggle. The unpaid status of available leave in the United States continues to be a source of contention among governments, organizations, and women's activists. Further, differences between federal and state laws, employee negotiation skills, and manager support all affect the process of gaining approval to take a maternity leave in the United States.


Maternity leave provides many benefits for mothers and families. Psychologists suggest that there are positive cognitive and socioemotional outcomes for young children who are able to have increased bonding time with their mothers. Perhaps one of the most important benefits of maternity leave is the increased length of time mothers on leave spend breastfeeding their children. There is a strong correlation between the time mothers stop breastfeeding and the time they return to work. Put simply, it is very difficult for employed U.S. mothers to continue breastfeeding. Yet infant health improves dramatically with longer breastfeeding. Infant and child health also improve when children spend more time at home and less time in child care facilities. Maternity leaves provide most women the opportunity to recover from childbirth and to bond with their new children (both biological and adopted). Most women accessing their rights to maternity leave return to their jobs with the same employers after six to 12 weeks of unpaid time off.

History of Maternity Leave
Maternity leaves began in Germany in 1883, when policies about women's confinement arose as part of a new social insurance system. Other countries adopted similar policies, and by 1919, after the International Labour Office hosted the first Maternity Protection Convention, maternity leave practice extended to 33 countries. These early leave policies directed that a woman should not be permitted to work for six weeks after confinement, could leave work six weeks before giving birth with a doctor's note, would be paid, and could take time to nurse her baby upon her return to work. While most industrialized countries implemented leave in this way, some took more time. The United Kingdom implemented paid maternity leaves in 1976, and the United States has yet to implement a paid program. In the United States, two pieces of legislation are important in the development of maternity leave.
First, the Pregnancy Discrimination Act of 1978 (PDA) amended the Civil Rights Act of 1964 by prohibiting discrimination in employment practices against pregnant women or women who left work temporarily to give birth. This act provided six to eight weeks of time away from work to recover from labor and childbirth and promised

job security. Second, the Family and Medical Leave Act of 1993 (FMLA) provides up to 12 weeks of unpaid leave for eligible employees. FMLA does not cover key employees whose absence would cause the employer grievous economic harm, nor does it cover women who work for companies with fewer than 50 employees, who do not work full-time, or who have been in their jobs for less than one year. Currently in the United States, FMLA covers roughly half of employed individuals. Employees not covered by FMLA have no legal right to maternity leave.

Barriers to Maternity Leave Use
Although maternity leaves are fairly common in the United States, a number of barriers influence the use and length of individual leaves. First, many women cannot afford to take three months away from work without pay and may forgo a leave or shorten their time off. This economic hardship affects a broad spectrum of women. Low-wage and middle-class earners typically cannot afford to give up three months of pay. Female breadwinners (single women or married women with stay-at-home or unemployed partners), a growing segment of the U.S. population, also often cannot afford to go three months without pay. Although the loss of pay during the leave period deters many women from taking a maternity leave or from taking the full 12 weeks, the real and perceived threat to mothers' career paths also dissuades mothers from taking full maternity leaves. The "mommy track" refers to the assumption, both real and presumed, that employed women who have children will slow or permanently derail their careers. Employees (both men and women, but mostly women) on the mommy track are viewed as family-first rather than career-first employees and are therefore promoted less and earn less than employees on the "fast track" or an otherwise normal career progression.
Perceptions of the mommy track hurt all employees: women without children who are assumed to intend to have them, mothers, and men in organizations who may desire to take time off work to spend with their families. To employers, taking a maternity leave often signifies a move onto the mommy track, whether the mother intends to slow her career or not. Advocates for women and families

continue attempts to counter narratives about the mommy track and to promote parental leaves and mothers’ subsequent return to employed positions. Sarah Jane Blithe University of Nevada, Reno See Also: Breadwinners; Egalitarian Marriages; Family and Medical Leave Act; Mothers in the Workforce; Parenting. Further Readings Berger, L., J. Hill, and J. Waldfogel. “Maternity Leave, Early Maternal Employment, and Child Health and Development in the U.S.” Economic Journal, v.114 (2005). Buzzanell, P. and M. Liu. “Struggling With Maternity Leave Policies and Practices: A Poststructuralist Feminist Analysis of Gendered Organizing.” Journal of Applied Communication Research, v.33 (2005). Kamerman, S. and P. Moss, eds. The Politics of Parental Leave Policies: Children, Parenting, Gender, and the Labour Market. Bristol, UK: Policy Press, 2011. Meisenbach, R., R. Remke, P. Buzzanell, and M. Liu. “‘They Allowed’: Pentadic Mapping of Women’s Maternity Leave Discourse as Organizational Rhetoric.” Communication Monographs, v.75 (2008).

McDonald's

McDonald's is the largest fast food restaurant chain in the world, employing more than 1.8 million people, and its golden arches are among the most globally recognized symbols of Americana. An iconic American cultural symbol, McDonald's ranks among the top 100 global brands. It holds 19 percent of the fast food market share, operating more than 34,000 restaurants in 118 countries worldwide and serving 68 million customers daily. A true testament to the global reach, importance, and market penetration of McDonald's is the "Big Mac Index" published by The Economist magazine. The index gauges the purchasing power of various currencies by comparing local Big Mac prices in U.S. dollars; the Big Mac was chosen because of its popularity and availability around the world. Although not scientific, the index is a fair measure of the cost of living around the world.



McDonald's started as a small drive-through Bar-B-Que restaurant opened in 1937 by two brothers, Richard and Maurice McDonald. A few years later, they moved the restaurant to San Bernardino, California, and renamed it McDonald Brothers Burger Bar Drive-In. In 1948, after growing dissatisfied with high employee turnover, the brothers decided to change their business model. Richard McDonald designed a new building for the restaurant that he hoped would be simple, memorable, and easy to recognize. The now famous golden arches formed the letter M and were highly visible at night. It became the world's best-known marquee. After three months of renovations, the brothers reopened in December 1948 with a condensed menu consisting of hamburgers, coffee, milk shakes, soft drinks, potato chips, and pies. Their aim was a speedy, high-volume, and low-cost operation that would allow working-class families to eat out at restaurants. After a slow start, the new self-service restaurant concept became hugely successful. After visiting McDonald's in 1954 and seeing the long lines, Ray Kroc, a milk shake mixer salesman, convinced the brothers to sell him the nationwide franchising rights. On April 15, 1955, Kroc opened the first official McDonald's store in Des Plaines, Illinois. Many credit the success of McDonald's to Kroc's business acumen and incredible salesmanship. Referred to as the Henry Ford of the service industry, Kroc believed in streamlining operations while putting emphasis on quality, service, and cleanliness. He acquired full control of McDonald's from the McDonald brothers for $2.7 million in 1961 and began to expand aggressively in the United States, and internationally in 1967. While using the franchising system to build a global infrastructure, he also perfected a restaurant system like no other.
McDonald's offers an efficient system of providing fast, inexpensive food to customers while requiring employees to follow predesigned processes that ensure speed, accuracy, and consistency of products and services, such as food temperature, wait time, and menu display, regardless of location. The success of McDonald's is apparent, as many companies have adopted the McDonald's business model, a model so pervasive that it inspired new terminology: the "McDonaldization of society."

McDonald's and Families
The McDonaldization of the world has had a major impact on the American family. Every day, about


25 percent of the adult population in the United States eats at a fast food restaurant, and 75 percent of the U.S. population lives within three miles of a McDonald's. People in the United States are working longer hours and constantly seek to make their lives easier, their children happier, and their budgets more cost effective. The growing popularity of fast food in general and McDonald's in particular helped transform the U.S. economy, values, eating habits, and nutrition. From the 1980s on, it also contributed to, or is blamed for, the dramatic rise in childhood obesity in the United States due to the higher calories and saturated fat of fast food. Under public and regulatory pressure and the need to be a good corporate citizen, McDonald's adjusted portion sizes, disclosed nutritional information, and discontinued its supersize portions while introducing healthier options such as salads and fresh fruits. Offering more nutritious food is a delicate balancing act because it is more expensive and less appealing to consumers. Shedding its "super-size me" image came at a cost. While McDonald's growth recently stalled, rivals such as Wendy's and Taco Bell successfully introduced best-selling products such as the Pretzel Bacon Cheeseburger and Doritos Locos Tacos. Some industry insiders are concerned that McDonald's is trying to be all things to all people while losing its focus and neglecting its core customers, who place less value on nutrition and more value on price, taste, and speed.

McCafé
Serving close to 3 million cups of coffee a day, McDonald's is a major player in the coffee business. Of its 34,000 stores, 13,900 offer the McCafé concept. Since the inception of the McCafé brand in Melbourne, Australia, in 1993, revenues at stores that added the concept have increased by 7 percent, and the coffee business has more than doubled. It was not until May 2009 that the McCafé signature coffee line joined the McDonald's national menu.
Although a late entrant to the specialty coffee retail business, McCafé enjoys the great infrastructure of the largest restaurant chain in the world and the ease of converting existing McDonald’s stores into McCafé locations. It also recently started selling McCafé packaged coffees in supermarkets. The McDonald’s concept, strong brand name, and infrastructure make McCafé a serious competitor in the coffee business, and as one of the most


recognizable family symbols in the world, second only to Santa Claus, this iconic brand is a formidable competitor in any business segment. Judging by its historical, family, and cultural impact, McDonald's is more than a global brand: it is Americana.

Hagai Gringarten
St. Thomas University

See Also: Mealtime and Family Meals; Obesity; Shopping Centers and Malls.

Further Readings
Haig, Matt. Brand Royalty: How the World's Top 100 Brands Thrive and Survive. Sterling, VA: Kogan Page, 2004.
Jargon, J. "At McDonald's, Salads Just Don't Sell." Wall Street Journal (October 19, 2013). http://search.proquest.com/docview/1442929282?accountid=14129 (Accessed April 2014).
McDonald's Corporation. http://www.mcdonalds.com/us/en/home.html (Accessed April 2014).
Ritzer, George. The McDonaldization of Society 5. Thousand Oaks, CA: Pine Forge Press, 2008.
Schlosser, Eric. Fast Food Nation: The Dark Side of the All-American Meal. Boston: Houghton Mifflin, 2001.

Me Decade

With the rather incongruous message of "but enough about me . . . what do you think about me?" the Me Decade announced its arrival. Coined by author Tom Wolfe in an August 1976 New York magazine article, the Me Decade exemplifies the lifestyle choices that became dominant when baby boomers left behind the social activism highlighted on 1960s college campuses in favor of therapeutic lifestyles of self-aggrandizement. Spurred by a postwar economic boom that lasted from the end of World War II through the early 1970s, the Me Decade abandoned issues of social justice to concentrate on "me." A result of the late-1960s transition from direct action to the introspective alternatives embodied in the psychedelic and hippie movement, the Me Decade signified a shift from the political to

the spiritual. For Wolfe, this mystical moment is expressed best in the 1970s Oriental meditation craze that enrolled thousands, with its promoters often making small fortunes "selling" personal fulfillment. The narcissism that underscores this turn inward became symbolic of a changing social, political, and economic climate that sacrificed traditional values and relationships in favor of "me." With such an emphasis, the Me Decade represents a moment when self-indulgence overrode social responsibility.

A Selfish Response
While the concrete changes of the 1960s should not be underestimated—from civil rights to women's rights to the environmental movement to heightened global awareness—its successes often fall victim to perceived failures. For the baby boomers, a generation born between 1946 and 1964, the 1960s manifested as a reactionary moment when the idealized image of social and political life failed to accord with inequalities that spread from the streets of America to the battlefields of Vietnam. Reacting against the second Red Scare of Joseph McCarthy, the brutal realities of Jim Crow, and the violence of war, 1960s activists conducted radical experiments, often communal in nature and established on principles of social responsibility and self-sacrifice. By the mid-1960s, with frustrations mounting, the hippie counterculture emerged as a utopian alternative to both social activism and postwar American society, but the moment was short-lived, as the Summer of Love (1967) fizzled under the weight of its own inclusivity. As the war dragged on, and as the counterculture experienced its violent dark side in Charles Manson and the killings at the Altamont music festival, a mood of unapologetic hedonism displaced social awareness. In its wake, the Me Decade inaugurated a consumption-oriented culture centered on lifestyle choices promising immediate self-fulfillment.
Political Mistrust, Affluence, and Family
The Me Decade combined an extraordinary rise in the standard of living—a standard the 1960s Left never foresaw under capitalism—with a general sense of malaise regarding political projects and social improvement. Americans in the 1970s grew more disillusioned with American politics and politicians: from Watergate to Richard Nixon's eventual



resignation, politicians proved not only fallible but also morally bankrupt. Such distrust was exacerbated further by an economic downturn that saw a massive spike in inflation and the rationing of gas due to oil embargoes. The dynamics of disposable income, combined with rising unemployment, the continuing erosion of faith in traditional institutions, and a general sense of aimlessness, led members of the Me Decade to turn insular, believing that psychological well-being required focusing on consumer processes (and products) in order to find oneself. An aristocratic luxury bent on excess and vanity, this new American sentiment made "me" the permanent star, ostensibly relegating traditional family values and structures to the past. Second-wave feminism continued to fight for greater rights and power, both in the home and in the workforce. Divorce became a norm as rates pushed above 50 percent, while providing another excuse to talk about "me" during the psychological consultations that often accompany divorce. As the sexual revolution took hold across America, swinging and sex parties became hip, expressing a new theology in which the orgasm, rather than the relationship, signified the high point of heaven. The 1974 publication of The Courage to Divorce perhaps captures this transformation best, as the book encouraged individuals to put their own happiness above that of their partners and children.

Consumer Therapy and Fads of Happiness
As the utopian visions of the 1960s faded into the self-realization lifestyles and therapeutic cults of the 1970s, self-fulfillment itself became the most important cultural aspiration. In the 1960s, the baby boomers rebelled communally against a stultifying cultural atmosphere epitomized in the conformity and paranoia of the 1950s.
By the 1970s, this rebellion found its ultimate expression in Me movements—products often sold, advertised, and consumed under the guise that self-realization is only a purchase or therapy away. The working-class figure gave way to the ever-expanding middle-class "man" who, as Tom Wolfe originally argued, took the money and ran toward anyone or anything that promised self-gratification. From exercise crazes to disco to the proliferation of self-help therapies and programs (such as Werner Erhard's Erhard Seminars Training, or EST), the

Me Decade

851

effort to "find oneself" included an endless train of spiritual fads. Conspicuous consumption pervaded American life, indicating a transformation in the economic character of Americans; regardless of actual economic health, many Americans believed themselves to be part of an aristocratic renaissance in which purchasing power promised personal salvation. Items such as the "pet rock," which was no more than a common rock sold along with accessories and a guidebook, and the mood ring, which represented an inauthentic attempt to display one's inner feelings outwardly, embody the insane consumerism and superficial excesses of the Me Decade.

The Third Great Awakening
The Me Decade describes a moment when Americans gave up trying to perfect the world and instead sought to perfect themselves. For Tom Wolfe, this move toward introspection signaled a new theology that transformed America's religious climate from doctrines and institutions to the immediacy of self-experience. This Third Great Awakening signified the coming together of piety and therapeutic encounter groups; it combined the massaging of one's ego with new religious movements founded upon the correlation between self-fulfillment and divine wisdom. Within this Third Great Awakening the idol was "me," and the aim was feeling good. Therapy, self-help programs, and wild expressions of individualism—possibly captured best in the streaking craze that overtook the 1970s—replaced the repression that dominated the 1950s and situated 1960s activism.
Practices such as the encounter groups (or “lemon sessions”) of the Esalen Institute, which were designed to lay bare one’s soul by having peers strip away one another’s defensive facades, became standard procedure: It provided an excuse to declare unequivocally that it was not only OK but healthy to “talk about me.” Making “me” the star on the stage of life offered each individual an experience of universal significance—it signaled a religious moment that celebrates the individual soul as a spark of God’s divine light. The apex of this religiosity occurred as New Lefters abandoned chanting slogans for chanting mantras, ultimately suggesting that what makes the Me Decade so intriguing is its capacity to meld together a sentiment of pure narcissism with a perspective of divine righteousness. As the

852

Mead, Margaret

baby boomer generation shifted from street theater to self-help, from political activism to psychological analysis and spiritual renewal, America experienced a decade of egotistical navel gazing and flippant hedonism marked, as Wolfe originally charted, by the sanctification of "me" as America's new crucible.

Morgan Shipley
Michigan State University

See Also: Baby Boom Generation; Divorce and Separation; Individualism; Social History of American Families: 1961 to 1980.

Further Readings
Bailey, Beth and David Farber, eds. America in the Seventies. Lawrence: University Press of Kansas, 2004.
Frum, David. How We Got Here: The 70s, the Decade That Brought You Modern Life—For Better or Worse. New York: Basic Books, 2000.
Jenkins, Philip. Decade of Nightmares: The End of the Sixties and the Making of Eighties America. New York: Oxford University Press, 2006.
Kent, Stephen. From Slogans to Mantras: Social Protest and Religious Conversion in the Late Vietnam Era. Syracuse, NY: Syracuse University Press, 2001.
Lasch, Christopher. The Culture of Narcissism: American Life in an Age of Diminishing Expectations. New York: Norton, 1991.

Mead, Margaret

Margaret Mead (1901–78) is considered by many to be the most famous anthropologist of the 20th century. As an anthropologist, she is considered a cultural determinist, emphasizing the role of culture (nurture) over heredity (nature) in determining what constituted a family and the different roles family members took on. She emphasized how the social environment and culture in which a child is raised, rather than the child's race or instinct, affect the type of person the child becomes. Mead's long career, which included writing 34 books, among them A Rap on Race, cowritten with James Baldwin, often bridged the gap between

academic and popular writing. Besides her many publications, she was an accomplished public speaker and served as president of the American Anthropological Association in 1960. From 1961 to 1978 she wrote a column in the popular magazine Redbook dealing with a variety of issues, including race, family, homosexuality, abortion, and communes. She deplored the decline of the extended family, the anonymity of city life, and the generation gap. In 1969, Time magazine named her Mother of the Year.

Mead studied under Franz Boas, considered the father of American anthropology, at Columbia University. Upon receiving her master's degree she went for nine months to Samoa in the Pacific Ocean to study female adolescents. The popular version of her account of that study, published in 1928 as Coming of Age in Samoa: A Psychological Study of Primitive Youth for Western Civilization, quickly became a bestseller, due in large part to her comparison of Samoan extended families and their child-rearing practices with nuclear families and child-rearing practices in the United States; she found strengths and weaknesses in both. In 1929 she received her Ph.D. in anthropology from Columbia.

Mead wrote about how she went to Samoa and other faraway places to learn more about other human beings, who she maintained were like ourselves in every way except their culture. She followed up her study of Samoans with another study that she wrote about in another popular book, Growing Up in New Guinea. Here she compared and contrasted child rearing and families there with both American and Samoan practices. She again emphasized the malleability of human nature as shaped and formed by cultural tradition.
In 2009, Paul Shankman documented how the anthropologist Derek Freeman in his well-publicized attacks on Mead’s scholarship after her death misrepresented her work, and noted how Mead’s work was criticized by proponents of the role of heredity/nature in shaping human personality and behavior as opposed to those who saw culture/ nurture as the major force. Mead and other anthropologists recognized how what constitutes a family and the roles that various family members play is largely determined by the child-rearing/enculturation practices of a particular group. Critical factors in shaping children include how they are disciplined and rewarded, who does that disciplining



Mealtime and Family Meals

853

Mead saw her study of people in faraway places as helping Americans understand themselves better. In her autobiography, she wrote the following: I think we shall continue to value diversity and to believe that the family—perhaps more widely assisted by grandparents, aunts and uncles, neighbors and friends and supplemented by more varied experience in other settings—provides the context in which children are best reared to become full human beings. Jon Reyhner Northern Arizona University See Also: Extended Families; Multigenerational Households; Spock, Benjamin.

Many consider Margaret Mead to be the most famous anthropologist of the 20th century. She emphasized how the social environment in which a child is raised, rather than race or instinct, affects the type of person that child becomes.

and rewarding, and what kinds of experiences they are exposed to, including how much freedom males and females are given to explore the world they live in. Shankman asserted that Mead in Coming of Age in Samoa was the first American anthropologist to use enthnographic data from another culture to critique American society. Mead’s life contrasts somewhat with the cultural determinism she documented in her work. She was raised by educated parents whom she describes in her autobiography as treating her like an adult and encouraging her to think for herself. Instead of following the dominant American cultural pattern of her time, she was an early feminist and retained her maiden name through three marriages and divorces. She was an advocate for civil rights and supported causes such as child nutrition legislation and the legalization of marijuana, including testifying before Congress in 1969. Dr. Benjamin Spock was her pediatrician, and her attitude toward child rearing—such as letting babies feed when they are hungry, contrary to most pediatricians’ advice at the time—became part of his popular book, The Common Sense of Baby and Child Care.

Further Readings
Freeman, Derek. Margaret Mead and Samoa: The Making and Unmaking of an Anthropological Myth. Cambridge, MA: Harvard University Press, 1983.
Lutkehaus, Nancy. Margaret Mead: The Making of an American Icon. Princeton, NJ: Princeton University Press, 2008.
Mead, Margaret. Blackberry Winter: My Earlier Years. New York: William Morrow, 1972.
Mead, Margaret. Coming of Age in Samoa: A Psychological Study of Primitive Youth for Western Civilization. New York: Harper Perennial, 2001.
Mead, Margaret. Growing Up in New Guinea: A Comparative Study of Primitive Education. New York: William Morrow, 1930.
Shankman, Paul. The Trashing of Margaret Mead: Anatomy of an Anthropological Controversy. Madison: University of Wisconsin Press, 2009.

Mealtime and Family Meals

Family meals have become an important symbol in the conception of the American family. Meals are one of the few activities in which every person partakes, but there is a large degree of variability in the way people engage in mealtime. Adages such as "the family that eats together stays together" highlight the


idea that shared meals can facilitate familial unity and create important traditions. Family meals also reflect cultural norms and traditions and can serve to socialize children.

Family meals can vary in several ways: the preparation and form of the food, the expectations of behavior of the family members, and the degree to which family members interact or share the experience. For some families, meals are the only consistent activity in which family members share and engage with each other without distraction. For others, meals can be shared but include other activities, such as the typical "TV dinner." Other families may eat at different times or even eat different food, having completely separate mealtime experiences.

Mealtimes can serve the family in numerous ways, including as a means to reaffirm cultural identities, values, and ideals. Mealtimes are also a way to socialize children and teach roles and norms. During mealtimes, parents model, monitor, and critique their children's behavior. Meals become a family tradition; each family member has expectations of what occurs at mealtime but chooses how to behave in order to reflect the current feelings or issues he or she has in the family. Thus, family meals can become grounds to reinforce behaviors or work out conflicts.

The stereotypical idea of the proper American meal usually features a homemade meal prepared by the mother of the family. Children contribute by setting the table. During the meal, the family sits at a table to eat and converse with each other about the day's events or various family issues. Although this concept seems integral to the American family, this model of family meals did not exist until the mid-19th century. Prior to that time, many families did not have a table and instead ate more sporadically throughout the day. Agrarian families in particular interacted with each other throughout the day, so there was no need to designate a time of day to spend with each other.
The concept of mealtime emerged only when urbanization and industrialization separated the private from the public sphere. Formal shared family meals began with the Victorian middle class in the late 19th century, and the practice soon became a symbol of achieving middle-class status. Factors related to poverty prevented disadvantaged families from partaking in such meals, thus reinforcing the concept of family meals as something desirable to achieve. By the 20th century, experts in various fields advocated family mealtimes as a way to facilitate child development, improve nutrition, and enrich familial relationships. It was only in the 1950s, however, that the majority of American families acquired the means to practice shared family meals.

Although by the 1950s the standard of living enabled most families to engage in family dinners, subsequent changes to American families, such as women entering the workforce and the rise of extracurricular activities for children, altered family mealtimes, making it harder for family members to share dinner with one another. Despite these changes, the 2011–12 National Survey of Children's Health (NSCH) indicated that among children ages 0 to 17, 46.7 percent eat dinner with all family members every day, 31.7 percent do so four to six days per week, 18.1 percent do so one to three times per week, and only 3.5 percent do not eat dinner with their families. These data are fairly stable across ethnicities and socioeconomic statuses. Hispanic households had slightly higher averages of meals spent together, while African American households had slightly lower averages. Despite common perceptions, households of lower socioeconomic status had more meals together than those of higher socioeconomic status.

These trends are in part due to the variability of family meals. Although the majority of families share meals together, these meals are not served in the formal conditions modeled by the Victorians. Although mothers often remain responsible for preparing the meal, families now frequently eat at restaurants or have take-out food. In addition, many families watch television while eating dinner. Families also display a variety of expectations of interaction; some families maintain structured meals, whereas others are comfortable with conflicts and arguments during the meal. These expectations can reflect cultural differences and contribute to the acculturation of the children.

Rachel T. Beldner
University of Wisconsin–Madison
Janice Elizabeth Jones
Cardinal Stritch University

See Also: Acculturation; Ethnic Food; Family Consumption; Rituals.

Further Readings
Data Resource Center for Child and Adolescent Health. "National Survey of Children's Health Data" (2011–12). http://www.childhealthdata.org/learn/NSCH (Accessed April 2014).
Fiese, B. H. Family Routines and Rituals. New Haven, CT: Yale University Press, 2006.
Larson, R., A. Wiley, and K. Branscomb, eds. Family Meals as Contexts of Development and Socialization. San Francisco: Jossey-Bass, 2006.

Medicaid

Medicaid is a social welfare program introduced in 1965 as Title XIX of the Social Security Act. It was intended to help states provide medical coverage for low-income families. In essence, it is a federal–state matching program that pays for the medical treatment of certain low-income families and individuals. In contrast to Medicare, Medicaid is a means-tested medical social assistance program, and Medicaid programs vary considerably from state to state. Nevertheless, it is an important part of the American welfare state because it serves as the nation's primary source of health insurance coverage for low-income populations.

The legislative reforms from the beginning of Medicaid in 1965 until the 2010 Obamacare reform show a diverse picture. During the first years, the program was expanded. The 1980s saw severe cuts to the welfare system. The 1990s and 2000s were characterized by two attempts (during the Bill Clinton and Barack Obama administrations) to reform the American health care system fundamentally. The latest reform signals the future direction of the American health care system.

Structure and Eligibility
The medical coverage program for low-income families is administered by the states, and its structure differs considerably among them because every state establishes its own eligibility criteria and determines the type, amount, duration, and scope of Medicaid services and benefits. Accordingly, coverage differs across the states. However, states must cover certain basic services to receive federal grants. Generally, inpatient hospital services, outpatient hospital services, prenatal care, vaccines for children, family planning services, pediatric services, midwife services, ambulatory services, and screening diagnostics are included. States may

receive additional funds for providing optional services (e.g., rehabilitation or physical therapy).

The program does not provide medical assistance to all low-income families. To be eligible, people have to meet strict eligibility criteria. Pregnant women can receive Medicaid assistance if their family income is below 133 percent of the federal poverty line; however, the service is limited to care related to pregnancy. Additionally, recipients of Supplemental Security Income (SSI), all children younger than 19 years of age who live in families below the federal poverty line, and poor Medicare beneficiaries are eligible.

Legislative Development of Medicaid
Even before 1965, poor people received medical treatment: state, county, and municipal governments funded medical services for the poor. However, the costs increased steadily, and many poor people had no access to those services. To address the problem, Medicaid was enacted in 1965 to meet the medical needs of welfare recipients and "medically indigent persons" who, although not destitute, could not pay their medical bills. Many legislators assumed that Medicaid would be a small program focused on the needs of AFDC (Aid to Families With Dependent Children) recipients.

President Jimmy Carter expanded Medicaid and other welfare programs. Under the 1965 legislation, the federal government was required to screen and treat low-income children; however, compliance was inadequate, and screened children were often not helped even when serious health problems were found. Carter was determined to fund screening and health treatments for low-income children and health care assistance for pregnant women. As a consequence, Medicaid spending increased rapidly: federal and state funding rose from $10.9 billion in 1970 to $25.4 billion in 1980. This immense increase laid the groundwork for the cutbacks of the 1980s, when President Ronald Reagan cut the federal share of Medicaid spending in the Omnibus Budget Reconciliation Act (OBRA).
The law offered states incentives to reduce the growth rates of Medicaid. Reagan encouraged the states to offer contracts for Medicaid only to those hospitals and clinics that offered low bids. The consequences for many American families were sweeping. Medicaid recipients could use even fewer providers, and inequalities in the health care system increased. Patients on Medicaid could obtain treatment only in inner-city clinics. At that

856

Medicaid

time, many physicians did not treat poor Medicaid recipients because of financial interests—they received larger fees from other insurances. By the mid-1980s, roughly 30 million Americans had no health insurance at all or were ineligible for Medicaid or Medicare. In 1992 about 40 million Americans had no private health insurance or access to Medicare or Medicaid. Nevertheless, the years after the Reagan reforms can be characterized as a period without any significant legislative reform. President Bill Clinton attempted to implement a comprehensive health care reform. However, there was immense pressure to cut entitlements such as AFDC and Medicaid. Clinton compiled health care reform in the early 1990s that emphasized universal coverage. To do this, he relied on a market-driven system with private insurances instead of a governmental single-payer system. For Medicaid recipients, his proposal implied that the program would be restrained. However, recipients could have joined competing plans by regional health care alliances. For Medicare and Medicaid recipients (compared to workers), reform would not have been that far reaching. Many researchers argue that, despite the need for an overhaul of the system, the time was not ripe for comprehensive health care reform. The welfare reform in 1996 made extensive changes in the welfare system. However, most Medicaid regulations remained intact. Nonetheless, Clinton gave the states the option of terminating legal immigrants from Medicaid. A deteriorating quality of medical provision for immigrants was the consequence. In the following years, spending on Medicaid continued to rise. The combined costs of social security, Medicare, and Medicaid reached 66 percent of gross domestic product (GDP) in 2000. Politicians called for a general overhaul of the American health care system. While President Barack Obama promised to back health care reform, he remained vague about the details. 
In 2010, 45 million Americans were uninsured or had only poor-quality coverage. After a long legislative fight, the Patient Protection and Affordable Care Act (Obamacare) was passed in the same year. The primary aim was to extend health care coverage to 32 million Americans by 2014. The reform aimed at ending abuses by private insurance companies and increasing competition among private insurance plans. Nevertheless, the reform left the main structures of Medicaid and Medicare intact.

Another aim of the reform was to establish general, wide-ranging health care coverage for all Americans, and Obama included Medicaid reform in his goals. On one hand, he expressed the intention of expanding clinics in underserved areas; on the other hand, the reform aimed at increasing Medicaid coverage, which was expected to extend to roughly 17 million additional low-income individuals by 2014. Obama's intention was to standardize eligibility for the program: individuals in all states with annual incomes up to 133 percent of the federal poverty line (at the time, $14,856 or less) should have been able to enroll, with the goal of providing coverage to up to 21.3 million poor Americans. The reform also provided that if a state refused to expand coverage, it would lose all its Medicaid funding; this was meant as a protection to ensure that states supported their poorest equally. In 2012, however, the Supreme Court ruled on the Obamacare Medicaid expansion and gave the states the opportunity to opt out of it. As in the past, eligibility differs from state to state, and the states decide whether to expand Medicaid. As of this writing, 28 states planned to expand their Medicaid coverage (including Washington, Oregon, California, Arizona, North Dakota, Minnesota, Colorado, and New Mexico), whereas 23 did not plan to participate (including Alaska, Nebraska, Montana, Idaho, Michigan, Pennsylvania, Georgia, and Mississippi). The coming years will reveal more about the results of the Obamacare reform with respect to Medicaid and the future of the program.

Major Criticism
The Medicaid program is a central element of health coverage for low-income individuals and families, allowing them to obtain basic medical treatment. Most of these needy people are not able to afford private health insurance and are dependent on Medicaid. Supporters of Medicaid point to the growing participation in the program as evidence of its importance.
Nevertheless, most people receiving Medicaid are confronted with severe problems. They receive only basic medical coverage and the most necessary treatments, and for the most part they face poor-quality and insufficient medical provision. The short-term and long-term consequences of this shortage are not yet known. In this respect, Medicaid is often criticized for being expensive and ineffective: the costs of the program have grown constantly and are comparatively high, yet many poor Americans cannot afford any other, private, health care insurance.

A common point of criticism is that Medicaid (like other social assistance programs) fosters welfare dependency and creates the wrong incentives; some have stressed that Medicaid promotes an unhealthy dependence on government. It is also often criticized for fraud and abuse. There have been various efforts to reduce these; however, most of the attempts have not been successful. Some conservative critics argue that the structure of the program is the basic source of this problem, because the program pays the vast majority of its money to health care providers as reimbursement for health care costs. Another well-known problem is the rising cost of Medicaid, which is expected to continue increasing in the coming years. There have been suggestions to transform Medicaid into a system of direct aid to recipients by introducing vouchers or refundable tax credits. While the main aim would be to end inefficiency and dependency, others argue that such a change would strengthen the stigmatizing effects of Medicaid utilization.

Michaela Schulze
University of Siegen

See Also: Medicare; Poverty and Poor Families; Poverty Line; War on Poverty; Welfare; Welfare Reform.

Further Readings
Bambra, Clare. "Medicare and Medicaid." In International Encyclopedia of Social Policy, Tony Fitzpatrick et al., eds. New York: Routledge, 2006.
Edwards, Chris. "Medicaid Reforms." http://www.downsizinggovernment.org/sites/downsizinggovernment.org/files/pdf/hhs-medicaid-reforms.pdf (Accessed July 2013).
Grabowski, David C. "Medicare and Medicaid: Conflicting Incentives for Long-Term Care." Milbank Quarterly, v.85/4 (2007).
Grogan, Colleen M. and Eric M. Patashnik. "Universalism Within Targeting: Nursing Home Care, the Middle Class, and the Politics of the Medicaid Program." Social Service Review, v.77/1 (2003).


Jansson, Bruce S. The Reluctant Welfare State: Engaging History to Advance Social Work Practice in Contemporary Society, 7th ed. Upper Saddle River, NJ: Cengage Learning, 2012.
Ku, Leighton and Sheetal Matani. "Left Out: Immigrants' Access to Health Care and Insurance." Health Affairs, v.20/1 (2001).
ObamaCareFacts. "ObamaCare Medicaid Expansion." http://obamacarefacts.com/obamacares-medicaid-expansion.php (Accessed July 2013).
Scheppach, Raymond C. "The State Health Agenda: Austerity, Efficiency, and Monitoring the Emerging Market." In The Future of the U.S. Healthcare System: Who Will Care for the Poor and Uninsured? Stuart H. Altman et al., eds. Chicago: Health Administration Press, 1998.
Skocpol, Theda. Social Policy in the United States: Future Possibilities and Historical Perspective. Princeton, NJ: Princeton University Press, 1995.
Slessarev, Helene. "Racial Tensions and Institutional Support: Social Programs During a Period of Retrenchment." In The Politics of Social Policy in the United States, Margaret Weir, Ann Shola Orloff, and Theda Skocpol, eds. Princeton, NJ: Princeton University Press, 1988.

Medicare

Medicare is a pivotal element of the American social security system. It was created in 1965 as Title XVIII of the Social Security Act, titled Health Insurance for the Aged and Disabled. It reflects the expectation of most people that government bears an obligation to care for the elderly (ages 65 or older) and disabled. Medicare falls under the authority of the Department of Health and Human Services (DHHS) and is administered by the Centers for Medicare and Medicaid Services. Against the background of demographic change and the aging of the American population, it is not surprising that the program has been expanded several times since 1965, and that more people are covered by Medicare: in 1970, 10 percent of the population were covered, whereas in 2009, 14 percent were. This implies that Medicare is becoming a more expensive burden for the federal government to carry. Compared to other countries


in the Organisation for Economic Co-operation and Development, spending on health care is extremely high in the United States (in both absolute and relative terms). This cost explosion of Medicare has been accompanied by continuing criticism. However, the American government has thus far met its responsibility to care for the elderly. While it is far too early to judge the failure or success of the latest health care reform, the legislative development of the program shows significant changes in the system.

Structure of Medicare
Originally, Medicare consisted of two parts (part A and part B). However, the program was expanded several times, and as a result Medicare is now divided into four general parts that can be combined in several ways.

Part A is the basic hospital insurance. It provides coverage of hospital inpatient care and hospice care (selected services for elderly persons). Part A is the mandatory part of Medicare and is mainly financed by a payroll tax on workers and employers.

Part B is the supplementary medical insurance; it provides coverage for some nonphysician services such as diagnostic tests, ambulance service, or flu vaccinations. In contrast to part A, part B is a voluntary element of Medicare. It is financed by a combination of monthly premiums paid by elderly persons and funds from the general revenues of the federal government.

Part C is an expansion of the original Medicare program and was introduced in 1997. Also known as Medicare Advantage, it consists of different voluntary plan options (such as coordinated care plans or medical savings account plans) that the insured elderly can buy. It is run by Medicare-approved private health insurance companies and was created to provide more benefits and services.

Part D of Medicare was introduced in 2006 as part of the Medicare Modernization Act and is called Prescription Drug Coverage.
Anyone eligible for parts A and B is also eligible for part D, which is intended to help cover the costs of prescription drugs. Part D is likewise run by Medicare-approved private insurance companies. However, Medicare part D differs from the other elements: to get benefits, a person with Medicare has to enroll in a stand-alone prescription drug plan (PDP) or a Medicare Advantage plan (MAPD) that covers the costs of prescription drugs. Unlike parts A and B, part D is not standardized.

Secretary of Health and Human Services Kathleen Sebelius visited the Arthur Capper senior apartment building in Washington, D.C., to talk to residents about Medicare coverage and their opportunities under open enrollment.

Which drugs or classes of drugs are covered depends on the plan. A person on Medicare can decide between two coverage options. On one hand, a person can choose traditional Medicare with parts A (hospital insurance), B (medical insurance), and D (prescription drug coverage), adding a Medicare supplement insurance policy if supplemental coverage is needed. On the other hand, the insured person can choose a Medicare Advantage Plan (part C), which covers parts A, B, and D.

Legislative Development of Medicare
The complex structure of Medicare has its origin in the different legislative steps taken since its enactment in 1965. The program was expanded several times, during which the number of persons covered by the program grew continuously. In recent decades, however, private insurance companies have come to administer a large part of the Medicare program.

Historically, market competition has been the dominant coordinating mechanism of the American health care system, with a strong emphasis on the relevance of private actors in financing, service provision, and the regulation of health care. In the 1960s, political discussion about health care for the elderly increased. President John F. Kennedy wanted to install centralized health care insurance, but the debate on Medicare was marked by frustration among the political actors: Republicans and Democrats did not agree on the details of the program. Finally, in 1965, a compromise incorporating proposals from both parties was found. The proposed Medicare program was divided into part A and part B, and the program was launched during the period of the Great Society, a series of U.S. political programs launched by President Lyndon B. Johnson.

However, some scholars have stressed that Medicare was part of the unfinished agenda of the New Deal legislation. Originally, health care was a component of the Social Security Act (1935); the Roosevelt administration decided to drop those provisions because of the vehement opposition to health insurance that threatened the entire act. Medicare can thus be seen as a health policy milestone, a change that manifests itself in the role of the state in financing the program. Other researchers argue that the program was a blessing to many elderly American families: because the legislation made eligibility automatic on payment of payroll taxes and premiums, elderly persons were able to receive assistance without the stigma of a means test. While the program was introduced to support the elderly, the Medicare revision of 1973 extended coverage to nonelderly persons with kidney disease.
Medicare was further amended in 1980, when the Medicare Secondary Payer Act made Medicare the secondary payer for beneficiaries with other applicable insurance coverage. Nevertheless, the 1980s were a turning point for American social policy. President Ronald Reagan attempted to implement diverse welfare state cutbacks. Primarily, he wanted to cut social assistance benefits (especially Aid to Families With Dependent Children [AFDC]), which is why Medicare escaped the cuts; Reagan was also concerned about antagonizing the powerful voting bloc of the elderly. Nevertheless, in the late 1980s, a funding emergency was encountered in part A of Medicare, prompting the Medicare Catastrophic Coverage Act.

During the 1970s and 1980s, outlays for hospital bills had increased faster than payroll revenues; federal expenditures increased from $15.2 billion in 1970 to $38.3 billion in 1980. The reasons for this development are manifold. On one hand, the population is aging; on the other hand, more technology is being used to treat the elderly. Furthermore, federal authorities had not been able to prevent doctors and hospitals from charging excessive fees and providing unnecessary treatment. As a consequence, Congress established national levels of payment for 467 specific diagnoses: federal authorities paid hospitals a fixed fee no matter how long a patient stayed in the hospital or whether complications developed.

The results of this development can be summarized as follows. During the 1970s and 1980s, the health conditions of the elderly and disabled improved, and the poverty rate of the elderly plummeted from 35 percent in 1959 to 12.8 percent in 1998. Nevertheless, the elderly paid 50 percent of their own medical costs for home health care and long-term care. Some argue that Medicare emphasizes short-term and hospital-based care; for such long-term services, according to other researchers, Medicare offered scant coverage. As a consequence, during this time many elderly lived under catastrophic health conditions and had to divest themselves of their savings to obtain benefits from means-tested Medicaid. Simultaneously, the private insurance and health markets grew rapidly, and expenditures for private health insurance rose significantly.

President Bill Clinton wanted to implement a universal health care program for all Americans. In the early 1990s, 40 million Americans had no or only insufficient health care coverage.
In addition, health care services were extremely costly, absorbing 15 percent of the gross national product. Clinton intended to introduce basic coverage such as preventive services, drugs, and curative services. As a quid pro quo, the president also wanted to cut Medicare and Medicaid, allowing their enrollees to join competing plans offered by regional health care alliances. Clinton attempted to implement his health care reform plan in 1993 and 1994, but he was unable to get the legislation passed by Congress.


After the defeat of this health care reform attempt, it took some time to establish the basis for new reforms. Finally, in 2003, Congress passed the Medicare Prescription Drug, Improvement, and Modernization Act, which added a pharmaceutical drug benefit to Medicare.

A comprehensive reform of American health care was passed in 2010. President Barack Obama had promised to back a health care reform plan but was vague about the details. After a long legislative fight, the Patient Protection and Affordable Care Act (ACA) was passed in 2010, making a number of changes to the Medicare program. According to the White House, the reform has strengthened Medicare by adding new benefits, fighting fraud, and improving care for patients. Several reasons are listed: free preventive services such as flu shots and diabetes screenings were added, and the new law purported to fight fraud and abuse by introducing tougher screening procedures, stronger penalties, and new technology. The most important stated goal was to save taxpayers' money by lowering the costs of prescription drugs. Improvements in the conditions and quality of care were also stressed; to this end, the Center for Medicare and Medicaid Innovation was established to test and support innovative health care models.

Major Criticism
Despite the expansion of Medicare, more and more actors have criticized the program for various reasons. It is often argued that the expansion of Medicare will lead to a shift away from personal responsibility and toward the view that health care is an unearned entitlement to be provided at others' expense; at times, it is even argued that it will result in a socialistic America. However, others counter that Medicare is not an unearned entitlement, because the elderly have to pay contributions into the Medicare fund.

The high costs and uneven quality of health care have also been criticized. Compared to other countries, U.S.
health care services are extremely costly and most people are not satisfied with the provision. The most vulnerable groups involved are low-income retired families. Many health care costs are not covered, and it is thus difficult to prevent medical insolvency. Medicare still does not provide aid to people with chronic conditions and does not reimburse out-ofhospital nursing home care. Medicare is also not

available for people under age 65 and is therefore not an option for people forced into early retirement because of health reasons. Medicare remains a program that is concentrated on short-term and hospital-based care. Still, the federal government recognizes its responsibility to care for the elderly. The expansion of the program shows that it has become a pivotal element of the American welfare state and is of special importance for older families. Michaela Schulze University of Siegen See Also: Demographic Changes: Aging of America; Medicaid; Social Security. Further Readings Bambra, Clare. “Medicare and Medicaid.” In International Encyclopedia of Social Policy, Tony Fitzpatrick et al., eds. New York: Routledge, 2006. Brook, Yaron. “Why Are We Moving Toward Socialized Medicine?” (2009). http://www.aynrand.org/site /News2?page=NewsArticle&id=23957&news_iv _ctrl=2402 (Accessed July 2013). Cacace, Mirella. “The U.S. Health Insurance System: Hierarchization With and Without the State.” In The State and Healthcare: Comparing OECD Countries, Heinz Rothgang et al., eds. New York: Palgrave Macmillan, 2010. Department of Health and Human Services. “A Quick Look at Medicare.” http://www.medicare.gov/Pubs /pdf/11514.pdf (Accessed July 2013). Grabowski, David C. “Medicare and Medicaid: Conflicting Incentives for Long-Term Care.” Milbank Quarterly, v.85/4 (2007). Hacker, Jacob S. The Divided Welfare State: The Battle Over Public and Private Social Benefits in the United States. Cambridge: Cambridge University Press, 2002. Jansson, Bruce S. The Reluctant Welfare State: Engaging History to Advanced Social Work Practice in Contemporary Society, 7th ed. Upper Saddle River, NJ: Cengage Learning, 2012. Skocpol, Theda S. Social Policy in the United States: Future Possibilities and Historical Perspective. Princeton, NJ: Princeton University Press, 1995. Slessarev, Helene. 
“Racial Tensions and Institutional Support: Social Programs During a Period of Retrenchment.” In The Politics of Social Policy in

the United States, Margaret Weir, Ann Shola Orloff, and Theda Skocpol, eds. Princeton, NJ: Princeton University Press, 1988. White House. “Strengthening Medicare.” http://www .whitehouse.gov/healthreform/healthcare-overview #medicare (Accessed July 2013).

Melting Pot Metaphor

In her book Battle Hymn of the Tiger Mother, Amy Chua describes how she raised her two daughters using traditional Chinese child-rearing practices. Chua illustrates her strict parenting style by listing the activities her daughters were not allowed to participate in, such as attending sleepover parties, and by emphasizing the high academic standards expected of her children, such as earning As in school. She goes on to describe differences between typical Western parenting styles and her own. For example, she asserts that Western parents would make their children practice an instrument for only 30 to 60 minutes, whereas she insisted that her children practice for two to three hours a day.

Chua's style represents a refusal to adhere to the "melting pot" model of assimilation, in which immigrants are expected to adopt the practices of the dominant culture and give up most of the traditions of their home country. Her refusal to adopt mainstream Western child-rearing practices was highly criticized in the popular media, making explicitly clear that her values and practices were inconsistent with those of the dominant culture. This approach to assimilation affects family dynamics and courtship practices in many ways.

Melting Pot Model
The melting pot analogy was originally used in the 18th century to symbolize one way in which a society may function when its culture becomes infused with a number of immigrant cultures. The term was popularized in the United States when Israel Zangwill's play The Melting Pot opened in 1908. The play describes the United States as "God's melting pot," where individuals of various ethnic and racial origins are represented.


The mainstream culture is said to represent all of the country's inhabitants. Ideally, the melting pot analogy would suggest incorporating the positive attributes of various cultures. For example, Americans enjoy a variety of ethnic foods such as hamburgers and hot dogs (German origins) or pizza (Italian origins) without always realizing the origins of such meals. There are linguistic fusions as well, such as the words bagel (Yiddish origin), fiancé (French origin), and algebra (Arabic origin). Thus, many have argued that the United States has such a rich and beautiful culture because it is inclusive of many other cultures.

In practice, however, the melting pot model requires that immigrants give up most aspects of their home countries' cultures to adopt the practices and values of the mainstream culture. Immigrants are pressured to melt into the dominant culture, leading to a shared identity and culture for all Americans. One way this can be seen is through the courtship and marriage patterns of different ethnic groups after they immigrate to the United States.

Courtship and Marriage in the United States
Marriage and courtship practices vary dramatically across the world. In some cultures, marriages are arranged by parents, whereas in others partners are chosen with little or no parental influence. Cultures vary in whether they allow polygyny (one husband, multiple wives), polyandry (one wife, multiple husbands), or only monogamy (one husband, one wife). They also vary substantially in the extent to which young women and men are allowed to socialize and in the age at which they are expected to marry.

In the United States, only monogamous marriages are legal, and partners are generally chosen freely, without parental edicts. It is generally expected that men pay the expenses on the first date or first several dates, but most couples share dating expenses within a few months of dating. It is common for people to have sex outside the confines of marriage, and the age of marriage has been steadily rising into the mid-20s, in contrast to cultures where marriages can be arranged and occur among young teens. This is the context into which immigrants are assimilating, and there may be extensive social pressure (or legal requirements) to conform to these practices.


Indian Courtship in the United States
Immigrant families face unique challenges in monitoring their children's dating and marriage decisions. For example, many Indian families practice the tradition of arranged marriage, in which parents determine whom their children will marry. Indian or Indian American arranged marriages may take into account religion, caste, and compatibility among families. Marriage is viewed as a union of two families, and the compatibility of different family members is taken into consideration when deciding whether to wed. Children of Indian immigrants, however, are exposed to a dominant culture that rejects this model of marriage and instead celebrates marriages based on love that are entered into without parental edict. As a result, many Indian Americans have adopted the marital and courtship practices of the dominant culture.

Iranian Courtship in the United States
Courtship among traditional Iranians is significantly different from mainstream American courtship. One of the main differences is the expected stages of courtship. In mainstream U.S. culture, introduction to parents and family members usually occurs after a couple has dated for a time and is sometimes seen as a signal that the relationship has become serious. In contrast, courtship within the traditional Iranian family begins with elders in the community and family members meeting to discuss the possible suitability of two individuals as a potential couple. The potential couple is given the opportunity to interject their opinions on the pairing. If the pairing is agreeable, the man, accompanied by his parents, attends a gathering hosted by the woman's family. The potential wife typically offers tea to the potential husband and his parents. If the couple and the families take a liking to one another, then the couple will begin to get to know each other.
These differing models of courtship can create challenges for Iranian American men and women who date outside their ethnicity, because meeting the family before the first date would seem unusual to a partner from the mainstream culture. This problem is exacerbated by the fact that Iranian Americans are one of the smallest minorities in the United States and thus may feel that dating opportunities within their ethnic group are limited, forcing them to date outside their ethnicity and to adopt mainstream American courtship norms.

Criticisms of the Melting Pot Model
There are a number of criticisms of the melting pot model. One of the main criticisms is that the mainstream culture predominantly represents the characteristics and needs of the dominant group. Critics contend that nondominant groups are not reflected in the mainstream culture in any significant way. For example, although many Americans enjoy Thai food at Thai restaurants, Thai culture is not otherwise significantly represented. Another criticism is that immigrants who choose to maintain their home culture may face negative social and political consequences. The melting pot metaphor suggests a harmonious infusion and exchange of culture between the dominant and nondominant groups, but that may not be accurate. For example, children who bring traditional foods from their home country to school may be ostracized by other students, and an immigrant practicing a marginalized religion may face discrimination for not conforming to the religious practices of the dominant culture. Because of the intense pressure placed on immigrants to adopt the dominant culture, some critics have renamed the melting pot the "pressure cooker," suggesting that immigrants are strongly pressured to assimilate under threat of prejudice or discrimination.

Shari Paige
David Frederick
Chapman University

See Also: Breadwinner-Homemaker Families; Breadwinner; Cohabitation; Coparenting; Custody and Guardianship; Divorce and Separation; Domestic Masculinity; Dual-Income Couples/Dual-Earner Families; Egalitarian Marriages; Gender Roles; Hochschild, Arlie; Suburban Families.

Further Readings
Chua, Amy. Battle Hymn of the Tiger Mother. New York: Penguin, 2011.
Hollinger, David A. Postethnic America: Beyond Multiculturalism. New York: Basic Books, 2000.
Myers, Jane E., Jayamala Madathil, and Lynne R. Tingle. "Marriage Satisfaction and Wellness in India and the United States: A Preliminary Comparison of Arranged Marriages and Marriages of Choice." Journal of Counseling and Development, v.83 (2005).
Parrillo, Vincent N. Strangers to These Shores. Upper Saddle River, NJ: Pearson Education, 2006.

Mental Disorders

Mental disorders can most commonly be defined as maladaptive patterns of thought or behavior that manifest through a person's distorted cognition, affect, feelings, social interaction, or ability to function. Despite their name, the symptoms of mental disorders may be mental or physical. Mental disorders may differ greatly in their severity, with the worst leading to disability or suicide. There are more than 200 classifications of mental disorders, the most common of which include anxiety disorders, mood disorders such as depression and bipolar disorder, and schizophrenia. Within the nomenclature, a mental disorder may also be termed "mental illness," and there is a range of other terms that may or may not be socially accepted.

Multiple fields of study and many professional organizations are involved in working with mental disorders. Though psychiatrists and clinical psychologists predominate, there is also a large interdisciplinary network of professional fields, theoretical frameworks, and treatment modalities. Social workers, mental health counselors, and marriage and family therapists may all work with those affected by mental disorders as well.

Etiology
Multiple factors contribute to the cause of mental disorders, and they fall primarily within two categories. The first category is genetic or biological factors, such as a family history of mental illness, traumatic brain injury, exposure to viruses or toxins, serious medical conditions, or the effects of substance abuse. The second category is contextual or experiential factors. These relate to life experiences within the environment in which a person is raised, such as prolonged exposure to abuse, conflict, or extreme stress.


Prevalence
It is estimated that about one in five children ages 13 to 18 experiences a mental disorder, with a lifetime prevalence of slightly less than 50 percent. Approximately 20 percent of diagnosed children ages 13 to 18 experience severe mental illness. Males and females have similar prevalence rates of mental disorders; however, there is a difference by age, with older adolescents more likely to experience mental disorders than younger adolescents. Children are more likely than adults to receive mental health services, at a rate of approximately 50 percent. Adolescent males and older adolescents are generally more likely to receive treatment than females and younger children.

Just under half of adults in the United States will experience a mental disorder at some point in their lifetime. It is estimated that one in four adults suffers from some sort of diagnosable mental disorder within a given year, and one in 17 adults faces a serious and debilitating mental disorder. There is no significant difference in the prevalence of mental disorders between men and women. According to recent research, the average age of onset for a mental disorder is 14, but the highest prevalence rates occur in adults ages 18 to 44. It is estimated that only 13 percent of adults with a mental disorder diagnosis receive some sort of treatment, such as inpatient care, outpatient care, or prescription medication. Females are slightly more likely than males to receive treatment for mental health problems. In addition, about half of individuals diagnosed with one mental disorder will meet diagnostic criteria for at least one other mental disorder.

History and Context of Mental Disorders: The Middle Ages
Throughout history there has been substantial development and change in the way that mental disorders are conceptualized, recognized, and defined. The earliest conceptualizations of mental disorders, dating to at least the 5th century b.c.e., took a mystical outlook on mental illness. Many believed that mental illness was a result of demonic possession, punishment from an angry god, or some other supernatural force. The earliest treatment of mental disorders was trephining, which involved chipping a hole in a person's skull with the intention of releasing evil spirits. Some of the patients who


underwent this procedure survived it and may even have seen some change in their symptoms, depending on the ailment. The procedure persisted for many centuries and was used to treat more than mental illness, including ailments such as migraines and even skull fractures. In other cultures, religious ceremonies including special prayers, spells, incantations, rituals, and exorcisms were used to drive out the evil spirits believed to cause mental disorders. In early Hebrew culture, the physicians who treated these illnesses were usually priests, who would summon help from God. Other religious cultures encouraged pure living through good deeds for others and cleansing rituals for the body and mind. Some cultures even tried to bribe evil spirits into leaving, threaten them, punish them, or allow the patient to submit to them. Early Egyptian culture held festivals allowing those afflicted with an illness the chance to dance, paint, or enjoy music to relieve their symptoms, but it, too, subscribed to the belief that a supernatural force was responsible.

Beginning around the 4th century b.c.e., physicians such as Hippocrates, and later Galen, hypothesized that mental disorders resulted from imbalanced humors, or body fluids, such as blood, bile, and phlegm. The earliest medical procedures sought to balance these humors through the use of laxatives, emetics, bloodletting, or cupping.

From the 5th through the 15th centuries, families were in charge of and expected to care for mentally ill family members, and abuse was widespread. Shame and stigma often led families to lock up mentally ill members or send them away to live on the streets. In most cultures, family members who were sent away did not attract much attention from the law; those who caused trouble were imprisoned, beaten, or exiled. Although the creation of the first mental hospital, in Baghdad, is documented in the 8th century c.e., widespread use of mental institutions in countries throughout Europe and America came much later, starting in the 15th century. Mental institutions were far from therapeutic. Patients were often treated inhumanely, barely fed, shackled to the wall, or left to sit in their own excrement. Some mental institutions even made their patients public spectacles by putting them on display, much like a zoo. During this time, a limited number of private asylums were run by clergy but were either limited in the number of people they could serve or too expensive for some families to afford.

Contemporary Context
Quasi-medical treatments, religious rituals, and various other mystical treatments were still in regular use through the mid-19th century. In addition, new treatments were designed to assist or even force patients to choose to be normal, as though sanity were a matter of choice. These treatments were often cruel and unusual: restraining patients until they were exhausted enough to act sanely, administering powerful sedative drugs, or shocking patients into sanity by throwing hot or cold water at them. They were not effective. Critical attention to the cruelty in mental institutions led to slow but progressive changes in how patients were treated, beginning in the late 18th century in Europe; America followed suit in the early 1800s.

Until the early 1900s, mental disorders were attributed to lunacy or madness rather than to social or biological causes. During the early 1800s, however, this began to change, and the biological influences on mental disorders became the focus of study. As a result, the field of psychiatry first developed in the early 1800s, and symptoms of mental disorders came to be viewed as manifestations of brain and nervous system problems. Mental disorders were classified into simple categories consisting of only a small number of particularly severe and strange behaviors; by the end of the 1800s, these grew into more nondescript and expansive categories such as nerves or hysteria. Most afflicted people saw nonpsychiatric medical doctors for treatment, and psychiatric outpatient treatment for mental disorders was not common. Most psychiatric practice happened in institutional settings, and any diagnostic label was reserved for hospitalized patients. For those who did seek treatment from their medical doctor, common treatments were usually simple combinations of rest, dietary changes, or electricity treatments. Clergy or religious leaders most often provided counseling-type services. The lack of formal services at this time did not mean that mental disorders did not exist; rather, labels and treatments were generally defined in terms of the patient's culture, medical needs, or religious affiliation.



Classification and treatment of mental disorders evolved significantly through the 1900s. Sigmund Freud, perhaps the most famous psychiatrist, popularized psychoanalytic treatment. His work expanded the classification of mental disorders beyond the asylum to a number of less severe symptoms with social influences, and his treatments centered on talking, hypnosis, free association, and dream analysis. Other treatments popularized in the early 1900s included electroconvulsive therapy, psychosurgery, and pharmacological treatment. These treatments reflected the growing belief that mental disorders were the result of a chemical imbalance in the mind and body. Psychosurgery, otherwise known as lobotomy, became popular between 1930 and 1950. Although patients were less aggressive and emotional after this procedure, they often became more impulsive, sluggish, and immature. Electroconvulsive therapy has lost popularity in current practice because of abusive practices in asylums and improvements in pharmacological treatment.

The first pharmacological treatments were developed in the late 1800s, and rapid improvements and advances were made in the mid-1900s. These advances have been considered among the most successful in treating mental disorders and led to the deinstitutionalization of mental health treatment beginning in the 1960s. Psychoactive medication has allowed many to manage their symptoms and live relatively normal lives. Despite these gains, high levels of stigma in some cultures still inhibit individuals from receiving the treatment they need. Talk therapy, or psychotherapy, continued to develop throughout the mid-1900s; behavioral therapy, for example, was developed in the 1950s to treat various phobias. Today numerous types of therapy exist, as well as a number of empirically validated treatment methods.

The first major piece of legislation relating to mental disorders was the National Mental Health Act, signed by President Harry Truman in 1946. This law created the National Institute of Mental Health and initiated a call for scientific research on the incidence and treatment of mental disorders. The next major piece of legislation was the Mental Retardation Facilities and Community Mental Health Centers Construction Act of 1963, which supported the deinstitutionalization of individuals with mental disorders by allocating federal funds


for community mental health centers. Advocacy groups first gained clout in the 1980s and continue to be active today.

Diagnosis of Mental Disorders
Diagnosis is the process of examining various symptoms, as well as their history and context, to identify a particular illness. Diagnosing mental disorders is unlike diagnosing medical ailments in that it is more subjective, relying on patient report and trained observation of symptoms that may be unseen. It is nevertheless a vital step in determining the right course of treatment. Research is currently under way to identify genetic markers for mental illness through neurological imaging and blood tests, but much work remains in this area. Currently, diagnosis may be accomplished with standardized psychological tests or through an in-depth interview by a mental health professional aimed at identifying the most prominent symptoms. The earliest manual for diagnosing mental disorders was published in 1918 and contained only 22 categories of mental disorders. The American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, currently in its fifth edition, serves as the primary reference for mental health professionals in the United States. Only licensed mental health practitioners may provide a diagnosis. Diagnosable mental disorders are distinct in that they are not normal mental or physiological reactions to stressful events but inappropriate or extreme reactions according to sociocultural standards. Diagnoses are also labels for classifying the symptoms being reported, not labels for describing the patient (e.g., patients are not labeled "the schizophrenic," "the addict," etc.).

Treatment of Mental Disorders
There are two primary treatments for mental disorders: psychotherapy and medication. There are multiple types of psychotherapy, targeted toward the behavioral, cognitive, or affective experiences of the patient, or some combination of the three; some approaches are also considered evidence-based through the process of clinical trials. Medication seeks to alleviate disruptive symptoms of mental disorders or to affect chemical


imbalances in the brain that may be leading to the mental disorder symptoms. Many patients take medication only, go to therapy only, or use some combination of the two.

Morgan E. Cooley
Florida State University

See Also: Alcoholism and Addiction; American Association for Marriage and Family Therapists; American Family Therapy Academy; Delinquency; Disability (Children); Disability (Parents); Family Counseling; Homelessness; Psychoanalytic Theories.

Further Readings
Alexander, Franz and Sheldon Selesnick. The History of Psychiatry: An Evaluation of Psychiatric Thought and Practice From Prehistoric Times to the Present. New York: Harper and Row, 1966.
Butcher, James, Susan Mineka, and Jill Hooley. Abnormal Psychology, 13th ed. Upper Saddle River, NJ: Pearson, 2007.
Public Broadcasting System. "Timeline: Treatments for Mental Illness." American Experience—A Brilliant Madness. http://www.pbs.org/wgbh/amex/nash/timeline/timeline2.html (Accessed August 2013).

Merger Doctrine

Until well into the 20th century, women in the American common law tradition were viewed as under the control of their fathers until marriage; once married, authority over women shifted to their husbands, who became, in effect, their masters and lords. A woman's identity was said to have "merged" into that of her husband: the wife and husband were considered to be one person, and that person was the husband. This is more popularly known as "coverture," because the wife was assumed to be a "covered woman." The doctrine led to women essentially being treated as chattel and the property of men; they could be beaten, raped, and reduced to domestic servitude.

Modern social pressures associated with the settling of the American West in the 19th century gradually led to the amelioration of this repressive

doctrine, most notably with variations of the Married Women's Property Acts that states began to enact in 1839. Even as these legal disabilities were slowly rectified, they served as an important impetus for generations of women's rights activists, a reminder of women's continuing legal subordination before the law. Full legal equality for women in this area was not granted until Reed v. Reed (1971) and arguably, with the failure of the Equal Rights Amendment in 1982, still does not exist today.

Under the normative patriarchal ideological control of the time, the husband's or father's power over his wife or daughter was nearly all-encompassing. For instance, an abusive father in 1806 successfully sued his kindly neighbor for the value of the services (cooking, cleaning, and the like, in exchange for room and board) provided over a three-year period by his teenage daughter, who had taken refuge with the neighbor's family. The exception to this paternalistic control was that an older unmarried woman had, if independently wealthy, a degree of freedom over her own affairs. She was, for example, able to enter into contracts or be a party to a lawsuit. Such women, however, were rare, as most women were socially, economically, and intellectually dependent upon a husband. The pressure to marry and have children was overwhelming, and "old maids" were scorned and shunned.

Marriage brought with it a comprehensive legal disability. The justification was the principle that the family was a sovereign unit with one executive, the husband. In the interest of reducing family strife, all decisions and intercourse with the outside world had to be grounded in the authority of the husband, and courts were loath to question that authority. Simply put, the common law rights of husbands covered a wide area of life that today would subject a man to social recrimination and criminal prosecution.

For instance, because the husband was responsible for his wife's behavior, including her debts and contracts, he could command her obedience through, among other things, corporal punishment, what the law called "reasonable chastisement." Thus, he could give his wife "moderate correction" so long as there was no permanent injury. Further, husbands could impose themselves sexually upon their wives against their wishes with impunity. The husband also had the ability to sue for money if another man slept with his wife (i.e., the



When this bride married in 1909, she would have moved from her father’s authority to her husband’s. Until well into the 20th century, women were viewed as being under the control of their fathers until they married, when control shifted to the husband.

tort of "criminal conversation"). In some jurisdictions until the early 1970s, a husband could murder his wife's lover with diminished responsibility or even have his actions excused as justifiable.

In addition, married women were, at various times and places, prohibited from voting, owning or inheriting property (their goods were transferred to their husbands), keeping their own wages, or serving on juries. Wives had to assume their husbands' last names and were required to live in their husbands' homes, even against their wishes. A runaway wife could be charged with a crime, as could anyone who harbored or aided her. If his wife left with his children, the husband had the power to compel the police to return the children. Wives and children could be institutionalized in mental hospitals on the word of the husband or father if he deemed them "incorrigible." Women were so tightly controlled as sexual beings that their offspring suffered


severe legal disability and hardship in the event that a woman procreated without proper authority (i.e., without a husband).

Married women were often prevented from going to college and from practicing most professions. For instance, in Bradwell v. Illinois (1873), the U.S. Supreme Court pointed to the fact that, as a married woman, Myra Bradwell could not engage in the practice of law because coverture prevented her from acting as a legal agent: she could not sign contracts or other legal documents. Finally, because a wife's legal identity "merged" with her husband's, for a time a husband's nationality determined that of his wife. If the husband was not eligible for naturalization (for example, if he were Chinese), U.S.-born women citizens were stripped of their citizenship (Mackenzie v. Hare, 1915).

The feminist and womanist movements of the past 150-plus years have striven to remove these conditions and liabilities, and the rights and obligations of husband and wife are now reciprocal. The family unit is no longer seen as a sovereign institution with a single head of state that deserves deference (i.e., the public/private split that once prevented courts from "peering" behind the marital curtain). Marriages today are seen as based on commitment, love, mutuality, reciprocity, and respect; further, modern divorce, while emotionally difficult, allows a woman to leave an abusive husband with her integrity and dignity intact.

Omar Swartz
University of Colorado, Denver

See Also: Feminism; Gender Roles; Marital Division of Labor.

Further Readings
Casebriefs. Bradwell v. Illinois. http://www.casebriefs.com/blog/law/family-law/family-law-keyed-to-weisberg/being-married-regulation-of-the-intact-members/bradwell-v-illinois (Accessed March 2014).
Coyle, William. "Common Law Metaphors of Coverture: Conceptions of Women and Children as Property in Legal and Literary Contexts." Texas Journal of Women and the Law, v.1 (1992).
Zaher, Claudia. "When a Woman's Marital Status Determined Her Legal Status: A Research Guide on the Common Law Doctrine of Coverture." Law Library Journal, v.94 (2002).

Mexican Immigrant Families
Since the 1970s, the number of U.S. households composed of Latino immigrant families has increased significantly. The largest group of Latino immigrants to the United States consists of Mexican immigrant families, many of them undocumented. In many ways, today’s Mexican immigrants are much like their predecessors who arrived over 100 years ago. However, contemporary Mexican immigrants have unique experiences that distinguish them from the immigrants before them. In addition, today’s immigrants are distinct from many Mexican American families that have lived for generations in California, Texas, and New Mexico—descendants of families who occupied lands in the southwest before the Mexican-American War (1846–48), when those lands became part of the United States. Currently, Mexican immigrant families increasingly originate from the southern region of Mexico, experience more challenges crossing into the United States, and face an increasingly hostile reception there. Mexican immigrants reside in growing ethnic enclaves all over the United States, maintain a transnational experience, and are less prone to assimilate. Although the number of Mexican immigrant families crossing the U.S. border has dropped since 2011, the large number of undocumented immigrant families from Mexico remains a heavily debated topic. Up until the 1970s, most Mexican immigrants originated from the central and northern regions of Mexico, including Jalisco, Michoacan, Guanajuato, Zacatecas, Chihuahua, Durango, and Nayarit. Before the 1980s, the central and southern parts of Mexico did not send large numbers of their citizens to the United States. More recently, immigrants have come in growing numbers from central and southern states, including Guerrero, Morelos, Oaxaca, Puebla, and the State of Mexico, and, most recently, from Hidalgo, Veracruz, and Chiapas. Most of the immigrants from the south of Mexico are of indigenous origin.
A small number of immigrants originate from the states of Mexico’s northwest, such as Baja California Sur and Sinaloa. The majority of Mexican immigrants come from the interior of Mexico. Although contemporary immigrants originate from a wider range of Mexican states, their main option for crossing into the
United States remains along the borders of California, Arizona, New Mexico, and Texas.
Reception of Mexican Immigrant Families
No matter where Mexican immigrants originate, the one experience that most Mexican immigrant families currently share is the struggle to cross into the United States and build a sustainable life for their families there. Since 2011, the number of immigrant families crossing has dropped considerably, and families have experienced increased deportation. These challenges are expected to worsen: in 2013 the U.S. Senate passed an immigration bill that allocates billions of dollars to secure the border while also overhauling the legal immigration system and permitting more than 11 million undocumented immigrants to apply for citizenship. In 2012, President Barack Obama created the Deferred Action for Childhood Arrivals (DACA) program, which defers the deportation of undocumented immigrants brought to the country as children. An ongoing debate over a possible overhaul of immigration laws has not abated the stressors and concerns affecting Mexican immigrant families, who struggle with constant fear of deportation, disconnection from their loved ones, and the dangers of attempting to return illegally. For many Mexican immigrants, the journey across the border is one of hardship and family separation, replete with potential for crime victimization and loss of life as they venture into barren desert terrain. But the challenges for Mexican immigrant families do not end at the border. Many immigrant families suffer multiple stressors as they maneuver in a country that has made it clear they are unwelcome.
Destination of Mexican Immigrants
In the 1930s the primary destination for Mexican immigrants was Texas. Later, in the 1960s, immigrants arrived in large numbers in California. A majority of Mexican immigrants occupy the southwestern United States, with the highest concentrations in California, Arizona, New Mexico, and Texas.
Recent demographic statistics suggest that Latinos of Mexican origin make up the largest number of immigrants found throughout the United States. Currently, Mexican immigrants have expanded into Idaho, Nevada, Oregon, Utah, and Washington. Moreover, Mexican immigrant families that once occupied large urban centers such as Los Angeles, New York, Chicago, and Dallas are currently
dispersed throughout rural America, including the southeastern United States. It is possible to find immigrant communities in urban and suburban areas throughout the United States made up of immigrants originating from the same region or town in Mexico. For example, many Mexicans originally from Zacatecas live in Los Angeles, California, and immigrants originating from Michoacan gravitate to Chicago, in Cook County, Illinois. Concentrations of immigrants sharing a region of origin help to sustain regional values and support the transmission of those values to the next generation.
Demographic Description of Mexican Immigrants
Mexican immigrants are younger, less skilled, poorer, less educated, and more likely to be undocumented than other immigrant groups. They share cultural similarities with other Hispanic groups, including the Spanish language and Roman Catholic religious affiliation. Often, immigrant laborers work in demeaning and menial low-wage jobs under conditions that U.S. citizens refuse to accept. Motivated by the hope of employment and financial gain, young males are typically the first to leave their families in Mexico, working as day laborers, gardeners, and construction workers, and in domestic, hotel, and janitorial services. Often, they leave spouses and elderly family members behind. Most immigrants do so to escape the effects of long-term poverty and to provide their children with better opportunities. While in the United States, immigrant laborers send money back to their families with the intent of bringing other family members to join them. Their status as undocumented workers makes Mexican immigrants part of an invisible labor force, with minimal rights to ask for fair compensation and treatment.
Cultural and Social Characteristics
Immigrants are often members of larger familial and social networks, and regardless of time in the United States, most immigrants have maintained strong ties to their culture.
These ties are evidenced through language preference for Spanish and the endorsement of traditional beliefs and practices. Mexican immigrant families do not typically speak or understand English. Currently, ties to their hometown are facilitated through the use of mobile telephone services and Internet technology. Immigrant families,
particularly those with fewer economic resources, compensate by relying on support from extended family members and close friendship networks. A majority of immigrants come to join close family members, some of whom may be U.S.-born or naturalized citizens. Households are often composed of multigenerational family members. Mexican immigrant families function with a deep sense of responsibility to their immediate and extended family; they value family interdependence and respect and maintain strong extended family networks. Although recently arrived immigrants have the benefit of existing Mexican American communities, each successive wave of immigrants has a higher poverty rate, and a much larger number of their children will grow up in poverty. Mexican immigrant families often occupy ethnic enclaves marked by violent criminal activity, drug dealing, school truancy problems, and poverty. The stressors of immigrant life put Mexican immigrant families at risk of developing mental health concerns. Children of Mexican immigrants often struggle with significant educational, social, and psychological challenges in adapting to their new environment. Immigrant children introduced to the U.S. educational system are often challenged by language issues and by adjusting to the competing influences of the dominant culture and the culture of origin. Immigrant children are often described as living between two worlds—entre dos mundos.
Mexican Immigrant Health Status
Mexican immigrants experience disproportionately poor health status and health care access. Most will experience symptoms of depression and anxiety. A significant barrier to participation for immigrant families is limited English proficiency, although linguistically isolated households are fewer in number than households where a language other than English is spoken.
In comparison with Mexican Americans and immigrants who are English proficient, Mexican immigrants who speak primarily Spanish are less apt to use services available specifically for immigrant populations. Many immigrants prefer to access the support available through their local consulate office. Mexico’s consular officers in the United States often advocate for undocumented Mexican immigrants. Contemporary Mexican immigrants, poor and undereducated, consistently face political,
economic, and cultural barriers to advancement within the U.S. social order. Immigrants of color often experience barriers to full participation in society on the basis of race/ethnicity. Increased sensitivities to race issues in the United States have made it difficult to engage in discussion about the overlapping issues of race/ethnicity, immigration, and access to social welfare benefits. Mexican immigrants who perceive communities as open and welcoming to their presence and accepting of their culture will likely have a different adjustment process than those who experience racial profiling or other negative aspects in their context of reception. Barriers to service and fears of deportation make family members less apt to use services available through community agencies. Few families seek out any form of public support. In addition to issues of acculturation, Mexican immigrant families have post-immigration experiences that include breakdown of traditional gender roles and disintegration of family structure.
Cultural Adaptation of Mexican Immigrants
Compared to other Latino immigrant groups, Mexican immigrants face significant hurdles integrating and adapting to the United States, as reflected in the large number of households where Spanish is the language primarily spoken. Among the many social issues surrounding departure and arrival are the socioeconomic, political, and cultural adaptation challenges faced by Mexican immigrants in the United States. Regardless of the reasons for displacement from their country of origin, Mexican immigrant families hold dearly to the cultural values, practices, and traditions that have come to mark their identity. The cultural integrity maintained by Mexican immigrants has contributed to tension in many communities. Although Mexican immigrants attempt to adapt to the customs and standards of life in the United States, their cultural roots remain firmly embedded.
The increasingly transnational experience of immigrants contributes significantly to cultural distinction. No matter where Mexican immigrants live in the United States, contemporary immigrant families retain closer ties to their homeland and are less inclined to assimilate. Many immigrants maintain one foot planted in their hometown, live between two worlds, and express a desire to return to their
homeland. Unlike Mexican immigrant families of the past, who assimilated wholeheartedly, today’s Mexican immigrants are encouraged by proponents of multiculturalism to retain their sense of cultural integrity. Mexicans have been able to maintain their values, ethnic distinctiveness, and cohesion in the United States largely because their culture has been constantly reinforced by continuous migratory streams from Mexico. In addition to group cohesion, immigrants describe limited contact with key community institutions that could help them sustain and develop their family, such as educational institutions. Instead, public parks, church-sponsored events, and the home are identified as important community resources for social interaction. Contemporary immigrant families retain meaningful ties to their families in Mexico. While immigrant social networks in the United States are important for social support and community sustainability, strong community ties impede cultural assimilation. An increased number of community-based agencies throughout the United States are working with existing Latino immigrant social networks to bridge those networks and facilitate access to local resources, knowledge, and information.
The Transnational Immigrant Family
The structure of immigrant family households changes to accommodate the transitional nature of the immigrant experience. Traditional values tend to decline as the family becomes increasingly rooted in the United States and acculturated. The Mexican father/husband is often seen as the head of the household and the primary decision maker. However, financial pressures often mean that both spouses work to provide for the family, and an increased number of families are matriarchal. In Mexico, the household composition of immigrant families adjusts to fill the roles and responsibilities of the family members who left for the United States. In the United States, the family adjusts to the economic and social demands of immigrant life.
Increasingly, Mexican households in the United States include members who retain dual-citizenship status and live as transnational families. H. Luis Vargas University of the Rockies

See Also: Acculturation; Central and South American Immigrant Families; Ethnic Enclaves; Immigrant Families; Immigration Policy; Latino Families; Primary Documents 1960; Southwestern Families. Further Readings Mexico’s National Population Council. “Indices de Intensidad Migratoria Mexico–Estados Unidos 2010.” (Indexes of Migration Intensity Mexico–United States 2010). http://www.conapo.gob.mx/swb/CONAPO (Accessed August 2013). Pew Research Hispanic Center. “A Nation of Immigrants: A Portrait of the 40 Million, Including 11 Million Unauthorized” (2013). http://www.pewhispanic.org/2013/01/29/a-nation-of-immigrants (Accessed August 2013). Suarez-Orozco, M. M. and M. M. Paez, eds. Latinos Remaking América. Los Angeles: University of California Press, 2002. Valdivia, C., P. Dozi, S. Jeanetta, Y. Flores, D. Martinez, and A. Dannerbeck. “The Impact of Networks and the Context of Reception on Asset Accumulation Strategies of Latino Newcomers in New Settlement Communities of the Midwest.” American Journal of Agricultural Economics, v.90/5 (December 2008). Vega, A. “‘Americanizing?’ Attitudes and Perceptions of U.S. Latinos.” Harvard Journal of Hispanic Policy, v.18 (2006). Vickroy, L. “Latino Children and Families in the United States: Current Research and Future Directions.” Journal of Evolutionary Psychology, v.24/3 (August 2003). Vidal de Haymes, M. and K. M. Kilty. “Latino Population Growth, Characteristics, and Settlement Trends: Implication for Social Work Education in a Dynamic Political Climate.” Journal of Social Work Education, v.43/1 (Winter 2007).

Middle East Immigrant Families
Middle Eastern Americans are one of the fastest-growing immigrant groups in the United States. Some estimates suggest that the number of Middle Eastern immigrants increased sevenfold from 1970 to 2000. Many people think of Middle
Easterners as synonymous with Arabs and/or Muslims, yet this is far from the truth. Middle Easterners are one of the most heterogeneous groups in the world. They vary in culture, race, and ethnicity and include, but are not limited to, Assyrians, Armenians, Arabs, and Persians. They are also heterogeneous in regard to religion, be it the practice of Islam, Christianity, Judaism, Baha’i, or Zoroastrianism, among other religions. Even within religions there are distinct denominations (e.g., the Sunnis, Shias, and Sufis within Islam) that result in ideological differences. Middle Easterners are also linguistically heterogeneous, speaking a variety of languages such as Arabic, Farsi, Turkish, and Hebrew. In addition, Middle Eastern immigrants originate from different countries with distinct ways of living. And finally, there is even disagreement as to what constitutes the Middle East and the Greater Middle East. Yet, even with this diversity, there are some commonalities that characterize family interaction patterns and experiences associated with immigration to the United States. People from the Middle East have immigrated to the United States since the late 19th century, although some records indicate that Arabs first arrived with Spanish explorers in the 16th century. In general, there have been two major waves of immigrants from the Middle East. In the first wave, from the 1870s to 1924, most Middle Eastern immigrants were poorly educated and worked in low-skilled jobs (e.g., traveling salesmen, farmers, factory workers, merchants). The vast majority were Arab Christians from Syria and Lebanon. These early immigrants were dispersed across the United States and assimilated to U.S. culture. From 1924 to 1965, restrictive quotas significantly limited the number of immigrants from the Middle East. However, after the quotas were lifted, the second major wave of immigrants from the Middle East began arriving in the 1970s and continued into the 21st century.
In the early stages of this wave, the majority of the immigrants were Christian, educated white-collar workers from all over the Middle East. However, the percentage of Muslim immigrants has increased dramatically since the 1970s, contributing to Islam becoming the third-most-common religion in the United States. The Middle Eastern immigrants in the most recent wave have a strong ethnic identity, and they are more likely to live in ethnic enclaves than earlier immigrants.

People from the Middle East have immigrated as refugees to escape religious and cultural persecution, to leave political turmoil, to increase educational opportunities, and to escape economic hardship, especially the limited employment opportunities for young people. Many immigrated as individuals, while others came as families.
Importance and Structure of the Middle Eastern American Family
Across culture, religion, and country of origin, the centrality of the family is emphasized in Middle Eastern society, and the family continues to assume importance for Middle Eastern immigrants in the United States. Middle Eastern culture also places a strong value on the extended family, and despite leaving many extended family members in the country of origin and acculturating to a more Western concept of the nuclear family, Middle Eastern immigrants retain that value. The family (including extended family) is seen as the primary support mechanism for dealing with personal and family issues; problems should therefore be addressed within the family instead of outside it. This partially explains why Middle Eastern families are less likely than many groups to seek out mental health services to deal with emotional or family problems. Instead, members of the extended family will often act as problem solvers, mediators, or peacemakers when problems or conflict arise in the nuclear family. This is a common practice in many collectivist cultures. The traditional family structure is hierarchical, with older members often having more authority than younger members. Thus, parents generally gain more respect and authority as they age. The Middle Eastern family is also considered patriarchal, with the father as the head of the family, since men typically have more power and legitimate authority than women. Older brothers may also exercise power over sisters.
Thus, Middle Eastern women are subordinate to their fathers, brothers, and husbands. Although the husband/father has legitimate authority over all family matters, wives may exercise more indirect power by aligning or consulting with extended family members (e.g., their mothers-in-law). With acculturation to U.S. culture, Middle Eastern women have gained more power in the family. Not surprisingly, there is more resistance to
change by the men than the women in Middle Eastern immigrant families. Given the importance of males, sons are encouraged to establish independence and be assertive, yet they must simultaneously respect the authority of the father. The Middle Eastern American family is considered to be patrilineal (i.e., lineage is tracked through the males). Consequently, a son (and his new wife) may continue to live with his parents when he gets married. However, when a daughter gets married, she generally moves out of the family home to live with her husband. As a Middle Eastern family acculturates to U.S. culture, this practice becomes less common. Instead of living in the same household, the son and his wife may live near his parents’ home. The aging parents are generally looked after by their children and their spouses. When the father dies, the eldest son often inherits the authority and the corresponding responsibility for the family. In addition, inheritances are passed from parents to children, but the shares may be more generous to the sons than to the daughters. Specific roles are defined for family members by culture and religion. For example, the husband is expected to be the primary breadwinner. However, after immigrating to the United States, it may be difficult for the father to secure a job that provides for the family, so the wife may have to take a job. Although a woman may be encouraged to pursue education and/or start a career, this is secondary to her primary role in the home. When the wife does not work, it can create economic hardship in immigrant families. However, a wife who works outside the home can come in contact with values and behaviors inconsistent with religion and/or culture. Thus, keeping a wife at home can help maintain family and cultural values.
Parent–Child Relations in the Middle Eastern American Family
Because Middle Easterners are family oriented, childbearing and child rearing are extremely important.
The father is generally considered the disciplinarian of the children, while the mother provides for the children’s daily living needs. Given the hierarchical nature of Middle Eastern families, children are expected to obey the older generations (e.g., parents), especially the father. When conflict between the father and children emerges, the mother may become a mediator between the
children and the father. Discipline for unacceptable behaviors may include mild social disapproval, the silent treatment, lecturing, shame, guilt, and punishment. Harsh discipline can create problems for recent immigrants, as they must conform to the laws of the state in which they reside. However, physical punishment rarely gets out of control: when Middle Eastern parents engage in unreasonably harsh or excessively physical punishment, extended family members may step in to curtail it. In Middle Eastern families, both parents tend to give unconditional love and show affection toward children and grandchildren. However, mothers are often the primary conveyors of this affection, especially toward the sons. Both parents also engage in extensive monitoring of their offspring to ensure they are on track academically and to keep them out of trouble. However, more freedom is generally granted to the male offspring than to females. Because family honor is very important, children in Middle Eastern American families may refrain from committing acts of aggression or delinquency because they fear shaming or embarrassing their families. Middle Easterners are considered to be collectivist and tend to base their identity on group membership (e.g., family, country of origin, religion). Thus, group needs are primary, with individual needs considered secondary. Specifically, emerging adults are expected to alter their personal aspirations so they do not conflict with the expectations of their parents. As an illustration, emerging adults may select a college major or occupation that fulfills their parents’ wishes or needs instead of following their own dreams. If a young adult tries to establish independence away from the family (e.g., moving out of the home), parents may use guilt and shame as a way to assert authority. Middle Easterners are often characterized as embracing multiculturalism over assimilation more than other ethnic groups.
This is especially true for those Middle Eastern immigrants who have been historically oppressed in their country of origin (e.g., Armenians who have been mistreated by Iran or Turkey). These oppressed groups may be less resistant to change since they have adopted strategies that enable them to maintain a strong cultural identity within the dominant culture (e.g., living in ethnic enclaves, marrying only from the same
ethnic group). Due to fears of losing their language and culture, parents play an instrumental role in family ethnic socialization. Because mothers usually attend to the daily needs of the children, they often play the primary role in cultural socialization. Also, many parents may send their children to private schools in order to maintain their native language and the values associated with their faith or culture.
Dating, Marriage, and Divorce in the Middle Eastern American Family
In Middle Eastern culture, marriage is both revered and a family matter. Marriage functions to unite families and maintain family interests. Marriages between relatives (e.g., cousins) were common in many Middle Eastern societies as a way to ensure family unity and to keep wealth within the family. However, marriages between relatives are less common once a family immigrates to the United States. Parents often play an active role in mate selection. They do this by monitoring dating activities, arranging meetings between young people of marrying age, and/or by showing approval or disapproval of their children’s mating choices. However, younger generations (especially those born in the United States) are more likely to adopt dating and marriage patterns of the dominant culture. Thus, cultural conflict arises between generations when more traditional parents born in the Middle East are not happy with their adult children’s dating habits or choice of a mate. For example, dating many partners or engaging in premarital sex, especially for women, is highly discouraged and looked down upon. A seemingly promiscuous daughter could bring shame to the family. Conflict can also arise if a son or daughter is dating someone from a different country of origin, ethnic group, and/or religious group. Intermarriage between different groups is highly discouraged in Middle Eastern families. Specifically, there is a strong value on marrying someone from the same group.
As an illustration, a Shiite Muslim originally from southern Lebanon would be encouraged to marry another Shiite Muslim from the same geographic region. This is especially true for groups who have experienced persecution from dominant cultures. Marrying within the group is a way to ensure the survival of a culture. For example, a young male Zoroastrian would feel pressure from his parents and
other Zoroastrians to date and marry a Zoroastrian woman so he could propagate his culture. Also, Middle Easterners understand that interfaith or intercultural marriage can create additional hardships within a marriage and family. For example, if an Arab Muslim American man marries a European Christian American woman, she may have an expectation of equality between husband and wife, which is inconsistent with his view. Also, she may not want to live with his family of origin, and she and her husband may differ in how they want to raise their children (such as approaches to religion and discipline). When marriage does occur, there generally are many rituals and customs prescribed by the specific culture and religion. These rituals cover everything from meetings between the families and engagement parties to the wedding ceremony and honeymoon. The customs may include bride price, dowry, and prenuptial agreements. Divorce is highly discouraged in Middle Eastern culture. Although divorce rates have been increasing in the Middle East, the divorce rates are higher in Middle Eastern immigrant families. If a divorce occurs, there are stipulations outlined in many of the Middle Eastern religions (such as Islam and Judaism). Divorce can bring shame to the family, resulting in additional hardships for the children and their parents. Scott W. Plunkett California State University, Northridge See Also: Collectivism; Immigrant Families; Islam; Judaism and Orthodox Judaism. Further Readings Abi-Hashem, Naji. “Working With Middle Eastern Immigrant Families.” In Working With Immigrant Families: A Practical Guide for Counselors, Adam Zagelbaum and Jon Carlson, eds. New York: Routledge, 2011. Abudabbeh, Nuha. “Arab Families.” In Ethnicity and Family Therapy, 2nd ed., Monica McGoldrick, Joe Giordano, and John Pearce, eds. New York: Guilford Press, 1996. Haddad, Yvonne Yazbeck, Jane I. Smith, and Kathleen M. Moore. Muslim Women in America: The Challenge of Islamic Identity Today.
New York: Oxford University Press, 2006.

Hammer, Juliane and Omid Safi. The Cambridge Companion to American Islam. New York: Cambridge University Press, 2013.

Middle-Class Families
American poet Walt Whitman, writing in his capacity as editor of the Brooklyn Daily Times in 1858, praised the middle class and identified middle-class families as those making around $1,000 a year. However, even in Whitman’s day, middle-class identity had as much to do with values and self-image as with income. Founded on ideals of democracy and equality with no inherited titles and no formal aristocracy, the United States has defined itself as a classless society. Its most cherished myth, celebrated by everyone from fictional bootblacks to presidents in three centuries, has been that merit and hard work make it possible to fulfill one’s most ambitious dreams. Between the American Revolution and the Civil War, the division of gender roles into the male breadwinner and the female guardian spirit of the home shaped the identity of the 19th-century middle-class family, an ideal that became iconic in the culture of mid-20th-century America. In the 21st century, economic reverses, shifting gender roles, civil rights victories, and sociological realities have altered the accepted image of the middle-class family, but values and dreams still outweigh net worth in determining the middle class in the United States, a fact recognized even by the U.S. Department of Commerce, which acknowledged in a 2008 report amid rumors of a disappearing middle class that middle-class status depended as much on a state of mind and aspirations as on income.
The New Middle Class
When French historian Alexis de Tocqueville visited the United States in the 1830s, gathering material for what would become his two-volume work Democracy in America, he observed that the young nation demonstrated that the middle classes had the capacity to govern even if their education was inferior and their manners deplorable. The qualification seems ironic given that those aspiring to reach the middle class saw education and refined manners as means of attaining their goal. By the 1830s, young men
By the 1830s, young men were leaving the farms of their fathers in increasing numbers for opportunities in the cities. Piety, self-control, and a willingness to work—qualities that would later be termed "middle-class values"—along with literacy and cultural capital, often acquired through the study of advice or conduct manuals, increased the likelihood of their success. Between the 1830s and the 1870s, white-collar workers in increasing numbers were forming the new middle class. As early as 1870, young men looking to advance themselves as managers, salesmen, office workers, and salaried professionals made up 6 percent of the labor force. It was another two decades before women were employed in clerical positions in significant numbers. Although white-collar workers lacked the autonomy of professionals and entrepreneurs, they were nonmanual workers, a frequently cited criterion of class distinction in the 19th century. They also qualified for middle-class status on the basis of holding salaried positions rather than being paid an hourly wage. Such jobs were also viewed by many as the first rung on a ladder that would lead to increased wealth and prestige. The most significant evidence that a man held middle-class status was his home, filled with the trappings of gentility—possessions more easily acquired thanks to lower consumer costs—and managed by his wife, whose sphere was limited to the domestic. Historically, the household had been the site of production, and no distinction existed between work time and space and family life. With the separation of workplace and household that occurred with the Industrial Revolution, the delineation of gender roles, which existed as early as the colonial period, grew stricter, and a clear division of labor replaced the earlier model of economic partnership. The middle-class male was the breadwinner, a term coined in 1821 to mean one who earns a living for his family and carrying the connotation of struggle to achieve.
Women were relegated to the private sphere. The home was a woman's domain, and there she was expected to manage the household, supervise servants, serve as hostess for social events, and generally contribute to the comfort and happiness of her husband. With the birth of children, she became responsible for their care and for their early education. She was expected to be a paragon and the emotional, moral, and spiritual center of the family. According to the "cult of domesticity," the "true woman," content in the sphere assigned to her by custom, nature, and God, exemplified the virtues of piety, purity, domesticity, and submissiveness. Contact with the public sphere, the male world of commerce and politics, was demeaning and suggested that the husband was incapable of fulfilling his proper role. His proper role, symbolically at least, included serving as a figure of authority and stability in the lives of his children, but in reality, middle-class fathers were largely absent from the nursery and the schoolroom. Some scholars have described them as "invisible." Others have suggested that holiday and other special-day rituals, many of which first appeared in the 19th century, were a means of involving fathers in family life and strengthening emotional ties. The separate-spheres lifestyle did not extend to the working-class family, whose survival depended on women as well as men working for wages. The middle-class lifestyle depended in part on the paid labor of working-class women who served as household servants for the more affluent.

Changes, Challenges, and Confirmation for the Nuclear Family
The first decades of the 20th century brought social changes that some viewed as radical enough to transform the middle-class family. The definition of the private sphere was being expanded. Middle-class homemakers, who found themselves an unplanned leisure class, joined women's clubs in large numbers. The General Federation of Women's Clubs, founded in 1890, boasted more than a million members by the early 20th century. Women in these clubs championed such causes as child labor reform, pure food and drug legislation, conservation, and health reform.
Women’s suffrage had been viewed by defenders of established gender roles as an assault upon the middle-class family initially, but by the time the Nineteenth Amendment passed in 1920, many critics had been persuaded to accept Jane Addams’s view that suffrage merely made women the “housekeepers” of the nation, enlarging their role as natural moral and spiritual guides within their homes to encompass the nation. Life was changing even more rapidly for younger middle-class women. From 1890 to 1920, women comprised 60 percent of all high school graduates, and by 1920, 47 percent of students enrolled in college were female. The numbers of women in
the workforce were also increasing dramatically. Women in the professions reached 13.3 percent in 1920, almost 1 million, and twice that number were employed in clerical positions. As the primary purchasers of appliances, clothes, radios, and domestic furnishings, women were claiming a significant role in the growing consumer economy. Women were also enjoying unprecedented sexual freedom. The flapper, with her bobbed hair, higher hemlines, cigarettes, and freer language, may have been the most visible symbol of this change, but advances in birth control technology and the proliferation of articles in popular magazines supporting the idea of sex as pleasurable for women suggested the changes affected a population much larger than the flappers. All these changes, however, were taking place without substantially altering the framework of the middle-class family. The colleges women attended frequently emphasized middle-class values and traditional female roles. Men were able to pursue

career interests and enjoy family life as well, but women who chose professions nearly always forfeited their opportunity to be wives and mothers. Only 12.2 percent of professional women in 1920 were married. Few married middle-class women worked. The average woman in the workforce as late as 1940 was unmarried, young, and poor. African American women and immigrant women made up the largest segment of the female workforce. Middle-class women in overwhelming numbers continued to be homemakers, and their assumption of that role continued to be seen as a badge of the breadwinner's economic and social success. Middle-class marriage may have suffered a brief setback during the Depression, when the lack of jobs made starting a family more problematic, but by 1935 the economy was improving and the marriage rate reached 10.6 per 1,000 again. A wave of post-Depression conservatism seemed to be widespread among young women. One survey from the period
found that 90 to 95 percent of women enrolled in college said that their main career goal was to become a wife and mother and that supporting a husband's career advancement was more important than pursuing their own. It was not uncommon for these young women to be advised to underplay their intelligence and achievements to increase their chances at marriage, but more change was just ahead. When the United States entered World War II, it became clear that women would be needed to work in the war industries. Rosie the Riveter, the image of a strong, efficient, pretty working woman used in the propaganda effort to convince women their contribution to the war effort was vital, became an enduring symbol of women's competency. The 60,000 women in the Women's Auxiliary Army Corps (WAAC) and those in other women's auxiliaries certainly included many women from the middle class, since skilled clerical workers, teachers, stenographers, and telephone operators were those being recruited. Members of the first WAAC officer candidate training class of 440 women, including 40 African Americans, were on average 25 years old, had attended college, and had been working as office administrators, executive secretaries, or teachers. The average middle-class family during World War II consisted of a husband in the military or working on the home front and a homemaker caring for her children, juggling ration coupons, planting victory gardens, selling war bonds, and engaging in other activities that were both patriotic and appropriate for a woman of her class.

The Flowering of the 1950s Family
The Servicemen's Readjustment Act of 1944, better known as the G.I. Bill, had far-reaching effects on the middle-class family, expanding its numbers beyond anyone's imagination. Education and home ownership had been tied to the identity of the American middle class from the beginning. The G.I.
Bill made it possible for millions to achieve both, and for many of the returning veterans, their benefits allowed them to move their families from working class to middle class. The most popular provision of the G.I. Bill allowed veterans to attend any educational institution that admitted them, using benefits that covered tuition and helped support their spouses and children. Government projections were that 8 to 12 percent of veterans would choose to use the education benefits. More than
half of them did so in some fashion, with more than 2.2 million enrolling in college and an additional 5.6 million attending high school or vocational school. About three of every 10 veterans used low-interest mortgages to buy homes, farms, or businesses. In 1955, the Veterans Administration backed close to a third of housing starts. Developers built inexpensive tract houses (82,000 in Long Island's Levittown—and not one sold to an African American) and created instant communities with their own schools and recreational facilities. The federal government built the highways that made it easy for residents of the new suburbs to commute to work. By 1960, half of all American families owned their homes. By the end of the 1950s, 65 percent of American families were middle class, more than twice the percentage in 1929. Working-class families who were moving up felt little of the dissatisfaction with the Victorian ideal that had been surfacing in some educated families in the 1920s and 1930s. The lifestyle they identified as middle class included the husband as sole breadwinner and a wife who cared for the children in a single-family house the couple owned. Only 16 percent of women worked outside the home, despite the fact that 70 percent of those who lost jobs to returning veterans preferred to continue working. Employment and incomes were high. So was the marriage rate: 96.45 percent of women and 94.1 percent of men married, and they did so at younger ages. The average bride was 20. After a steep increase immediately following the war, the divorce rate dropped. Home and family became a source of security and contentment for men and women alike. The birthrate was rising, too. Most brides were pregnant within the first seven months of marriage, and the number of couples who had three children doubled from 1940 to 1960. The number who had a fourth child quadrupled.
The number of children under 5 years of age per 1,000 women ages 20 to 44 rose from 400 in 1940 to 551 in 1950; 10 years later, the figure was 667, the highest since 1890. In 1957, at the peak of the baby boom, a baby was born every seven seconds. Anthropologists coined the term nuclear family to refer to a family unit built around the nucleus of father and mother. Media and advertising were invested in the new middle-class family as well. Popular women's magazines offered advice on keeping hubby happy and the wife in her proper place. Life magazine declared that the American
male had been domesticated, and other periodicals advised fathers to be less distant and more nurturing in their engagement with their children. A new medium was becoming an indispensable part of the life of the middle-class family. By 1954, 55.7 percent of American families had television sets; four years later, the number reached 83.2 percent. No other technology had spread so rapidly, and families with children were the quickest to add television sets. Situation comedies were family favorites, and many of them offered an idyllic version of 1950s family life. I Love Lucy (1951–57) topped the ratings and made history with Cuban American Desi Arnaz, real-life husband of star Lucille Ball, playing Ricky Ricardo, bandleader and husband of the irrepressible Lucy. Lucy and Ricky did their part to add to the baby boom, and 44 million Americans watched the episode in which Little Ricky was born. Ward Cleaver (Leave It to Beaver, 1957–63), Ozzie Nelson (The Adventures of Ozzie and Harriet, 1952–66), and Jim Anderson (Father Knows Best, 1954–63) demonstrated how the "new father" behaved, and June Cleaver, who baked cookies in pearls and high heels, personified the idealized middle-class mom so perfectly that she became a cultural standard nostalgically celebrated decades after the show's run ended.

The Changing Structure of the Middle-Class Family
The decade of the 1950s was the golden age of the traditional family unit. Yet by 1960, changes were under way that would transform the middle-class family in myriad ways. In 1960 the U.S. Food and Drug Administration licensed the sale of the oral contraceptive known in the vernacular as "the pill." Within two years, more than a million women were using it. It allowed women to control the timing and spacing of pregnancies, which made it easier for women to work outside the home. By the end of the decade, more than 80 percent of married women of childbearing age were using the pill.
In 1963, Betty Friedan's The Feminine Mystique was published, firing the first shot in the revolution led by second-wave feminists, who were overwhelmingly middle-class women. In 1966, learning from the organizations that were demanding civil rights for African Americans, Friedan and others founded the National Organization for Women. One of the group's first priorities was pushing for enforcement of Title VII of the Civil Rights Act of 1964, which outlawed major forms of discrimination against racial, ethnic, national, and religious minorities—and against women. The proportion of women in the workforce increased to more than 40 percent during the 1960s, and that change accelerated over the next decades. By 1985, 71 percent of women between 25 and 44 were in the workforce. That same year, they held 49 percent of professional jobs, 39 percent of jobs in banking and financial management, and 36 percent of management jobs. These changes, and doubtless others as well, contributed to momentous changes in the structure of middle-class families. The fertility rate, which reached 3.7 in 1957 at the height of the baby boom, dropped below the replacement level by the early 1970s. The divorce rate also rose sharply. Although the introduction of no-fault divorce laws was a factor, scholars generally agree that the increase in the number of women working full-time outside the home, and their resulting economic independence, was the single most significant factor. Female-headed households reached 5.5 million in 1970, a 50 percent increase in a generation. The number almost doubled again by 1989. The greatest increase in the decade of the 1970s occurred among college-educated women. Single-parent families headed by men were fewer, but their increase was even more dramatic, with the 2.8 million in 1989 representing a 132 percent increase over 1970 numbers. The nuclear family did not disappear, but its numbers did shrink in the decades after the 1950s. By the turn of the new century, the middle-class family would encompass a variety of configurations. In 2010, nuclear families accounted for just one-fifth of all American households. The middle-class family of the 21st century may be a single-parent family, a blended family, a stepfamily, a childless family, an unmarried couple with children, or a multigenerational family, and any of these configurations may involve gay or lesbian parents.
Even among nuclear families, traditional gender roles may be overturned, with two parents working full-time or a stay-at-home father and a breadwinner mother. The larger question may be the survival of the middle-class family itself, whatever its size and shape.

The Disappearing Middle Class
In the 21st century, the definition of "middle-class family" is difficult to pin down. The income used
to designate this group ranges from $32,500, at the lower end of income for lower-level white-collar workers, to $250,000, the cutoff used by both President Barack Obama and his Republican opponent in the 2012 election. The Census Bureau placed the median household income in 2012 at slightly more than $50,000, the lowest since 1996. Regardless of the income cited, researchers agree that just as the global middle class is dramatically increasing, the middle-class family in the United States is shrinking. Historically, most Americans, regardless of income, have viewed themselves as middle class. About half of them still did in 2012, including about 50 percent of those earning more than $100,000 a year, but nearly one-third self-identified as lower class or lower middle class, a 7 percent increase since 2008. The increase was greatest among the young. A 2010 report from the U.S. Department of Commerce identified a house, a car or two in the garage, a vacation now and then, decent health care, and enough savings to retire and contribute to the children's college education as the common aspirations of the middle class but acknowledged that rising costs made fulfilling these aspirations a struggle even for a two-income family making more than $80,000. Even college-educated workers, whose incomes increased by a fifth between 1990 and 2008, found a gap between what they had planned and the reality of the 56 percent jump in the cost of housing, the 155 percent leap in out-of-pocket spending on health care, and the double-digit increase in the cost of college. The picture is even more dismal for African Americans. A 2007 Pew Research report revealed that 45 percent of black children whose parents were solidly middle class in 1968—a stratum with a median income of $55,600 in inflation-adjusted dollars—grew up to be among the lowest fifth of the nation's earners, with a median family income of $23,100, three times the rate at which whites experienced similar downward mobility.
At the same time, 48 percent of African American children whose parents were in an economic bracket with a median family income of $41,700 sank into the lowest income group. Even in the face of such bleak economic conditions, experts continue to insist that being middle class is about more than income. It is about believing that tomorrow, if not today, the family can afford a home, a car, a vacation, and a good education for
their children. With more Americans doubting they can achieve these dreams, it is perhaps unsurprising that in 2014 some American media are writing epitaphs for the middle-class family.

Wylene Rholetter
Auburn University

See Also: Addams, Jane; Baby Boom Generation; Breadwinners; Cult of Domesticity; Nuclear Family; Parenting; Single-Parent Families; Working-Class/Working-Poor Families.

Further Readings
Bledstein, Burton J. and Robert D. Johnston. Explorations in the History of the American Middle Class. New York: Routledge, 2001.
Descartes, Lara J. and Conrad Kottak. Media and Middle-Class Moms: Images and Realities of Work and Family. New York: Routledge, 2009.
Hemphill, C. Dallett. "Middle Class Rising in Revolutionary America: The Evidence From Manners." Journal of Social History, v.30/2 (1996).
Hernandez, Donald J. "Declining Fortunes of Children in Middle-Class Families: Economic Inequality and Child Well-Being in the 21st Century." Foundation for Child Development Child and Youth Well-Being Index Policy Brief. New York: FCD, 2011.
MacPherson, Ryan C. "Marital Parenthood and American Prosperity: As Goes the Middle-Class Family, So Goes the Nation." Family in America: A Journal of Public Policy, v.26/1 (2012). http://www.familyinamerica.org/files/Spring2012Files/FIA.Spring12.MacPherson.pdf (Accessed September 2013).
Mills, C. Wright. White Collar: The American Middle Class. New York: Oxford University Press, 1951.
Ochs, Elinor and Tamar Kremer-Sadlik, eds. Fast-Forward Family: Home, Work, and Relationships in Middle-Class America. Berkeley: University of California Press, 2013.
Ornstein, Allan. Class Counts: Education, Inequality, and the Shrinking Middle Class. Lanham, MD: Rowman & Littlefield, 2007.
Rudd, Elizabeth and Lara Descartes, eds. The Changing Landscape of Work and Family in the American Middle Class: Reports From the Field. Lanham, MD: Lexington Books, 2008.
Samuel, Lawrence R. The American Middle Class: A Cultural History.
New York: Routledge, 2013.
U.S. Department of Commerce, Economics and Statistics Administration. "Middle Class in America." (January 2010). http://www.commerce.gov/sites/default/files/documents/migrated/Middle%20Class%20Report.pdf (Accessed September 2013).

Midlife Crisis

A midlife crisis is considered to be a time of personal turmoil often triggered by the recognition of significant life events such as death, aging, illness, unhappiness, or dissatisfying roles and relationships, among others. Canadian-born psychologist and psychiatrist Elliott Jaques (1917–2003) is credited with coining the term midlife crisis in 1965, and the idea began to receive significantly more attention in the 1980s. This time of difficult transition stereotypically occurs at what is considered the midlife point, which in American culture is around age 40. However, given increased life expectancy over the past decades, the notion that 40 is the age when people begin to contemplate mortality is outdated. Additionally, personal choices such as pursuing more years of higher education, postponing marriage and children, and changing career paths have shifted early adulthood patterns to later years and have contributed to the expansion of the midlife crisis age range. Definitions of age ranges are elastic; therefore, current research has expanded the midlife crisis period to include ages 40 to 60, although episodes have been reported both before and after this 20-year span.

Causes
There are various schools of thought as to what incites a midlife crisis. Some researchers believe that midlife crisis is prompted by internal triggers and that personal reflection highlights the gap between current achievements and aspirations. Other researchers attribute midlife crisis to outside events, including job loss or career disappointment; health problems; death of parents, siblings, and other relatives; marital dissatisfaction; divorce or separation; extramarital affairs; empty nest syndrome; and the stress of caring for aging parents. Despite the varying theories about the causes of midlife crises, there is agreement that this period is eventful, known to cause

higher levels of stress, and that the aging process exacerbates the sense that the midlife point marks the beginning of the decline to come. The notion of the midlife crisis has been amplified by several factors. One is the sizable number of baby boomers (people born post–World War II, 1946–64) who have recently experienced or are currently in the middle-age phase. Their search for meaning, and the influence the group wields because of its size, have proliferated the notion of midlife crisis and given it additional emphasis. Another contributing factor is media attention. News reports, special-interest stories, social commentators, product advertisements, movies, and television shows can foster a culture of fear about midlife crisis. Media attention has the power to bring this concept to the forefront; therefore, people who had been satisfied with their personal lives may now question themselves and perhaps develop feelings of inadequacy.

Men Versus Women
Popular culture proposes that men experience midlife crises at higher rates than women. However, research supports that both men and women report experiencing crises at fairly equal rates. Physiological factors such as andropause for men or menopause for women are considered to be triggers of midlife crises. Andropause, sometimes referred to as "man-opause," includes a decrease in testosterone, which in some instances can lead to loss of energy, lack of focus, depression, mood swings, and erectile dysfunction. Similarly, menopause in women entails a reduction in female hormones as well as the permanent cessation of monthly cycles and marks the end of the fertile phase of a woman's life. In some instances, midlife crises can lead to depression, which can be characterized by a change in eating and sleeping habits, fatigue, restlessness, anxiety, irritability, thoughts of suicide, and loss of interest in activities once enjoyed.
In these cases, seeking professional assistance is often advised to find suitable treatment. Behavioral or talk therapy, in combination with prescription antidepressant medication, is a common treatment option when major or clinical depression associated with a midlife crisis is identified.

Cultural Construct
Society recognizes midlife crisis as a time when individuals make drastic decisions such as buying
sports cars, seeking divorce or separation, dating younger partners, and making significant career changes. The term is recognized by popular culture and remains current in media, film, literature, and research. However, some believe that midlife crisis carries a negative connotation and therefore prefer to denote this phase in more neutral terms such as midlife transition. This perspective presents midlife transition as a set of transformations over time related to stages of personality development and focuses on a time of growth rather than stress. Those who share this point of view also recognize the period as an opportunity for self-actualization and personal betterment rather than as a negative time of confusion. Increasingly, this period is seen as a typical part of life and a normal transition to the next phase. A disconnect has developed between midlife crisis as a cultural understanding and as a theoretical research concept. Popular belief leads to overestimating the frequency with which people experience midlife crises and the amount of stress such crises produce. On the contrary, research suggests that stressors attributed to this period, such as career decisions and marital uncertainty, are typically experienced during younger years rather than at the midlife point. Significant changes and decisions tend to be made earlier in life, when the foundation is being set for career and relationships. Many Americans associate this period with the natural sequence of the maturing process rather than with the realization of mortality or unusual high-stress situations. Yet there are others who do not acknowledge, or hold skeptical views of, the midlife crisis. Research has found that only about 10 percent of the U.S. population experiences this time of uncertainty and has concluded that undergoing a midlife crisis is the exception rather than the rule.
Midlife crisis is not a universal notion because the concept of midlife arises from specific historical and social circumstances. This concept has changed over time depending on life expectancy, who defines it, and cultural age markers. According to the research, some cultures do not experience midlife crises, but the Western emphasis on remaining young feeds the midlife crisis concept. Given America’s youth-oriented society, it is not surprising that personal evaluation of life based on comparisons between optimal standards and the current life situation can often lead to soul-searching
and self-awareness that can result in life-altering decisions.

Flor Leos Madero
Angelo State University

See Also: Baby Boom Generation; Empty Nest Syndrome; Family Stress Theories.

Further Readings
Brandes, S. H. Forty: The Age and the Symbol. Knoxville: University of Tennessee Press, 1985.
Brooks-Gunn, J. and B. Kirsh. "Life Events and the Boundaries of Midlife for Women." In Women in Midlife, G. K. Baruch and J. Brooks-Gunn, eds. New York: Plenum, 1984.
Erikson, E. H. Childhood and Society. New York: Norton, 1963.
Jaques, E. "Death and the Midlife Crisis." International Journal of Psychoanalysis, v.46 (1965).
Wethington, E. "Expecting Stress: Americans and the 'Midlife Crisis.'" Motivation and Emotion, v.24 (2000).

Midwestern Families

Often referred to as America's Heartland, the Midwest is best known for its farmlands and small-town way of life, but it also features large metropolitan cities, large industries, and major corporations. The Midwest is generally considered to include 12 states in the north-central United States: North Dakota, South Dakota, Minnesota, Wisconsin, Iowa, Nebraska, Missouri, Kansas, Ohio, Indiana, Illinois, and Michigan.

Midwest Expansion
Significant expansion in the Midwest dates back to 1862, when President Abraham Lincoln signed the Homestead Act into law. This act turned millions of acres of public land over to small farmers at low cost. The completion of the railroad connecting the East and West Coasts further facilitated migration to the Midwest. After early settlement, a land-tenure system emerged in which family farms dominated the landscape. With the Great Depression came struggles for many farm families; many people began to question how small, rural communities
could sustain themselves in tough times. The New Deal programs and World War II both played a role in improving rural conditions; however, they also marked a time of great migration from rural to urban areas.

Midwestern Residents
Family played a key role in the development of the Midwest. Early settlers created a culture centered on farming. Native-born migrants often moved in extended family groups and settled on neighboring lands. Many European-born migrants could not afford to move as entire families, so some migrated and settled first, and others followed later. Often entire neighborhoods of relatives or friends migrated together and settled new neighborhoods in the Midwest, preserving their cultural heritage. Farms were often passed through inheritance from one generation to the next. Early family farms depended on all members pitching in to do farm labor and household work. Immigrant families in particular depended on family labor, so these families tended to be large. Poor families often had to pull children out of school at a young age so they could help with farm chores. Families with more money were able to afford hired help to assist with farm and household chores, which gave wealthier farmers and their wives time to volunteer for church, civic, and other community causes. A dark side of the reliance on family for farm tasks was that many women and children suffered physical abuse, which often went unnoticed due to the remoteness of farms at the time. Many children also suffered serious injuries or death as a result of accidents on farms. As industrialization helped simplify agricultural production, fewer children were needed to help complete farming and household tasks, and families became smaller. In many ways, the midwestern social patterns of cooperation, vigilance, and volunteerism that were established during frontier times remain today.
A small-town ideology helps shape social relations in rural Midwest towns where life revolves around family, friends, neighbors, school, and church. Volunteer organizations, church groups, historical societies, volunteer firemen, and various service groups provide social opportunities for many rural residents. Residents of small towns are often well acquainted with one another and monitor each other’s behavior and property. Although familiarity

with one another might be advantageous to many in small towns, new residents who move in often struggle to gain insider status.

The Midwest shows mixed results on national indicators of well-being. In terms of child well-being, including measures of family economic well-being, health, safety, risky behaviors, emotional well-being, and educational attainment, states in the Midwest tend to fare well. In contrast, growing numbers of midwesterners live in poverty. By the early 21st century, those struggling with extreme poverty were more likely to be white, have a high school diploma, own a home, and live in the Midwest. Concentrated poverty was a growing problem across the country, but midwestern metropolitan areas were particularly hard hit, and African American families were among the poorest residents. The metro areas with the highest unemployment, lowest incomes, and worst schools also had the highest black populations.

Economy

The Midwest economy is largely based on heavy industry and agriculture, both accounting for thousands of jobs. Rural Midwest communities have historically had lower poverty rates than the national average, in part due to their reliance on a farm-based economy. However, declines in farm employment have negatively impacted small towns. During the 1980s, rural areas in the upper Midwest experienced a severe farm crisis that led to the closure of thousands of businesses, including many family farms. The farm crisis peaked during 1986 and 1987, with many families facing loss of income, foreclosures, job loss, and underemployment. Most midwestern men and women worked off-farm prior to the farm crisis, but about one-quarter of men and more than one-third of women reported finding off-farm employment to help meet their families’ financial needs during the crisis. Numerous midwestern communities experienced severe economic problems as a result of the 1980s farm crisis.
Populations in rural counties declined as residents moved to larger communities to seek employment. As farm families reduced their spending to cope with the crisis, small-town businesses felt the impact. Many small communities failed to rebound economically after the farm crisis. Since that time, social and economic changes have
led to gains for some communities and losses for others—some towns benefit from their proximity to larger cities and serve as bedroom communities, but those that are more remote face considerable problems.

Like many inner suburbs around the country, those in the Midwest have experienced great economic decline. Hardest hit have been those that historically depended on heavy industry such as meatpacking, steel and paper mills, assembly lines, and other manufacturing. Factories have closed or moved to other cities or overseas, where they can better compete in the global economy. Given their emphasis on heavy industry, Ohio and Michigan have been particularly hard hit by economic decline and population loss. Rapid demographic changes and competition for low-wage jobs have increased tensions among rural residents. Families that remain face considerable economic challenges, as well as diminishing social services, poor schools, substandard housing, unemployment, and violence. A number of midwestern cities are characterized by high levels of working-poor families, and workers with low education and limited skills are increasingly considered unemployable. Technological advances mean that factories and other industries now rely on fewer manual laborers while needing more educated workers. Some struggling midwestern cities have sought to attract new residents by becoming immigrant friendly, luring both low-wage laborers and highly skilled entrepreneurs.

Cultural Diversity

The dominant view of the Midwest is that it is racially homogeneous, but the region is characterized by great ethnic diversity. Most late-19th-century settlers were European farmers and peasants who sought agricultural opportunities. They founded towns designed to replicate European cities and celebrated rituals of their homelands.
Today, the Midwest still has large concentrations of people of German and Scandinavian descent who remain where their ancestors first settled. During the mid-20th century, thousands of African Americans migrated from the South into midwestern urban areas. By the end of the 20th century, towns and cities across the Midwest experienced a loss of their native population, but a rapid increase of
immigrants helped offset these losses. Most were Hispanic immigrants, often of Mexican descent. Although the majority of Hispanic immigrants moved to urban areas in the West, Southwest, and Northeast, a considerable number also concentrated in smaller communities and rural areas in the Midwest, as well as in midwestern cities. Many were attracted to rural areas by low-wage employment opportunities in farming, poultry processing, meatpacking, and horticulture, as well as by a slower pace of life.

Immigration has had complicated effects on rural communities across the Midwest. Communities that had experienced population loss as younger, native residents sought employment in larger metro areas are now thriving due to an influx of immigrants. Although immigrants have helped revive small towns, they also present challenges. Midwestern communities struggle with strains on schools, health care, social services, and employment; a significant number of Hispanic immigrants are also in the United States illegally. Hispanic immigrants in particular tend to have a disproportionately high poverty rate, and more recent immigrants are more likely to live in poverty than their counterparts who immigrated earlier. Many immigrants face discrimination in housing and other services, as well as poor educational opportunities because schools are unprepared to teach non-English-speaking students. In addition, rural communities often have limited capacity to respond to the social and economic changes brought about by immigration and migration. Given the high birthrate and young age of many Hispanic immigrants, their population will likely continue to increase, and they will continue to affect growth and development in the region.

Conclusion

In the future, midwestern residents and communities will continue to grow more diverse, but the region will likely still rely on many of the influences that gave it its start.
Agriculture and manufacturing will undergo changes with technological advances, but both will remain a critical part of the midwestern economy. Likewise, quality of life and a low cost of living will continue to draw newcomers to the region.

Kelly A. Warzinik
University of Missouri
See Also: Southern Families; Southwestern Families; Standard North American Families.

Further Readings
Cayton, A. R. L., R. Sisson, and C. Zacher. The American Midwest: An Interpretive Encyclopedia. Bloomington: Indiana University Press, 2006.
Council of State Governments Midwestern Office. “Signs of the Times: Midwestern Demographic Trends and Their Implications for Public Policy” (2002). http://www.csg.org/knowledgecenter/docs/mw-Signs.pdf (Accessed March 2014).
Longworth, R. C. Caught in the Middle: America’s Heartland in the Age of Globalism. New York: Bloomsbury, 2008.
Walzer, N., ed. The American Midwest: Managing Change in Rural Transition. Armonk, NY: M.E. Sharpe, 2003.

Military Families

According to recent data, more than 2 million service members have been deployed since 9/11. Along with those serving in the military, there are several million military spouses, children, and members of extended families in the United States. While the military family has been a fixture in American society since the Civil War, the wartime engagements in Iraq and Afghanistan (Operation Iraqi Freedom/Operation New Dawn and Operation Enduring Freedom, respectively) have brought into the spotlight the modern military family and its struggles as well as its accomplishments. First Lady Michelle Obama and Dr. Jill Biden have played significant roles in garnering attention and bringing increased awareness of and support for military families into the public discourse through their initiative, Joining Forces. The media present military separation events and reunions as well as special segments about military family issues. Military families are also made visible through various promotions and marketing materials that give servicemembers and their family members discounts and unique offers (such as at Disneyland). Wounded servicemembers and their struggles are often highlighted through charitable functions and solicitations for donations.

The increased presentation of the American military family in U.S. culture brings into bold relief many of the common concerns with this family form. The military lifestyle entails frequent moving for the family as well as other qualities not found in civilian culture. Military families often experience military-related separations due to servicemember trainings and/or deployments; deployments in particular are a fixture for the modern American military family. The stressors military families face, and how they cope with them, are often the focus of research aimed at increasing their resilience and ability to fulfill their responsibilities to the military. Military children have specific issues to deal with and do not receive as much research attention as spouses and couples. Military family members returning from deployments with physical or mental effects are receiving significant attention in the public sphere as well as in research studies, given what they face upon reintegration. Same-sex military couples and their families have also received increased media attention and federal recognition in recent years. The changing face of the American military family also includes advances in communication technologies and services available for managing military family matters.

Military Life

Military family life is distinct from civilian family life. For example, since the American Revolution military families have experienced relatively frequent relocations. Today, a permanent change of station (PCS) is relatively common in military families and occurs when the servicemember is assigned to a different military base, which can be within the United States or in another country. When PCS orders are given, the military family must decide if and how to move to a new home.
With frequent changes in household location come other changes for the military family, such as finding new employment for spouses, schooling for children, and immediate support networks for the family. Military family life is also unique given the strong culture of the military, which often spills over into the family dynamic. The hierarchical structure of the military is a strong influence on the family unit. Military bases also offer housing to servicemembers and their families, which can contribute to a sense that the entire family is a part of the military, not just the servicemember. Military families often




The North Carolina Guard 1452nd transportation company returns home. Military members are separated from their families during short-term temporary duty assignments or long-term combat deployment, which may last from several weeks to more than a year. During these assignments, the servicemember is stationed away from his or her military base.

bond with one another over shared experiences, values, and lifestyle in ways not as common in other types of organizations. One of the defining features of military life is the frequent separation between servicemembers and their families due to military obligations.

Military Separations

One of the most significant periods in military family life, if not the most significant, is separation due to military duty. Separations range from short-term temporary duty assignments (TDY) to long-term combat deployments. During TDY, servicemembers are required to live apart from their families for such things as trainings and additional schooling. Deployments are long-term assignments that relocate the servicemember away from his or her military base and may last from several weeks to more than a year. Each branch of the military has typical deployment lengths; for example, the Air Force deploys for the shortest time (i.e., four months) and the Army for the longest (i.e., 12–18 months). Regardless of type and length, separations intervene in and alter military family members’ lives.

Military servicemembers and their families must remain prepared for separations at any time during the servicemember’s service, as this is part of what they “sign up” for in the military. The frequency of separations varies due to such factors as the unit assigned, specific service duties, and current needs within the military branch. From servicemember to servicemember, the difference might be significant. For example, some servicemembers have deployed once while others have deployed five or more times since the engagements in Iraq and Afghanistan began. The uncertainties related to separations are influenced by several variables. For example, will the family know about the separation in advance, or will it be an unexpected assignment? Will the military assign the servicemember to a combat zone, or will he or she deploy to a “safe” place? Is the separation long term or short term? Will the family have access to consistent and reliable forms of communication, or will contact be more sporadic? The process of deployment includes notification (e.g., when the servicemember is officially informed by the military of a future deployment), predeployment activities (e.g., trainings and briefings), deployment to the new location (e.g., traveling out
of country, working in a combat zone), and redeployment (e.g., reunion with family members, transitioning back into American life). Each stage of the process typically involves certain emotions and events, often bringing tensions or challenges within the family.

Notification occurs officially when the servicemember receives orders, but servicemembers may also gain information about potential separations through informal means. Before a separation is considered official, servicemembers and family members are expected to prepare themselves for such an event. Once orders are secured, military families enter predeployment. During this stage the servicemember might attend training sessions in other cities or states, leaving family at home. Depending on the training location and expenses related to traveling, family members might visit the servicemember or the servicemember might return home during this period. Even if the family does not have to temporarily separate during predeployment, military family members must engage in specific activities such as predeployment briefings. Here they are given information about what will happen during the deployment and the resources available to them; however, while the briefings may be informative, family members may still feel uneasy or uncertain about the future. Military family members might experience conflict and distancing during the predeployment period given the myriad mixed emotions, positive and negative, felt when preparing for the time apart. Spouses and parents must begin discussing and potentially renegotiating their family roles for the different relational context of deployment. They also need to prepare for the possibility of the deployed spouse not returning or coming back injured; therefore, family members must have difficult conversations about such topics before the deployment. Children also experience challenges during predeployment.
For example, they must prepare themselves for having only one parent in the home during the separation and taking on new responsibilities within the family. In light of their concerns regarding the separation, some military spouses consider temporarily moving themselves and their children to live with or close to other family members or friends rather than remain in their current residence. The deploying servicemember also must face the reality of leaving family and

prepare for deployment duties. Sometimes these preparations involve emotionally distancing themselves from family and friends.

Once deployment occurs, military families often experience another set of mixed emotions. While they feel the absence of their family member, they can also feel relief once the separation has started. During deployment the family must adjust to the changes previewed in predeployment. While some families find a comfortable routine, other families are challenged by the changes that come with separation, and still others cycle through periods of contentment and challenges. Parents may find themselves feeling like single parents and spouses like singletons. Children may also have difficulty dealing with having only one parent in the home. Parents and children might struggle with gender role changes. The deployed family member faces the stressors of fulfilling military duties while maintaining family ties back home. Communicating and maintaining a sense of family is a core concern for all military families. Thankfully, during this period some families are given the opportunity for a brief reunion (R&R, or rest and recuperation) with the deployed family member. R&R is not guaranteed, but if granted it may last from a few days to a few weeks.

As the deployment draws to a close, family members prepare for their reunion. Emotions again may be complicated. Family members may look forward to reuniting while simultaneously enjoying the time apart and the new routine, which makes it difficult to see the separation end. Each person must begin the process of reintegration before the physical reunion occurs. Redeployment begins once the deployed family member returns home. As often presented in the news media, servicemembers are frequently greeted with a formal homecoming. Nevertheless, reunions do not necessarily involve such preparation and fanfare.
The most significant concern during redeployment is the reintegration of the deployed family member. This process may range from seamless to traumatic. Spouses must readjust to one another, and parents and children face reestablishing their relationship dynamics. As military families go through the redeployment phase, they likely contemplate future deployments or other military-related separations. Overall, the deployment cycle includes these three periods; however, each family experiences a
unique combination of events and emotions as it moves through the process. Some families have several ups and downs within each phase, while others maintain steady levels of stress and satisfaction across all three periods of deployment. No two deployments are exactly the same; however, military family members report some common stressors, or major life events. Spouses might have to change jobs, move the family to a new home, or experience health problems during deployment. A couple might become pregnant, have a baby, or experience relationship problems. Children may have to change schools, make new friends, or move in with another family member (e.g., grandparents). Management of deployment stressors varies as well. Family members often seek support from others, such as neighbors, extended family members, or other military families. Sometimes military family members can alleviate their own stress by helping others (e.g., participating in family support groups). Coping strategies might also be dysfunctional, such as avoiding problems, abusing substances, or taking out frustrations on others (e.g., engaging in child abuse). The conditions of recent military engagements in Iraq and Afghanistan have raised concerns about military divorce rates as well as increased incidents and awareness of servicemembers returning with injuries.

Military Marriages and Divorce

Given the stress associated with military deployment, the rates of divorce in each of the military branches have become a source of increased concern within the military, the media, and scholarly research. While divorce within the military population has historically been less common than in the civilian population, recent increases reported by some studies have created alarm within the community and the larger society. While some challenge the validity of studies showing higher rates of military divorce, it is worth noting some of the potential risk factors for military couples.
Deployment is one possible risk factor given the long periods of time spent separated within the marriage. Spouses may grow apart and/or recognize sources of dissatisfaction in the relationship. Stress associated with deployment might contribute to the disintegration of the marriage. Another risk factor is if the wife has deployed during the marriage. Some research shows higher rates of divorce among this group. This may be due
to added stress on these couples, or on families with children, when the wife/mother is away for a period of time. Women are often tasked with second shifts (i.e., being primarily responsible for housework and child care after coming home from work) and with providing emotional labor within their marriages and families; these responsibilities must be redistributed to other family members, most likely the husband, when women deploy for their jobs. The added stress combined with the separation from family members may contribute to higher rates of relational dissolution in this group. However, some recent studies report that deployment may also have positive effects on military marriages that outweigh, or at least balance out, the negative effects often emphasized in the media. Deployments can offer economic benefits, such as increased salaries and tax benefits, and deploying servicemembers and their families can feel validated through their service and sacrifices, which may further solidify their commitment to one another and to the military. Given the potential delayed effects of deployment and military service on military couples and families, continued attention to the patterns of and reasons for military divorce is warranted.

Returning Wounded Servicemembers

Of grave concern to families, the military, and society is the return of mentally and physically injured servicemembers. Two injuries receiving significant attention in scientific research and the media are post-traumatic stress disorder (PTSD) and traumatic brain injury (TBI). These injuries are of particular concern given their invisible nature (i.e., there might be no outward signs of the injury as with other physical traumas, or the visible wounds have healed while other injuries remain), which creates challenges for diagnosis and treatment as well as for gaining support from family and friends.
PTSD is considered a mental health condition and may occur in servicemembers after one or more traumatic experiences during deployment. Symptoms of PTSD include but are not limited to sharp mood swings, sensitivity to loud sounds or quick movements, nightmares, and reliving the traumatic event. PTSD may last a few weeks or as long as several years. Treatment is vital for the long-term mental health of the servicemember; however, stigma attached to mental health
problems can serve as a barrier to seeking help from health care providers and support from friends and families. Medications as well as therapies are available for treating PTSD.

TBIs are physical injuries to the brain that disrupt its normal functioning. In the military context, they may be caused by events such as bomb blast exposures or vehicular accidents. Estimates are that as many as 20 percent of returning servicemembers have a TBI, which can range from a minor concussion to a severe brain injury. While severe TBI is visually apparent, many brain injuries in military service personnel go undiagnosed or misdiagnosed, as screenings are not particularly advanced or widely available in the field or at home. Symptoms of TBI may include headaches, problems with balance, and lack of concentration. Given the varied causes and effects of TBIs, treatment often involves a multifaceted approach of physical, mental, and social rehabilitation. Military family support is an important factor in the success of any PTSD or TBI treatment.

Same-Sex Military Families

The face of the official American military family is changing given two recent policy decisions. First, in 2011 the U.S. policy banning gays from openly serving in the military (known in nontechnical language as Don’t Ask, Don’t Tell, or DADT) ended. While gays had been marginalized in the military since the Civil War, DADT was put into place during President Bill Clinton’s first term in office. Its repeal has opened the door for gay servicemembers to freely and publicly participate in, discuss, and present their family relationships, which may include same-sex marriages, civil unions, or domestic partnerships, depending on the state. This change in policy has also affected children with gay military parents by opening up their recognition in the military system.
The second important policy change affecting military families in the United States was the partial repeal of the Defense of Marriage Act (DOMA). Also enacted during President Clinton’s administration, DOMA was a federal law with two key provisions. First, DOMA codified that no state was required to recognize a legal marriage from another state. Second, DOMA defined marriage at the federal level of government as being between one man and one woman, and a spouse as an opposite-sex partner in

a marriage, which therefore excluded same-sex marriages, civil unions, and domestic partnerships from receiving equal recognition in issues such as Social Security benefits and federal tax return marital status. With the partial repeal of DOMA, same-sex military couples who have a state-sanctioned marriage, civil union, or domestic partnership may receive federal recognition. These historic changes in American policy have radically altered what is, and what will become, the military family in the United States.

Communication Technologies and the Military Family

The American military family has seen, along with the rest of society, extreme changes in the communication technologies available for maintaining military as well as personal relationships. For most of their history, military families have relied on face-to-face communication, letter writing, and, more recently, telephone conversations and e-mail exchanges. Today, military families have available and capitalize on modern technologies, such as Facebook, Instagram, and text messaging, to stay in touch, particularly during military-related separations (e.g., deployments). A recent and highly valuable technology is video conferencing (e.g., Skype), which allows family members to see as well as hear their loved ones when separated. Applications, or apps, are also available through smartphones and tablet devices. For example, LifeArmor is a self-management application offering information and skill development tools regarding mental health issues. In comparison, the National Military Family Association (NMFA) offers MyMilitaryLife, a more general app providing support to military spouses seeking assistance in managing military family life.
Not only do these technologies provide additional modes of communication and information sources, but the affordability of mobile phones and Internet connections has also significantly affected the relational patterns of military families (e.g., increased and diversified forms of contact). While the latest technologies are available, their use does not ensure successful relational maintenance. Some military family members experience difficulty seeing their loved ones when they cannot be physically near them. Others may feel they can communicate too frequently, which takes away
from the excitement of reconnecting after time apart. At a practical level, communication technologies do not always function as intended (e.g., slow service), leaving family members frustrated with the mode of communication. Regardless, the increased availability and affordability of such things as phone contact and Internet connections have forever changed how military family members relate to one another.

Services for Military Families

Given the increased operational tempo (OPTEMPO) of the military and the palpable effects of the United States’ extended engagements around the globe, both government and private sources have improved as well as increased the services available to military families; however, recent economic problems in the United States and the global economy have threatened some of those services. While an exhaustive list is not possible in this forum, a few notable services are the Operation Purple Camp Program; family support centers and groups; privately funded programs such as Talk, Listen, Connect; and the military-sponsored Web site Military One Source, which disseminates information and highlights services for servicemembers and their families.

The Operation Purple Camp Program is a series of summer camp opportunities for military children who have experienced or will experience a parent’s deployment. The National Military Family Association, a nonprofit organization dedicated to serving American military families since 1969, launched this program in 2004. Along with typical camp activities, during the week-long camps the military children are encouraged to discuss their thoughts and feelings about deployment and are educated about how to manage the stressors of military family life.

Another resource for military families is the various family centers located on military bases, as well as formal and informal family support groups.
Family assistance centers exist across all branches of the military and have come into formal existence within the last several decades; for example, Air Force Family Support Centers and the Army Family Liaison Office were created in 1982. These centers provide support services for military families. For example, U.S. Army installations include Family Readiness Centers where families can access information as well as services
available to them. Family Readiness Groups (FRGs) are often implemented, too; these are military-sponsored organizations consisting primarily of military family members, though they may include private citizens and other military personnel such as chaplains. Spouses have organized their support of the troops since the Revolutionary War, but FRGs did not become official entities until the first Gulf War. Technology has affected how such groups are managed since then. Now, FRGs can exist online and/or in face-to-face settings, but their main purpose is the same: to provide military family members with support services. These groups are created within each military unit and are frequently led by officers’ spouses, who are in contact with military members who keep them abreast of available information and resources. Often thought of as support groups, FRGs vary widely in their implementation given, for example, the different cultures of each military base and its leadership.

Privately funded programs such as Sesame Street’s Talk, Listen, Connect are also available to military families. Talk, Listen, Connect is an outreach program provided by the Sesame Workshop in collaboration with several organizations and companies such as Wal-Mart. It provides information and support for families, in particular children, experiencing deployment, servicemember injuries, or the death of a family member. Since its introduction in 2006, the program has offered a range of programming, most notably videos featuring Sesame Street characters.

Military One Source, also a partner in Talk, Listen, Connect, is an extensive Web site provided through the Department of Defense that offers information spanning military life and employs 24-hour staff who can help servicemembers and their families navigate their immediate and long-term needs (e.g., health and wellness issues, deployment readiness, and child care).
This site, online since 2004, provides necessary and timely services for military family members, particularly during deployment when family members are displaced from one another across miles and time zones. These are only a few examples of services available for military families in the United States and abroad. As military engagements extend and shift across the globe, services targeted at military families will continue to be important for their
well-being and resilience as they confront the challenges of military family life. The military branches and the Department of Defense, as well as society writ large, will need to increase and diversify services to address the changing face of the military as well as the numbers of families affected by wartime deployments and peacekeeping missions. Erin Sahlstein Parcell University of Wisconsin, Milwaukee See Also: Defense of Marriage Act; Primary Documents 1942; Relational Dialectics; Same-Sex Marriage; Technology; Twenty-Four-Hour News Reporting and Effect on Families/Children; War on Terror. Further Readings Castro, Carl A., Amy B. Adler, and Thomas W. Britt, eds. Military Life: The Psychology of Serving in Peace and Combat. Vol. 3. Westport, CT: Praeger, 2006. MacDermid Wadsworth, Shelley and David Riggs, eds. Risk and Resilience in U.S. Military Families. New York: Springer, 2011. Maguire, Katheryn and Erin Sahlstein. "In the Line of Fire: Family Management of Acute Stress During Wartime Deployment." In Communication for Families in Crisis: Theories, Methods, Strategies, Fran C. Dickson and Lynne Webb, eds. New York: Peter Lang, 2012. Military One Source. http://www.militaryonesource.mil (Accessed May 2014). Sahlstein Parcell, Erin and Katheryn C. Maguire. "Turning Points and Trajectories of Military Deployment." Journal of Family Communication (in press). Sahlstein, Erin, Katheryn C. Maguire, and Lindsay Timmerman. "Contradictions and Praxis Contextualized by Wartime Deployment: Wives' Perspectives Revealed Through Relational Dialectics." Communication Monographs, v.76 (2009).

Million Man March On October 16, 1995, approximately 870,000 black men gathered on the National Mall in Washington, D.C., for the Million Man March. Estimates of the number in attendance range from 400,000
(according to the National Park Service) to 1.5 million (according to group organizers); a Boston University research group estimated that there were 870,000 in attendance. The march was a response to a call made a year earlier by Minister Louis Farrakhan, encouraging black men to take responsibility for the betterment of themselves, their families, and broader society. It also sought accountability from the U.S. government and American corporations. Though met with controversy, the movement is considered to have been quite successful and has resulted in a number of subsequent rallies. More broadly, the march was a response to institutional racism and classism that disrupt black families. These factors include high levels of unemployment, the over-incarceration of black men, a welfare system that encourages poor men to leave their families, and the perpetuation of racial stereotypes in the media. It called on black men to unite and continue the black tradition of seeking equality, caring for society's most vulnerable, and fighting for justice. Notable speakers included but were not limited to Martin Luther King III, Reverend Benjamin Chavis (National Director of the Million Man March), civil rights activist Rosa Parks, poet Maya Angelou, Reverend Jeremiah Wright, Senator Adelbert Bryan, Reverend Jesse Jackson Sr., Minister Louis Farrakhan (leader of the Nation of Islam), Reverend Addis Daniel, Dr. Cornel West, Dr. Betty Shabazz, as well as activist and educator Dorothy Height. The speeches centered on three themes and challenges for black men: atonement, reconciliation, and responsibility. Participants were challenged to atone for their moral and ethical mistakes and to make amends with the Creator. Speakers also called on participants to heal or reconcile personal and social relationships and to unite with their communities to promote a more just society. Finally, participants were asked to take responsibility for themselves, their families, and broader society. 
Speakers suggested that a number of the most dire problems facing black America were the result of black men who had not "stood up" as fathers, husbands, citizens, and community members. The march and its organizers also called on the U.S. government and American corporations to atone for the atrocities they had committed. Thus, organizers encouraged the government to take responsibility for its role in slavery (called by
the movement "the Holocaust of African enslavement"), the criminalization of black people, the disregard for treaties with Native Americans, unjust foreign policy, and the erasure of people of color from American identity and culture. Corporations were called to take responsibility for social affairs, to limit the negative effects of their businesses on society, to treat workers with respect and dignity both in the United States and abroad, to invest in the communities in which they operate, to support black organizations and schools, to partner with black businesses, and to care for the environment. The Million Man March also went by a second name: the Day of Absence. Organizers encouraged those in attendance and at home to respect it as a sacred day of atonement, reconciliation, and responsibility. Black people were asked to stay home from work, school, or other events and, instead, to focus inward through meditation and prayer. Organizers also hoped that the day would serve as an opportunity to register black voters and to develop a Black Economic Development Fund to facilitate the financial and entrepreneurial development of black people in the United States. Several controversies resulted from this event. The first concerned the exclusion of women. Maulana Karenga, the author of the mission statement, suggested that the goal was to foster responsibility in black men but not at the expense of black women. While women were not encouraged to attend as participants, there were several female speakers, including Rosa Parks, Maya Angelou, Dr. Betty Shabazz, and Dorothy Height, all of whom are regarded as strong feminists. A second accusation was that the march sought to resegregate society. Organizers emphasized that the rally was intended as an opportunity for black men to unite and work toward the benefit of and equality for people everywhere. The march encouraged black people, the U.S. government, and corporations to assist in the empowerment of black Americans. 
It did not call for social resegregation or racial separation. The Million Man March is credited by some as having helped with the reelection of President Bill Clinton, due to a dramatic increase in voting by black men. It is also said to have led to a decrease in black crime rates, an increase in black adoptions, and a greater level of community engagement and entrepreneurship by black men.

The march was followed by a Million Women March on October 25, 1997, and a Million Family March on October 16, 2000. The former sought to unite and empower black people in the United States while the latter sought family unity and racial and religious harmony. Further, in 2005, on the 10th anniversary of the Million Man March and in the wake of Hurricane Katrina, Louis Farrakhan kicked off the Millions More Movement. The Millions More Movement is more inclusive than previous rallies and seeks to unite across race and religion to resist racism. The movement focuses on issues such as police brutality, gentrification, low wages, access to health care, and affordable housing. The 2005 rally also included one gay speaker, Mr. Cleo Manago, but many protested the continued silence imposed on the black gay and lesbian community. Kristin Haltinner University of Idaho See Also: African American Families; Kwanzaa; Promise Keepers; Segregation; Single-Parent Families; Working-Class Families. Further Readings Allen, Robert. “Racism, Sexism, and a Million Men.” The Black Scholar, v.25/4 (1995). Karenga, Maulana. “The Million Man March/Day of Absence Mission Statement.” The Black Scholar, v.25/4 (1995). West, Michael. “Like a River: The Million Man March and the Black Nationalist Tradition in the United States.” Journal of Historical Sociology, v.12/1 (1999).

Minimum Wage Promulgated by the Fair Labor Standards Act (FLSA) of 1938, the national minimum wage is the lowest hourly wage an employer can legally pay a worker for labor. Before the enactment of the national minimum wage, many workers regularly faced exploitation in sweatshops and factories under horrendous conditions for just pennies a day. Earlier attempts by union activists to create a mandatory minimum wage were struck down as unconstitutional by the Supreme Court, which held that
such mandatory provisions imposed by the state restricted the rights of individual workers to set a price for their own labor. As a part of his 1936 election campaign, Franklin Delano Roosevelt promised to find a way to constitutionally provide the American worker with a minimum standard of living necessary for good health, economic self-sufficiency, and general well-being. American workers were extremely pleased with setting a minimum wage as a measure of legal protection; however, employers and fiscal conservatives were vehemently opposed to the legal establishment of a minimum wage. Considered one of the key elements in sweeping labor legislation meant to both protect laborers and stabilize the economy after the Great Depression of 1929, the national minimum wage remains a highly debated issue that has moved beyond the constitutionality of the law. Mainly at issue now are the perceived costs and benefits associated with mandatory minimum wage laws. Essentially, the proponents of minimum wage laws are workers, labor unions, and advocates for workers’ rights, while opponents continue to be fiscally conservative policy makers, businesses, and some economists. Positive and Negative Consequences Since its establishment, the national minimum wage has resulted in both positive and negative consequences for American families. The positive aspect hinges on the fact that a minimum wage structure sets a legal limit under which wages cannot drop. In contrast, the negative aspect is that over the years the minimum wage has only increased in small increments, never actually keeping pace with the high cost of living. Historically, proponents of the minimum wage law have contended that it helps both the American worker and the economy. 
According to supporters, the minimum wage helps to improve the living standards for the poorest and most vulnerable groups, encourages and motivates hard work, stimulates consumption (by putting more money in the hands of consumers), and decreases the dependency on government support. Some proponents argue that opponents of mandatory minimum wage laws erroneously stereotype minimum wage workers as middle-class teenagers working part-time jobs. One recent study conducted by the Economic Policy Institute, nevertheless, found that the majority of minimum wage earners are adults working
long hours and living in low-income households. Furthermore, of the 4.5 million workers who directly benefited from increases in the minimum wage in 2010, over half were from families with total incomes of less than $35,000 per year. For each positive point made by proponents of the minimum wage law, the opposition offers a counterargument. For example, instead of improving the living standards for the poor and most vulnerable, the opposition argues, such a law will worsen conditions. They further maintain that external control of wage levels only hurts workers, as employers are forced to reduce the number of jobs available to keep up with mandatory wage setting. Instead of stimulating the economy, minimum wage law detractors argue, such government intervention will most likely cause price inflation, as businesses are forced to try to compensate for paying higher wages by raising prices. What is more, those opposed to minimum wage policy argue that wage increases are responsible for jobs moving away from workers, both within and outside the United States. Some economists have been vocal opponents of minimum wage laws, mainly believing that such policy amounts to bad economic strategy. Over the years, these economists have offered a couple of alternative approaches to aid the impoverished and stimulate the economy. The two most prominent alternatives to minimum wage laws offered by economists are the Basic Income approach and the Earned Income Tax Credit (EITC). In 1968, 1,200 economists signed a document calling for the U.S. Congress to introduce legislation for a system of income guarantees and supplements. The Basic Income approach envisions a social security system that periodically and unconditionally allocates to each citizen a sum of money sufficient to live on. In contrast, first initiated in 1975, the EITC is a refundable tax credit for low- and moderate-income individuals and families, especially those with qualifying children. 
In general, the EITC works by providing a tax refund to those whose credit exceeds the amount of taxes owed. The Basic Income approach never gained traction. The EITC, on the other hand, remains an integral part of the U.S. tax code. The EITC along with the minimum wage structure serves as the bedrock of federal strategy to alleviate poverty and provide a safety net for low-income earners. American families both benefit and suffer as a result of these safety
Fast food workers protest in Union Square, New York City, August 29, 2013. In December 2013, fast food workers around the country went on strike in support of increasing the minimum wage.

nets. It cannot be ignored that minimum wage standards help keep American workers from devolving to developing-world conditions, in which people live on a dollar per day. Still, the minimum wage and its increases have never kept pace with the high cost of living. Over the course of 74 years, the national minimum wage has increased only 22 times, and its purchasing power, which peaked in 1968, has never regained that level. Working-class American families at both the middle- and lower-income levels continue to find it difficult to sustain a standard of living that aids in preserving their quality of health, economic self-sufficiency, and general well-being. Alice K. Thomas Howard University See Also: Fair Labor Standards Act; Living Wage; Working-Class Families/Working Poor.

Further Readings Bureau of Labor Statistics. "Characteristics of Minimum Wage Workers: 2011" (March 2, 2012). http://www.bls.gov/cps/minwage2011.htm (Accessed August 2013). Hall, Doug. "Increasing the Minimum Wage Is Smart for Families." Washington, DC: Economic Policy Institute (2011). http://www.epi.org/publication/increasing_the_minimum_wage_is_smart_for_families_and_economy (Accessed August 2013). Internal Revenue Service. "Overview of EITC." http://www.eitc.irs.gov/central/press/overview (Accessed August 2013). Sherk, James. "Who Earns the Minimum Wage? Suburban Teenagers, Not Single Parents." Issue Brief. Washington, DC: The Heritage Foundation (February 28, 2013). Steensland, Brian. The Failed Welfare Revolution. Princeton, NJ: Princeton University Press, 2007.

Minuchin, Salvador Salvador Minuchin (1921– ) developed structural family therapy (SFT), one of the most influential family treatment models of the 20th century; authored more than 13 books; and was inspirational in moving the focus of psychotherapists from individual to family orientations. SFT is a pragmatic, strength-based, and outcome-focused treatment model that empowers family members to create change in their patterns of interaction. Concepts and techniques used in SFT have influenced other family therapy approaches and are commonly used by therapists worldwide to assist distressed families. SFT techniques evolved out of Minuchin's work with low-income families. Minuchin was born in a small Jewish community in rural Argentina. Prior to the Great Depression, Minuchin's father was a successful businessman. Unfortunately, the major economic downturn of the times resulted in Minuchin growing up in a very impoverished environment. This likely influenced his later career decision to develop a psychotherapy intervention model that was effective in assisting poor families. Minuchin enrolled in medical school at the age of 18 and later began a residency in pediatrics with a subspecialty in psychiatry. Soon after completing medical school, Minuchin moved to Israel and joined the Israeli army as a physician. In 1950, Minuchin moved to the United States to study psychiatry. He chose to train at the William Alanson White Institute in New York because that training program was heavily influenced by the interpersonal psychiatry work of Harry Stack Sullivan. After completing his training at the institute, Minuchin began working as a child psychiatrist at the Wiltwyck School for delinquent boys. During his time at Wiltwyck, Minuchin came to realize that traditional individual psychoanalytic treatments were not effective with the poor inner-city population he was treating. Minuchin initiated a movement among his colleagues to embrace family treatment strategies. 
Little did he know that this effort would lead to the emergence of SFT as one of the most powerful and prominent models of family treatment within the field of family therapy. Minuchin left the Wiltwyck School in 1965 to accept a position as director of the Philadelphia Child Guidance Clinic. Under Minuchin’s leadership, the Philadelphia clinic became one of the most

respected child guidance authorities in the world and remains so to this day. During his time in Philadelphia, Minuchin became more interested in the larger social context in which families are embedded and further refined SFT concepts. Minuchin resigned as director of the Philadelphia Child Guidance Clinic in 1975 but remained Director Emeritus until 1981, when he left Pennsylvania to establish Family Studies Inc. in New York City. Family Studies Inc. was established to teach and train family therapists in the SFT model. After Minuchin's retirement from Family Studies Inc. in 1996, the center was renamed the Minuchin Center. Structural Family Therapy SFT is rooted in three fundamental assumptions regarding human behaviors and interactions. First, it assumes that family members influence and are influenced by their social contexts through a series of recurring patterns of social interaction. Second, changes in family structure (the rules that govern family interactions) result in behavior changes for individual family members. Third, the therapist's interactions with the family become a part of the family context, resulting in an altered family structure. From an SFT perspective, family structure is defined by repetitive patterns of interactions between family members and subsystems within the family system. Subsystems are composed of individual family members based on hierarchical structure, gender, generation, or functional reality. Each subsystem within a family is governed by its own rules. Examples of subsystems within a family include but are not limited to a parental subsystem, spousal subsystem, and a sibling subsystem. Subsystems are regulated by boundaries. Boundaries are rules that define who participates, how one participates, and for how long one participates in a system or subsystem. Boundaries can be classified into three categories: rigid, clear, or diffuse. 
From a structural perspective, rigid boundaries may be problematic in that they often result in less accommodation and greater isolation or disengagement for a family member or subsystem. Like rigid boundaries, diffuse boundaries may also be problematic in that they promote too much accommodation, resulting in overinvolvement or enmeshment. Clear boundaries are ideal in that they are firm but flexible enough to promote both autonomy
and connection between family members and subsystems. SFT incorporates three continuous and interwoven processes (joining, formulation, and restructuring) to promote change in families. Joining involves the therapist establishing a therapeutic relationship between the family and the therapist. Formulation refers to the process of the therapist identifying maladaptive or ineffective interactional patterns within the family. Restructuring includes all of the interventions and actions taken to alter maladaptive interactional patterns and replace them with more adaptive and effective sequences of family interaction. Structural family therapists are active and directive throughout the therapeutic encounter. Once the therapist has established an appropriate therapeutic relationship with the family through joining techniques and identified the maladaptive interactional patterns through formulation, the therapist then employs techniques or interventions to restructure the family. Restructuring techniques include actualizing family transactional patterns (enactments), marking boundaries, escalating stress, assigning tasks, utilizing symptoms, manipulating mood, and providing support, education, and guidance. Conclusion As the creator of SFT, Salvador Minuchin has greatly influenced the field of family therapy. Minuchin developed a pragmatic, comprehensive, strength-based model of family intervention that is outcome focused and action oriented. Unlike most other therapeutic theories of the time, SFT was designed to meet the needs of poor families. The model offers clinicians an array of action-oriented interventions designed to establish an effective therapeutic relationship with the family, assess the functionality of family interaction patterns, and alter or restructure interactional patterns to facilitate more satisfactory family functioning. 
SFT is still practiced by psychotherapists around the world and many of its concepts and interventional strategies have since been incorporated into other psychotherapy models. W. Jeff Hinton Heath A. Grames University of Southern Mississippi
See Also: Family Counseling; Family Therapy; Working-Class Families/Working Poor. Further Readings Minuchin, Salvador. Families and Family Therapy. Cambridge, MA: Harvard University Press, 1974. Minuchin, Salvador and Charles Fishman. Family Therapy Techniques. Cambridge, MA: Harvard University Press, 1981. Minuchin, Salvador, Bernice Rosman, and Lester Baker. Psychosomatic Families. Cambridge, MA: Harvard University Press, 1978.

Miscegenation The ideology and practice of white supremacy in the United States dominated the national culture until the mid-1960s. Within that history, couched in the pseudoscientific language of eugenics, was the feared concept of miscegenation, literally "race mixing." The term originated in 1863 in a short political pamphlet, Miscegenation: The Theory of the Blending of the Races, Applied to the American White Man and Negro. The pamphlet was ostensibly a sympathetic argument for interracial solidarity but was actually a hoax instigated by enemies of President Abraham Lincoln to inflame racist passions against him; literature from the time pictured ugly caricatures of black men kissing beautiful white women, allegedly the result of Lincoln's policies. Racists feared that the "tainted" blood of what were then called "colored people" (i.e., anyone not white) was a threat to the racial purity of the "white race," which, unchecked, would lead to social denigration and a weakening of civilization. The theory was that white people had superior genes and were thus biologically responsible for creating and preserving "Western" culture, which was considered superior to all others. Many states (although not the U.S. Congress) responded to the perceived threat of racial intermixing by passing laws proscribing the practice. The fear of people of color passing as white and subverting the white gene pool through sexual and/or marital relationships reached its height in Virginia's Racial Integrity Act (1924), which forced people to register their race and assigned criminal penalties for misrepresenting

oneself as white. Antimiscegenation laws voided interracial marriages, criminalized interracial fornication, and delivered criminal sanctions to those who violated them or performed such marriages. The laws existed in a number of states until the Supreme Court in Loving v. Virginia (1967) declared them unconstitutional. Prior to the Civil War, it was common practice for white slave owners to take black female slaves as mistresses and/or to subject them to rape, knowing that the offspring of such intercourse would be their property. As a result, "mulattos," "quadroons," and "octoroons" (the offspring of such unions across generations) were common and unremarkable in the American South. With the abolition of slavery in 1865, however, white racists were desperate to reinvent the racial hierarchy and deemed such people a social threat, actively rallying against them through law and other coercive means, including imprisonment, lynching, and forced sterilization; it was in this climate that the term caught the white public's imagination. There are many court cases concerning antimiscegenation laws from this period, and they all frame the laws as enlightened public policy to protect the "purity" and "morals" of "white civilization." Courts described what they called the "amalgamation" of the races as something unnatural and deplorable, inaccurately referring to the offspring of such unions as sickly, unhealthy, and degenerate. Other courts invoked God's "wishes" for the races to be kept separate (Bob Jones University prohibited interracial dating by its students until 2000 using this rationale). The hysteria that accompanied these judicial opinions, as well as inflammatory statements by government and religious leaders at the time, was a major source of the stereotypes that reinforced discrimination and segregation of African Americans and, in the 1950s and 1960s, served as a rallying call by segregationists to resist the demands of the civil rights movement. 
The first successful challenge of antimiscegenation laws in the 20th century came in Perez v. Sharp (1948), and the death knell came in Loving v. Virginia (1967). In Perez, a white woman and an African American man were denied a marriage license under California's antimiscegenation statute. Recognizing the imprecise nature of racial categorization (today race is understood as a social construction), the California Supreme Court narrowly struck down the statute, deeming

antimiscegenation legislation unreasonable and unconstitutional; California thus became the first state in the 20th century to invalidate its antimiscegenation law. In its decision, the court denied a key assumption in eugenics, stating emphatically that there was no scientific proof that one race was superior to another in native ability. In Loving, an African American woman and a white man, residents of Virginia, traveled to the District of Columbia to get married, such unions being illegal in Virginia. When they returned to Virginia, their house was raided by police and they were arrested. They pleaded guilty and were sentenced to one year in prison, with the sentence suspended for 25 years on condition that the couple leave the state. The Supreme Court applied the logic of Perez and unanimously ruled, with appropriately harsh language, that miscegenation laws served no function except for reinforcing "white supremacy." In their most extreme forms, eugenics and antimiscegenation practices have been discredited, particularly as they became associated with Nazi Germany after World War II, and tend not to be invoked in mainstream society as they once were (American eugenics is credited with inspiring Hitler's eugenics laws). In fact, what used to be derided and condemned as "race mixing" is now a common, widespread, and respectable practice, with bi- and multiracial families quickly becoming normative (Barack Obama, the 44th U.S. president, is one prominent example). Such trends are exciting, as genetic diversity, in contradistinction to eugenical dogma, strengthens rather than weakens human populations. It also breaks stagnant social categories, rebuts antiquated assumptions about what constitutes a family, and unsettles reified tradition, making life more interesting. The rhetoric of miscegenation and eugenics, however, has not completely disappeared; in Louisiana, for example, in 2009 a local justice of the peace refused to marry an interracial couple on the grounds that it would harm the children of such a union. 
When it does appear in public culture, as it did in this case, it tends to elicit condemnation and controversy and its proponents held publicly responsible. Omar Swartz University of Colorado, Denver See Also: Genealogy and Family Trees; Interracial Marriage; Multiracial Families.

Further Readings Lemire, Elise. “Miscegenation”: Making Race in America. Philadelphia: University of Pennsylvania Press, 2009. Maillard, Kevin Noble, and Rose Cuison Villazor, eds. Loving v. Virginia in a Post-Racial World: Rethinking Race, Sex, and Marriage. New York: Cambridge University Press, 2012. Selden, Steven. Inheriting Shame: The Story of Eugenics and Racism in America. New York: Teachers College Press, 1999.

Mommy Wars Mommy wars is a term used to describe the troublesome division that separates mothers based on their different choices rather than aligning them based on their shared experiences. In particular, this divisiveness is targeted at middle-class mothers and their choice of either working in the paid labor force or staying at home with their children. Employed mothers are often characterized as selfish and concerned more about their careers than their children, while stay-at-home mothers are criticized for having wasted their potential by embracing traditional roles instead of seeking outside employment. Historical Background The mommy wars are a relatively new phenomenon given the shift in women’s labor force participation. Women have always worked; however, prior to World War II the majority of women workers were of lower or working-class status and usually minority status or women of color. With the onset of World War II, the number of women working in the paid labor force increased greatly among women from a variety of races, ethnicities, marital statuses, and economic backgrounds. The government actively recruited women workers and created the Rosie the Riveter character to inspire women to leave home and enter the workforce to benefit the war effort. Wars have historically led to an influx of women into the paid labor force, but after World War II attitudes changed and the majority of women who worked wanted to remain in the workplace even though postwar propaganda told them to go back home.

In 1963, nearly two decades after women had been urged to return home from the factories, Betty Friedan published The Feminine Mystique. Despite postwar prosperity and the birth of the baby boom generation, this book revealed "the problem that has no name." Women were unhappy with their status as housewives and mothers and were left feeling unfulfilled. Friedan's book, which is often credited with launching the second wave of the feminist movement, encouraged wives and mothers to seek other avenues, such as the workplace, for personal fulfillment. The second wave of feminism fought for a variety of women's rights that had both a direct and an indirect effect on women's opportunities in the workplace. The Civil Rights Act, the Equal Pay Act, the Pregnancy Discrimination Act, as well as a variety of laws pertaining to reproductive rights and the never-passed Equal Rights Amendment all worked to provide women with the opportunity to enter the paid labor force. The women's movement wanted to grant choice: not just reproductive choice but choices surrounding work and family as well. However, instead of creating an appreciation for the diversity of choices, the result was the manufacturing of a socially contrived war among mothers. Women's choices became a source of debate within various social and political arenas, media outlets, and among mothers themselves. Current Trends According to the latest data from the Pew Research Center, 65 percent of married mothers with children under the age of 18 are employed outside the home. Additionally, mothers are the primary breadwinner in 40 percent of family households. This is in stark contrast to 1960 data that revealed only 11 percent of families had mothers who were the primary breadwinner. As for stay-at-home mothers, each year the Web site salary.com calculates the marketplace value of mothers' unpaid work to illustrate how much they would earn if their labor were paid. 
Recently, the yearly salary estimate for stay-at-home mothers has ranged from $112,000 to $117,000.

Despite the important contributions both working and stay-at-home mothers make to their families and to society as a whole, the mommy wars devalue their choices and their accomplishments by pitting mothers against each other. The debate occurs within a larger cultural discussion about what is best for children. However, the mommy wars target primarily middle- and upper-class women who have the economic ability to choose between working and not working. The mommy wars are not focused on working-class mothers who must work to pay the bills. In fact, mothers from lower economic backgrounds who choose to stay home with their children and collect social services to help them do so are often vilified for not working. The mommy wars also do not question fathers. Fathers are not judged for choosing to work, and because of their token status, stay-at-home dads are often celebrated.

The mommy wars put mothers in a no-win situation. On the one hand, Pew data tell us that more than half of Americans believe mothers working outside the home negatively affects parenting and marital success. On the other hand, the majority of Americans also believe that mothers’ income helps families to live more comfortably. Attitudinal data tell us that half of Americans believe it is better for children if their mothers stay home, yet health studies indicate working mothers report better overall physical and mental health than nonemployed mothers. There are positive and negative consequences to each choice.

The mommy wars extend beyond the choice of working for pay or staying at home, positioning mothers against each other on a variety of topics, from birth plans to breastfeeding and diaper choices to potty training techniques. Once again, mothers are in a no-win situation. Whether the mommy wars continue is contingent upon society recognizing that there is no one correct choice. Similarly, it will require that mothers support one another’s decisions.

Michelle Napierski-Prancl
Russell Sage College

See Also: Baby Boom Generation; Breadwinner-Homemaker Families; Breastfeeding; Child-Rearing Practices; Feminism; Intensive Mothering; Parenting; Stay-at-Home Fathers.

Further Readings

Buehler, C. and M. O’Brien. “Mothers’ Part-Time Employment: Associations With Mother and Family Well-Being.” Journal of Family Psychology, v.25/6 (2011).
Hays, S. The Cultural Contradictions of Motherhood. New Haven, CT: Yale University Press, 1998.

Steiner, L. M., ed. Mommy Wars: Stay-at-Home and Career Moms Face Off on Their Choices, Their Lives, Their Families. New York: Random House, 2007.
Wang, W., K. Parker, and P. Taylor. Breadwinner Moms: Mothers Are the Sole or Primary Provider in Four-in-Ten Households With Children; Public Conflicted About Growing Trend. Washington, DC: Pew Research Center (May 29, 2013).

Montessori

Maria Montessori (1870–1952) was a maverick. She studied mathematics and engineering and then went to medical school at a time when women in Italy were not welcome in any of these fields. Her best-known achievement was in education, where, after years of teaching and observing children, she developed and refined a unique, revolutionary, and controversial educational program. Her educational philosophy was one of discovery, as opposed to direct instruction. She believed that students learn best when experiencing and doing, and that when children are free to decide where to put their focus within a carefully prepared environment, they will innately choose work that maximizes their development. Today, more than 100 years after Montessori opened her first school, more than 7,000 Montessori schools are estimated to exist around the world.

Beginnings

Montessori began developing her unique approach to education when she worked with children with significant learning disabilities. She offered these students encouragement and freedom in their work and noticed that they responded with much interest in their own learning and with self-discipline, and that they seemed to pass through stages of learning. When these students’ test scores compared favorably to those of students without known learning disabilities, Montessori wondered why students without learning disabilities did not do better. When she had the opportunity to test her educational philosophy and techniques with such children, she found that they, too, responded positively when she created circumstances that allowed the emergence of their “teacher within.”



Montessori continued to refine her theorizing and learning materials and came to believe that traditional, teacher-focused educational methods are flawed and suffer from a lack of understanding of human development. Montessori proposed that education occurs naturally and spontaneously in human beings as they explore and experience their environment. Therefore, in stark contrast to listening to the words of others, carefully constructing an environment and including a range of specific kinds of learning materials appropriate for children’s developmental stage are key to optimizing learning. Consequently, Montessori systematically tested and refined the environmental conditions and learning materials that were most productive for children in each stage of development.

In general, the environment should be natural and freeing for children. This means an environment that includes only materials appropriate for children’s age and growth, that excludes anything that is an obstacle to their growth, and that includes means by which children can progress as their abilities expand. Ideally, the environment also is clean, orderly, simple, beautiful, and harmonious. In addition, structures, such as shelves and chairs, should be in proportion to the ages of the children using them, and the arrangement of structures in the environment should be conducive to activity and allow for easy movement around the space. Other, more specific features of the environment and learning materials vary depending upon children’s developmental stage.

Periods of Human Development

Montessori delineated four periods of human development. Each has specific learning styles and developmental needs and outcomes associated with it. The first period (or “plane”) begins at birth and continues until about age 6. During this time, children explore sensory stimuli and begin to construct their sense of self. Montessori described children at this stage as having an “absorbent mind” because they are able to assimilate so much of their environment, including tastes and smells, the language they hear, and culture-specific materials and concepts. This absorbent ability begins to fade around age 6, when children move into the second developmental period distinguished by Montessori.

In the second plane, which lasts until about age 12, children lose baby teeth, experience substantial leg and torso growth, tend to prefer group activities, and find value in reason and in using their imagination. Accordingly, Montessori saw children’s work at this stage as the development of intellectual independence, a moral sensibility, and an understanding of social organization. The third plane, which lasts until about age 18, includes puberty, psychological instabilities, creative urges, and an assessment of one’s self-worth using external sources. In this period, the child’s work is to create his or her adult self. The fourth plane continues until about age 24 and was not as well developed by Montessori as the other stages, yet she contended that the work during this period is to continue to be a student of one’s culture and to contribute to civilization.

Understanding each of these planes is important, according to Montessori, because each requires specific sets of educational materials and approaches. Moreover, Montessori imagined that when children have the opportunity to develop according to their inner developmental needs and laws, civilization will be more peaceful and enduring as a result.

Criticism and Interest

With much research confirming her findings, Montessori launched a campaign to spread the news. This included publishing books, giving lectures, and creating teacher-education courses about her work. Although her educational approach was well received and honored by many, in every country she visited she faced strong criticism of her new educational model. Much of that opposition came from teachers who were invested in traditional teaching techniques or who did not understand her work or appreciate her view of the role of the teacher. Montessori’s approach is child centered, not adult centered. Teachers are to be humble, preparing an environment where children can do their work and that encourages the emergence of their “teacher within.”

In the United States, Montessori’s approach was initially embraced in the early 1900s.
Interest waned quickly, though, and her work was virtually absent from American education until the 1960s. With Montessori’s death in 1952 and new publications about her life and work appearing shortly afterward, most notably one by her close friend E. Mortimer Standing, Montessori’s work was revived in the United States. Various groups and organizations emerged intent on challenging traditional teaching models and promoting Montessori’s approach to educating children.

Today, thousands of Montessori schools exist in the United States. Many parents, though, do not know much about Montessori or are wary of Montessori’s child-centered approach and emphasis on liberty and free choice for students. As a result, Montessori is still generally perceived as an alternative educational option in the United States, and Montessori’s theories about child development and learning are not commonly understood.

Mel Moore
University of Northern Colorado

See Also: Adolescence; Childhood in America; Child-Rearing Experts; Education, Elementary; Education, High School; Education, Middle School; Education, Preschool; Emerging Adulthood; Parenting Styles.

Further Readings

Lillard, Paula P. Montessori: A Modern Approach. New York: Schocken, 1972.
Montessori, Maria. The Absorbent Mind. New York: Ballantine, 1972.
Montessori, Maria. Secret of Childhood. New York: First Owl, 1995.
Standing, E. Mortimer. Maria Montessori: Her Life and Work. New York: Penguin, 1957.

Mother’s Day

The inspiration for Mother’s Day dates back to the 1850s, when Anna Reeves Jarvis organized Mothers’ Work Days to encourage women’s participation in certain causes, including helping the poor, improving sanitation, and lowering infant mortality. During the Civil War, her group helped care for wounded soldiers. In the early 1870s, poet and philanthropist Julia Ward Howe proposed an annual Mothers’ Day for Peace in Boston. Howe’s Mothers’ Day was celebrated widely in Massachusetts and other eastern states every June 2 until the turn of the century. In both cases, the intent was to focus on all women and their contribution to social and political action, not to celebrate one’s own mother (thus Mothers’ Day instead of Mother’s Day).

Founding Ideas

The idea of celebrating mothers individually was put forth by Anna Reeves Jarvis’s daughter, also named Anna Jarvis. In 1891, the younger Jarvis left her mother behind in Grafton, West Virginia, and moved to Chattanooga, Tennessee, and later to Philadelphia, Pennsylvania. Moving away from her mother was integral to her movement to honor mothers, as separation and longing for home ties and rootedness resonated with other sons and daughters who had left their parents’ homes in that period. After her mother died in 1905, Jarvis, who never married or had children of her own, wrote many letters to friends and family remembering her mother’s love and faith, as well as the suffering and sacrifices she endured as a mother. In May 1907, Jarvis arranged a special service in Grafton at Andrews Methodist Episcopal Church, where her mother had served many years.

Soon Jarvis began to imagine a bigger celebration to honor all mothers in America. Arguing that existing holidays were biased toward men, she began a letter-writing campaign to newspaper editors, church leaders, and politicians across the country urging them to establish a day to celebrate mothers. Jarvis envisioned a somber, holy celebration of mothers’ contributions to home, family, church, and community. To ensure it would be a holy day, she selected the second Sunday of May—the Sunday closest to the anniversary of her mother’s death.

The first official Mother’s Day observation was held in May 1908 in a few towns and cities across the country. Jarvis organized the Mother’s Day International Association to help promote the holiday, and within a couple of years many mayors and governors had issued proclamations for Mother’s Day. In 1914, President Woodrow Wilson issued a presidential proclamation declaring the second Sunday in May to be Mother’s Day. Thus began the national observance of Mother’s Day in the United States.
Commercial Interest

Although she intended it to be a sacred observation, the sober and religious tone of Mother’s Day soon had competition from commercial interests who saw the holiday as a moneymaking opportunity. Commercial florists took the lead in capitalizing on the holiday.

American war mothers pay tribute at the Tomb of the Unknown Soldier on Mother’s Day in Washington, D.C., May 12, 1929. Although Anna Jarvis originally intended Mother’s Day to be a sacred observation, it soon became commercialized, with florists taking the lead in capitalizing on the holiday.

On the first Mother’s Day, Jarvis urged women to wear a simple white carnation, her mother’s favorite flower. This idea, however, led to heightened demand that resulted in price spikes and annual shortages of white carnations. To cope with the shortage and boost business, the floral industry suggested wearing a red or other bright-colored carnation if one’s mother was living and a white one if she was deceased. Further, the industry recommended that homes, churches, and cemeteries be decorated with flowers, and Mother’s Day bouquets were widespread by 1912.

Given the holiday’s holy and sentimental importance, florists strove to conceal their commercial efforts. In the mid-1910s, newspapers ran florists’ advertisements alongside articles highlighting the history and purpose of Mother’s Day. They credited Jarvis and described her as a woman who was inspired by love and grief for her own mother. Stories told of childhood, home, separation, love, and memories.

The National Association of Greeting Cards Manufacturers, organized in 1914, also contributed to commercializing Mother’s Day. One of its products was an etiquette book, Greeting Cards: When and How to Use Them, which in 1926 proclaimed, “Every mother should receive a card with just the right sentiment.” By the 1920s, Mother’s Day was one of the most prominent U.S. holidays—and one of the most profitable. Department stores, candy stores, jewelers, and others began promoting products to celebrate Mother’s Day. Without their investment, Mother’s Day might have remained a minor holiday and perhaps would have eventually faded away.

Initially, Anna Jarvis welcomed the attention commercialism brought to Mother’s Day; however, she became increasingly upset that commercial interest was primarily profit driven. In 1920, Jarvis denounced the floral industry and urged people to stop buying flowers, as well as cards and other gifts. In response, the industry denounced her and stopped recognizing her contribution to making Mother’s Day a national holiday. Ironically, even as Jarvis fought against these industries, other commercial advances were changing the domestic roles Jarvis wanted celebrated.

During much of the rest of her life, Jarvis continued to fight to regain control of Mother’s Day. In the 1940s, she even fought to have the holiday removed from the calendar. Over time, judgments and legal costs added up and her fortune disappeared. Jarvis spent her final days in a sanitarium, where she died in 1948. She was never told that part of her expenses there was paid by the Florists’ Exchange.

After her death, Mother’s Day celebrations continued to shift in emphasis away from women’s contributions in the home. During the 1960s, it served as a day to encourage social action for women, justice, and equality. Recalling Mothers’ Work Days, various groups called for addressing social concerns, such as the needs of poor women and children and improved access to child care. Mother’s Day has largely evolved into a holiday on which ritual observances and church attendance are secondary.
Far more Americans engage in commercially inspired rituals of giving cards, flowers, and gifts and going out to eat than attend religious services. It is also the peak day for long-distance telephone calls. Mother’s Day has become a $20 billion industry. In addition to the traditional gifts, mothers are increasingly likely to be given expensive jewelry and high-tech gadgets.

Kelly A. Warzinik
University of Missouri


See Also: Father’s Day; Focus on the Family; Marketing to and Data Collection on Families/Children.

Further Readings

Coleman, M., L. H. Ganong, and K. Warzinik. Family Life in 20th-Century America. Westport, CT: Greenwood Press, 2007.
Coontz, S. The Way We Never Were. New York: Basic Books, 1992.
National Retail Federation. “Consumers Look to Pamper Mom With iPads, Jewelry This Mother’s Day.” http://www.nrf.com/modules.php?name=News&op=viewlive&sp_id=1567 (Accessed 2013).
Schmidt, L. E. Consumer Rites: The Buying and Selling of American Holidays. Princeton, NJ: Princeton University Press, 1995.
Schmidt, L. E. “The Commercialization of the Calendar: American Holidays and the Culture of Consumption, 1870–1930.” Journal of American History, v.78 (1991).

Mothers in the Workforce

Over the last 30 years, an unprecedented number of women with children have joined the paid labor force in the United States. According to the U.S. Department of Labor, in 2012, 71.3 percent of women with children under the age of 18 were working outside the home. Moreover, various studies indicate that this is a growing trend in other parts of the world as well, including in so-called traditional societies where, until recently, women were culturally expected to stay at home with young children.

Demographics

In 1975, approximately two out of every five mothers with preschool children worked outside the home. By 2010, these figures had changed dramatically: 64.2 percent of mothers with younger children were in the labor force. Once children reach school age, the numbers spike even further: today, 77.2 percent of mothers with school-age children (i.e., 6 to 17 years of age) work outside the home. Moreover, marital status matters. In 2010, 74.9 percent of unmarried women with children were working for pay, in comparison to 69.7 percent of married women with children.

Unfortunately, labor force participation statistics do not separate full-time work from part-time work. In 2011, approximately 30 percent of women with young children (under the age of 6) worked less than 35 hours per week, in comparison to 18 percent of men. Conversely, about two out of every three women with young children worked more than 35 hours per week, according to the Population Reference Bureau.

Factors Contributing to the Rise of Mothers in the Workforce

A major change over the last 50 years is the speed with which women return to the paid labor force after the birth of a child. For instance, among women who were born in 1946, approximately 50 percent returned to the workforce by the time their first child reached the age of 6. For women born in 1970, 50 percent rejoined the workforce after one year.

A number of factors have contributed to the dramatic increase of working mothers. The recession of the early 1970s was instrumental, creating the need for dual-earner households. At that time, economic factors came together with social ones, such as the feminist movement, which advocated that women join the paid labor force and earn incomes equivalent to those of men. In more recent years, the steady decline in men’s earnings, the increases in divorce and in the acceptance of births outside marriage that have resulted in more single-mother families, and the rise in women’s educational levels have all contributed to the increase in the number of mothers with young and school-age children in the labor force. These trends stand in contrast to an intransigent societal model of work and family that is predicated on the ideal of an employee who works full-time without substantial family responsibilities and who has a partner who takes care of dependent family members.

As mothers with young children have joined the paid labor force, they have substantially decreased the amount of time they do housework.
Time diary data comparing women’s activities between 1965 and 2000 indicate that women have steadily decreased the amount of time they spend on domestic activities such as cooking and cleaning. They are either readjusting their perceptions of what constitutes acceptable housework or buying the services of other women to cook, clean, and take care of their children. Remarkably, U.S. mothers, including women who are in the paid labor force, are spending the same amount of time or even more time with their children as they did 40 years ago. S. Bianchi, J. Robinson, and M. Milkie found by studying time diaries that working mothers prioritize their relationship with their children over other activities such as housework and time for themselves.

Child Care

The dramatic increase of women in the paid labor force has not been accompanied by any substantial governmental efforts in the United States to provide quality child care for working families. Thus, while three out of four mothers work more than 30 hours per week, most of those families have to rely on a patchwork system of child care. Given that children younger than 5 spend about 35 hours per week in some form of child care, this constitutes an enormous problem for most working families. It is critical to note that in 29 percent of dual-earner families, women are the primary heads of household; in other words, they earn more than their husbands or the total income for the family. This is an important point, as there is a general cultural misperception that most women work for “selfish” reasons and not to provide financial stability for their families. That said, child care constitutes a major financial burden for working families. In families with very young children (under the age of 5), employed mothers who earn less than $18,000 per year spend 95 percent of the household income on child care.

Work, Marriage, and Motherhood

Current scholarship indicates that the majority of younger Americans believe in egalitarian relationships between men and women and that many marriages today begin with an equal sharing of household and financial tasks.
These same studies also indicate that women, especially after the birth of the first child, tend to perform most of the housework and caregiving in their families, despite working outside the home in record numbers. Men, on the other hand, continue to define their primary role as economic providers for their families. More recently this division of labor has been referred to as a “neotraditional” arrangement wherein men perform most but not all paid work, and women conduct most but not all unpaid work.


The discrepancy between stated beliefs and actual practice raises many questions about how conceptualizations of gender roles intersect with work and family issues in American society. In particular, motherhood has become an increasingly contested and problematic arena for women who make that reproductive choice. Stay-at-home mothers are celebrated by the media and in popular discourse as women who are “doing the best they can for their children,” while working mothers are often depicted as “selfish” or impervious to the needs of their families. This pervasive discourse obscures both the historical and the class-based reality that for most mothers, working outside the home has often been the only financially viable option. Historically, in the United States, women of color and poor women have always been part of the labor force, even while bearing children. In the contemporary United States, only a small minority of couples can make the choice to have one partner stay at home with the children while the other spouse is employed in the paid labor force. For most mothers there is only one “choice”—to work out, often in isolation, an exceedingly complicated balancing act between meeting monetary demands and family responsibilities.

Transnational Motherhood

The trend of mothers in the workforce that is seen in the United States is increasingly spreading to other parts of the world. Both in other industrialized countries and in the developing world, an increased feminization of the workforce is a striking phenomenon. A particularly important development is what is often referred to as “transnational motherhood.” This term refers to women with children who leave their families behind to work for pay in other countries. They subsequently send remittances back home to support their families, most frequently specifically their children. Transnational motherhood is a trend fraught with complex problems.
The women who engage in paid work abroad are frequently vilified by a hegemonic discourse in both their host and home countries that discounts their paid workforce activities and attacks their mothering skills as “less than adequate.” This judgment is based on a superficial quantitative assessment of the amount of time and effort they are investing in their children. Transnational mothers are often described as women who are inadequate mothers, as women who are not meeting their responsibilities to their families, and, ultimately, as women who are contributing to the ills of society through their lack of investment in the next generation.

Despite such pronouncements, transnational motherhood is on the rise due to the lack of economic opportunities in many developing countries. Moreover, governments in poorer countries increasingly rely on these remittances as a share of their gross domestic products, and these contributions may even equal or exceed the amount of money brought in through exports. For instance, in the Philippines approximately 34 to 53 percent of the population depend in their daily lives on remittances sent back home from female migrants. Simultaneously, from a family perspective, about 27 percent of children in the Philippines now grow up with their mothers abroad at some time during their childhood.

“Doing Motherhood”

Women with young children in the workforce and transnational motherhood raise many questions about gender roles in families, breadwinning or economic provisioning, and the kinds of formal and informal supports that working families need. Moreover, they highlight the fact that the current dominant intensive mothering dialogue does not take into account the lack of choices that most mothers in the United States and abroad have with respect to how they spend their time or talents. This discourse also does not recognize that women with children in the contemporary global world are constantly “doing motherhood”; that is, they are creating a wide variety of versions of “motherhood” to serve the best interests of their children. These women are consistently searching for the means to provide resources for their children, be it basic provisions such as food and shelter, time, interactive activities, and so on. “Doing motherhood” has become synonymous with negotiation and innovation. The normative guidelines that regulated the gendered behaviors of previous generations of mothers are constantly being transformed through the agency of contemporary mothers.

Policy Concerns

As the global world shifts to new conceptions of women’s roles that increasingly include mothers in the paid labor force, some have called for policies that assist families in managing their familial and economic responsibilities. The institutionalization of quality child care and responsive workplace policies are often at the forefront of these proposed changes. Much research indicates that workplace flexibility, time sharing, and working from home help working mothers balance their various obligations. However, aside from northern European countries such as Sweden and Norway, as well as France, the rest of the world has been slow to adjust its policies to take family change into account. Instead, most workplaces continue to be structured around the traditional norm of a couple with one member of the family working and one member at home. States have been reluctant to embrace the idea of instituting national quality child care programs, and they often do not compel employers to help families balance work-family issues. In the long run, this does not bode well for economic productivity, nor for the mental health of individuals and societies.

Some have argued that constructions of gender inevitably encode conflicting and ambivalent meanings that can never be fully reconciled. Contemporary global motherhood embodies this notion. Motherhood and what constitutes a “good” mother vary over time, place, and social class. Strikingly, however, motherhood today commonly includes a breadwinner or provider aspect. “Doing motherhood” is the new norm—one that is fluid, often based on a lack of choices, and responsive to context.

Bahira Sherif Trask
Nikki DiGregorio
University of Delaware

See Also: Focus on the Family; Gender Roles; Mommy Wars; Myth of Motherhood.

Further Readings

Bianchi, S., J. Robinson, and M. Milkie. Changing Rhythms of American Family Life. New York: Russell Sage Foundation, 2007.
Ehrenreich, B. and A. Hochschild, eds. Global Woman: Nannies, Maids, and Sex Workers in the New Economy. New York: Metropolitan Books, 2003.
Ferree, M. “The Gender Division of Labor in Two-Earner Marriages.” Journal of Family Issues, v.12 (1993).

Flax, J. Thinking Fragments: Psychoanalysis, Feminism, and Postmodernism in the Contemporary West. Berkeley: University of California Press, 1990.
Gornick, J. C. and A. Heron. “The Regulation of Working Time as Work-Family Reconciliation Policy: Comparing Europe, Japan, and the United States.” Journal of Comparative Policy Analysis, v.8 (2006).
Hill Collins, P. “Theorizing About Motherhood.” In Mothering: Ideology, Experience, and Agency, E. N. Glenn, et al., eds. New York: Routledge, 1994.
Hochschild, A. R. The Time Bind: When Work Becomes Home and Home Becomes Work. New York: Metropolitan Books, 1997.
Moen, P. and P. Roehling. The Career Mystique: Cracks in the American Dream. Lanham, MD: Rowman & Littlefield, 2005.
Orloff, A. S. “Women’s Employment and Welfare Regimes: Globalization, Export Orientation and Social Policy in Europe and North America.” Social Policy and Development Programme Paper Number 12. United Nations Research Institute for Social Development. New York: United Nations, 2002.
Parrenas, R. S. “Transnational Mothering: A Source of Gender Conflicts in the Family.” North Carolina Law Review, v.88 (2010).
Population Reference Bureau. “More Mothers of Young Children in the Labor Force.” http://www.prb.org/Articles/2012/us-working-mothers-with-children.aspx (Accessed July 7, 2013).
U.S. Department of Labor, Bureau of Labor Statistics. “Women in the Labor Force: A Databook.” http://www.bls.gov/cps/wlf-databook-2012.pdf (Accessed July 7, 2013).

Moynihan Report Since its release in 1965 by the U.S. Department of Labor under the Lyndon Johnson administration, The Negro Family: A Case for National Action by Daniel Patrick Moynihan (1927–2003) has simultaneously polarized the American public and synthesized ideas across conventional political boundaries. Although an early copy of the report was vetted by Martin Luther King, Jr., it is remembered as paternalist at best and quasi-eugenicist at worst. With defenders as disparate as liberal scholar Kenneth Clark to conservative luminary Robert Novak,


Moynihan and his work were products of Cold War liberalism and inchoate neoconservatism, a remnant of a moment when political allegiances across social and economic axes were scarcely predictable. Moynihan served in the administrations of John F. Kennedy, Johnson, Richard Nixon, and Gerald Ford and represented New York in the U.S. Senate from 1977 to 2001, but he grew up in a home where his single mother's financial and romantic fortunes were consistently unstable and persistently linked. Born into the last generation of Irish Americans who experienced the pains of presumed alienation from the white Protestant mainstream, he often framed the origins of his analysis of the "Negro family" in authenticating scenes of his own unstable childhood. Though his narrative of his fatherless boyhood located the source of the trauma in alcoholism, his treatment of black families positioned itself in the long history of the African diaspora, arguing that the black family structure was born in the space of the plantation. New World slavery, he notes, was the most oppressive form of the institution, because it simultaneously ejected African Americans from collective life and, in terms that Orlando Patterson later refined, rendered their "social deaths" permanent and genealogical. Recapitulating the thesis of Stanley Elkins's Slavery, which posited plantations as structures like concentration camps in which no culture could be constructed or sustained, Moynihan jettisoned emerging midcentury notions of slave culture as vital and agential. The aftermath of Reconstruction, he notes, diminished the capacity of African American families to recover from the trauma of slavery. Generations out of slavery, he argues, families continued to presume maternal authority and paternal absence. In the year that Moynihan wrote The Negro Family, one-fourth of all African American families were headed by a woman.
This fact is framed not as the sole product of slavery but as one exacerbated by southern resistance to Reconstruction, which presumed the criminality of black men, created the foundations of mass incarceration and public violence as a punishment, and thereby exiled black men from their families. Here, Moynihan makes his first profoundly controversial claim, suggesting that Jim Crow proved more “humiliating” to men of color because it hindered their access to public accommodations that were not available to women


of any race. Assuming that the Victorian notion of separate spheres governed African American life, Moynihan failed to consider the possibility that even normative domestic space was permeable to racist violence. Because African Americans lived in communion and kinship with one another, the effects of violence were not felt by atomized individuals but by entire communities. Migration by rural African Americans from the South to the North did not, to Moynihan's mind, ameliorate the conditions of racial apartheid. Between the confluence of the Great Migration and the Great Depression and midcentury, "Negro unemployment . . . [has] continued at disaster levels." The instability and proletarianization of African American labor provided barriers to family coherence in the urban milieu. Neither migration to the North nor the growing presence of African American women in institutional life provided Moynihan with evidence of healthy families. The report measures black women's autonomy as evidence of social pathology. Noting that African American women outnumbered male counterparts by 4:1 among employees at Johnson's Labor Department—a bureaucracy that had made significant efforts to recruit people of color—and that the National Achievement Scholarship Program's funding for African Americans was predominantly awarded to females, Moynihan feared an insurmountable achievement gap between men and women that rendered marriage less desirable and "egalitarian" than mainstream midcentury matrimony. Few institutional opportunities existed to lift black men to the middle class. Though Moynihan praised the virilizing effects of military service, he bemoaned that 56 percent of black men failed the Armed Forces Qualification Test. That the selection criteria of these and other quantitative measures might have been biased toward nonblack applicants is not a possibility that he considered.
In Hindsight
With a half-century's hindsight, contemporary readers might locate the controversy over the Moynihan Report in two inflammatory words: matriarchy and pathology. Even if the conditions that Moynihan located were incontrovertibly true, the report does not demonstrate that they constitute matriarchal authority for black women, who have endured in vexed intersections of sexism, racism, and classism.

Exploited labor, disproportionate responsibility for child rearing, and strained romantic relationships sound more like the despairing sense of black women as "de mule uh de world," offered in Zora Neale Hurston's Their Eyes Were Watching God, than matriarchy. The report, while it dismisses notions of genetic deficiency and inherent inferiority among African Americans, fails to question the heteronormative, Eurocentric standards of the nuclear family. Non-nuclear kinship models are framed as "disorganized," regardless of the potential of extended family to make childhood both loving and livable. Even without the inflammatory rhetoric, Moynihan's argument would nonetheless contain significant gaps, especially surrounding his treatment of social class. By his own admission, the conditions of poor and middle-class African Americans were radically different. While he identified a "stable middle-class group that [was] steadily growing stronger and more successful," he relegated their experience to two brief paragraphs in a 50-page report. What Moynihan called the Negro family might more accurately be described as the poor children of urban migrants or the African American proletariat. With these gaps in mind, it is nonetheless necessary to remember that Moynihan saw himself as writing toward justice. He praises the labors of the civil rights movement as providing the belated fulfillment of the American Revolution, which he acknowledges as hindered by the presence of chattel slavery in the republic. The report is bookended by praise of the civil rights movement's "propriety," as well as its role as a "moderate, humane, and constructive force" in midcentury America. The diminutives directed at African American women nonetheless make it clear that this praise measures people of color against white norms.
Contemporary readers might nonetheless praise him for admitting that legal equality—the absence of apartheid constraints on black subjects—was not the teleological aim of civil rights. In terms refined by contemporary critical race theory, he theorizes both liberty and equality, advocating African American access to both legal and social citizenships. Measuring the difference between Moynihan’s intentions and effects reveals the erosion of midcentury liberalism. The decades after the Moynihan Report transformed the “black matriarch” into the “welfare queen” of Ronald Reagan’s political

speechifying, using Moynihan's own emphasis on "individual-level behavioral approaches" to social problems as the key weapon to fight poverty, according to political scientist Ange-Marie Hancock. Stories of abusive welfare mothers in Cadillacs and diamonds turned conservative opposition to redistributive economics into a distinctly gendered phenomenon. Long after Aid to Families with Dependent Children had been gutted by Bill Clinton's welfare reform, which transformed it into Temporary Assistance for Needy Families, a program administered by the states rather than the national government, urban legends about luxurious federal spending continued. Responding to these anecdotes, Anne Henderson of the Milwaukee County Welfare Rights Organization offered an impassioned defense of black women:

If you think I'm gonna have a baby—and watch that child grow up with no food or clothing; and then watch him go to school where teachers don't teach anything; and worry that he's gonna become a pimp or start shooting up dope; and finally, when he's raised, see him go into the Army and get really shot up in there—if you think I'm gonna go through all that pain and suffering for an extra $50 or $100 or even $500 a month, why you must be crazy.

While resistance to antipoverty spending and myths of "welfare queens" might have their roots in Moynihan's accounts of African American families without men, these stories are not his creation. The subtitle of the report, "the case for national action," shorthands Moynihan's demand for more robust federal spending for antipoverty programs. In the 1970s, Moynihan served in both the Richard Nixon and Gerald Ford administrations, flirted with neoconservatism, and coined the term benign neglect to describe the ideal relationship of the state to antiracist efforts. The trajectory of his life, though, reveals this period as something of an aberration.
Before he vacated his Senate seat, Daniel Patrick Moynihan was one of the few voices in the Democratic establishment that resisted Clinton's efforts to "end . . . welfare as we know it." Moynihan at once declaimed the gutting of programs and blamed liberals for the incapacity to reform them:

If you think things can't be worse . . . just you wait until there are . . . children on grates, because

there is no money in the states and cities to care for them. It is a social risk that no sane person would take, and I mean that . . . [But] for years, whenever the critics said, correctly, that the welfare system was doing more harm than good, and suggested that it be rethought, its defenders screamed "racism" and "slavefare." They did that until there was no public support left at all. Now they are stunned at what they're getting.

While Moynihan failed to save the apparatus of the midcentury war on poverty, the most robust legacy of his writing and agitation appears in the black feminist movement. Repudiating his nomenclature and defending black motherhood has produced a revolutionary literature, from Audre Lorde to Kevin Powell, from Alice Walker to Toni Morrison. Within Hortense Spillers's powerful defense of "mother right" and the capacity of African Americans to offer "the power of 'yes'" to women's power and autonomy, the Moynihan Report resides, if only as the moment that provided an occasion for resistance.

Jennie Lightweis-Goff
Tulane University

See Also: African American Families; Parenting; Slave Families.

Further Readings
Apple, R. W. "His Battle Now Lost, Moynihan Still Cries Out." http://www.nytimes.com/books/98/10/04/specials/moynihan-lost.html (Accessed June 2013).
Hancock, Ange-Marie. The Politics of Disgust: The Public Identity of the Welfare Queen. New York: New York University Press, 2004.
Hurston, Zora Neale. Their Eyes Were Watching God. New York: Harper, 2006.
Moynihan, Daniel Patrick. "The Negro Family: The Case for National Action." In The Moynihan Report and the Politics of Controversy, Lee Rainwater and William L. Yancey, eds. Cambridge, MA: MIT Press, 1967.
Patterson, James T. Freedom Is Not Enough: The Moynihan Report and America's Struggle for Black Family Life From LBJ to Obama. New York: Perseus, 2010.
Patterson, Orlando. Slavery and Social Death: A Comparative Study. Cambridge, MA: Harvard University Press, 1985.


Spillers, Hortense. "Mama's Baby, Papa's Maybe: An American Grammar Book." Diacritics, v.17/2 (Summer 1987).
Vobedja, Barbara. "Clinton Signs Welfare Bill Amid Division." http://www.washingtonpost.com/wp-srv/politics/special/welfare/stories/wf082396.htm (Accessed June 2013).

Multigenerational Households
According to the U.S. Census Bureau, multigenerational families (sometimes referred to as multigenerational households) are defined as those having three or more generations living together in the same household. In some cases, in which great-grandparents are also living with the family, it is a four-generation household. Based on current population estimates, approximately 49 million Americans are living in 4.2 million multigenerational households nationwide. Some definitions of multigenerational households, however, are broader. If multigenerational families are defined as households that contain two generations (i.e., householder plus parent or parent-in-law or householder plus child or child-in-law), then they are said to account for 11.9 million U.S. homes.

Multigenerational living can be temporary or permanent, depending on the reason for its formation. Multigenerational households allow families to act as cooperative units and may provide families with needed opportunities to face obstacles (such as financial hardship) as a group. These households may also provide families with the opportunity to facilitate communal caregiving, as it allows family members to be available to provide needed care to children or elders who reside in their same household.

Who Lives in Multigenerational Households?
The two most common types of multigenerational households include (1) grandparent as householder, in addition to middle generation parent and grandchild and (2) middle generation parent as householder, in addition to one or more of the householder's parents or parents-in-law and the householder's

children. The first household type, with the grandparent as the householder, is the most common form of multigenerational household in the United States. This living arrangement may be a result of custodial child rearing, that is, the grandparents are assisting parents in providing full- or part-time care for the children. This arrangement can be especially draining on the grandparent’s physical and emotional health, particularly when caregiving obligations were unexpected. Grandparents and middle generation parents may also experience tension as a result of this living arrangement if caregiving responsibilities are not adequately delineated between family members or if adult members disagree about appropriate disciplinary strategies for children. The second household type, with the middle generation parent as the householder, is generally the result of the middle generation parent assuming caregiving responsibilities for his or her parent or parent-in-law. In the event that the older generation can no longer live independently, either as a result of financial hardship or health/safety necessity, the middle generation parent may take on the responsibility of providing care to the parent figure. In most cases, the middle generation parent is also providing continued care to his or her own children and, thus, may feel especially strained. This arrangement often results in economic hardship for the family, as middle generation parents are forced to provide financial assistance to aging parents and growing children, a task that requires sufficient funds and time away from work. Time spent away from work magnifies financial hardship, as individuals who are not actively involved in the workforce may be less able to earn income necessary to support the household. Cultural differences affect the formation and duration of multigenerational family households. 
Some cultures, namely those that are described as collectivistic, value communal living with other family members more highly than others and may consider multigenerational living the ideal, especially when considering demands related to caregiving and intergenerational responsibilities. Multigenerational families in the United States are most common in minority family groups, namely Asian and Latino families, followed closely by African Americans. Whites are least likely to live in multigenerational households.



History of Multigenerational Families
The United States has a rich history of multigenerational living. In fact, it was often the norm for multiple generations of farm families to live together in earlier centuries, as farms required as many family members as possible to bolster production and remain profitable. In 1850, an overwhelming majority of elders (approximately 80 percent of individuals age 65 and above) lived with children or other relatives, making it the most popular living arrangement in the United States. Whites, widowers, widows, and married couples all lived with children in the 19th century, with elderly blacks being the least likely to reside with children. The rise of industrialization during this period also prompted increases in immigration and the subsequent rise of multigenerational households. As a result of industrial growth, Americans migrated to cities in overwhelming numbers in hope of finding employment; thus, families lived together in an attempt to pool their financial resources and improve their economic standing.

After 1850, however, each subsequent decade saw a decline in the number of multigenerational families. In 1900, for example, only 57 percent of adults ages 65 and older were reported to be living in multigenerational households. By 1940, only about 25 percent of families were reported to be living in multigenerational households, and after World War II multigenerational living fell further out of American favor. Contributors to the decline in multigenerational households included the substantial growth of suburbs (tailored largely to nuclear families), declines in the number of immigrants in the population, and improvements in the health and longevity of seniors. The implementation of social security legislation also allowed older generations to receive income postretirement, so they were less financially reliant on extended family.
Children of elders also welcomed social security legislation, as it relieved them of long-term caregiving duties that often lasted until a parent’s death. Freedom from long-term caregiving meant that children, mainly the eldest daughter in the family, could marry sooner and begin a family of her own. Additionally, Medicare was legislated in 1965, which allowed older populations to have greater independence instead of relying on family members to meet their medical needs and/or financial


needs. Finally, sharp declines in family size meant that elders had fewer choices of children to live with as they aged. This meant that many elders did not have the option to live with family and instead had to resort to institutions, including hospitals, rehabilitation centers, or nursing homes in later life. From the 1940s until the early 2000s, the number of multigenerational households remained at all-time lows. Families were often encouraged by self-proclaimed family experts to focus solely on their own nuclear household and were discouraged from taking their parents in as roommates. In fact, family scholars and clinicians often encouraged adults to "cut the silver cord." In other words, children were instructed to limit contact with parents and to abstain from assuming any responsibility for their daily lives. In 1980, only about 12 percent of households were described as multigenerational, and although 1990 saw a slight increase in the number of multigenerational households, these numbers were nowhere near the levels witnessed prior to the onset of World War II.

Multigenerational Families Today
Despite this long decline and persistently low numbers, economic and family scholars argue that multigenerational households have experienced a resurgence (largely as a result of the serious recession that began in 2007), and scholars predict that the number of multigenerational families will continue to rise in upcoming decades. Currently, multigenerational families constitute a larger percentage of households than in previous decades (about 17 percent of households are now described as multigenerational). Experts suggest that there are several factors that have contributed to the rebirth of multigenerational households in today's society.
Important economic and societal forces are predicted to drive the growth of multigenerational families, including but not limited to housing shortages, rises in the cost of living and housing costs, job loss, changes in job status and underemployment, cuts to Medicare, a poor economy, rises in the number of unmarried mothers, increases in immigration, increases in the age of first marriage, greater availability of kin, and improvements in the health and longevity of individuals.


Factors That Influence Formation of Multigenerational Households
The United States saw a tremendous rise in multigenerational families following the recession of 2007. In fact, 2007 to 2008 alone saw an increase of 2.9 million Americans living in multigenerational households. Although the recession formally ended in 2009, the United States has yet to see a significant decline in the number of multigenerational households, as the economy still remains unfavorable. These trends suggest that multigenerational living is proving to be a viable, or in some cases last-resort, option for families, and thus the likelihood of families remaining in multigenerational homes is strong. Housing shortages and rises in cost of living tied to the recession mean that families will be less likely to be able to financially sustain independent living. Multigenerational living offers a promising alternative, as it allows families to pool economic resources and live collectively.

Increases in the number of unmarried mothers, as well as the age of first marriage, represent changes to family structure that affect multigenerational housing trends. In the case of unmarried mothers, it may be that these moms cannot adequately provide for children independently and must rely on extended family members for instrumental and social supports in raising children. Living with parents or other family members provides single mothers with the opportunity to cut down on independent living costs and receive needed assistance. Increases in the age of first marriage, on the other hand, suggest that children may live at home longer than they did in previous decades, especially those who are single and jobless. It may not be financially viable for them to live independently. In some cases, children may leave their parents' home for a brief period to attend school, but if the economy is weak and they cannot find jobs, they may return home to live with parents.
These children are referred to as “boomerang children.” The baby boomers are now middle-aged, financially independent adults, thus making them viable caregivers for aging adults and children. Additionally, individuals are living longer, healthier lives than ever before, thus allowing them to serve in extended caregiving roles. Given the noted improvements in the health of middle- and later-life adults (especially compared to previous decades), these populations

may be called on to care for parents, or even grandparents, who are in their 80s and 90s and whose health may be deteriorating. Though most older adults highly value their independence and wish to remain living independently for as long as possible, multigenerational living may offer elders needed medical assistance. In some cases, older adults suffering from loneliness or depression, particularly following the death of a partner or spouse, may also benefit from multigenerational living due to the sense of comfort it provides.

Experiences of Multigenerational Families
Multigenerational living can be both satisfying and stressful for families. In some cases, it may relieve financial hardship, while in others it may introduce relational tensions resulting from frequent, daily (sometimes unpleasant) interaction between family members. Although the "face" of multigenerational living is not uniform across the United States, scholars suggest that the decision to establish a multigenerational household may also be influenced by individual- and familial-level variables. For example, in cases where a middle generation parent is the householder and is providing care to an aging parent, decision making about who will provide care is likely influenced by the gender, proximity, and economic standing of each available family member. In most families, females are more likely to provide physical care (e.g., direct care) to aging parents, whereas males are more likely to provide financial support. Children who are closer in proximity to elders often provide more care than family members who live farther away. Also, family members with the greatest number of economic resources are more likely to be called on to provide financial assistance to family members in need than are those who report low-income or impoverished living. Poorer family members, however, are more likely to have family members living with them, as this is often the only means by which they can provide support.
Nearly all members of this multigenerational family, which consisted of parents, children, and grandmother in Eastport, Maine, worked at a local sardine factory in 1911. The growth of industry, industrialization, and immigration during this period prompted the subsequent rise of multigenerational households.

Power imbalances are often a key consideration of the multigenerational household, particularly in households in which children are providing care to aging parents or grandparents. Typically, the householder has the most power in the family and incoming family members are expected to adjust to and comply with the householder's rules and expectations. For example, parents who used to be in charge may be living with children or grandchildren who now assume the power role in the family. This role reversal may create challenges for family members. Multigenerational living can provide families with opportunities to engage in kinship care; however, in the event that families do not clearly establish and negotiate household guidelines, frustrations may become amplified. For example, consider a multigenerational household in which a grandparent serves as the householder and a mother and grandchild reside in the same household. If the grandparent and middle generation mother figure do not clearly delineate responsibilities related to disciplining the child, the grandparent and mother may experience tension in their relationship as a result of discrepancies in parenting choices and decisions. Grandparents who are unable to step out of the parenting role may be accused of interfering with the


mother–child relationship. Furthermore, children may experience loyalty binds or conflicts related to the confusion about whom they are to answer to or obey in the home. Children may express feelings of being caught in the middle if adult generations cannot negotiate and reach consensus about child-rearing issues. Conversely, some multigenerational families report child-rearing benefits as a result of having several generations living together under one roof. More family members often means that there are more individuals available to monitor children's activities and behavior and to provide guidance when needed. These considerations are especially true in instances where single mothers reside with their child and their parent, as single mothers rely more heavily on child-rearing support provided by extended family members. Adults, especially cohabiting or married partners, living in multigenerational households may experience disruptions in their romantic relationship when multigenerational living is sudden or unexpected. Spouses may disagree about establishing a multigenerational household. If they disagree and parents, parents-in-law, or children move into the home, it may create friction between the cohabiting or married adults. They may experience a loss of privacy as a result of multigenerational living. Given the current complexity of family relationships, multigenerational households may also include steprelatives, in-laws, or in some cases, partners or close friends of another member. Special considerations, including privacy and personal schedules of household members, may be warranted in these situations to determine how best to assimilate household members with one another, as well as with their new environment. Scholars urge family members to communicate about the implications of communal living and to establish a plan for its transition.
Needs of Multigenerational Families
Multigenerational families have unique needs that must be met by society for them to adequately function. Legal help, mental health resources, and housing are a few examples of these needs. In terms of legal assistance, multigenerational families are often forged out of financial or health-related necessity. Often, multigenerational family members do not share the same legal rights (e.g., a grandparent


does not have the same right to a grandchild as the parent does). In cases where legal decisions need to be made on a family member's behalf, multigenerational families may face obstacles in exercising their rights. Stress-related illness is common in multigenerational households; thus multigenerational families must ensure that adequate resources are in place to deal with stressors related to caregiving. Given the complexities of insurance policies, multigenerational members are often not afforded health coverage; as a result, these families may not have access to necessary mental health supports in times of stress.

Housing is also a concern for multigenerational families. Though homeowners have more freedom in making decisions about multigenerational living and accommodations, rental housing rules and regulations, particularly among low-income populations, may not allow families to accommodate several family members. In fact, some leases may state that tenants who allow more than an agreed-on number of people to live in their housing unit will face eviction. Even in cases where houses do allow for multiple generations to reside together, it is unlikely that the rental home will meet the needs of each family member (i.e., a play area for the child and disability accommodations, such as a ramp or handrails, for the aging grandparent). Scholars argue that to meet the growing needs of multigenerational families, both now and in the future, growth in the number and availability of affordable housing options is necessary.

Annamaria Csizmadia
University of Connecticut

See Also: Caring for the Elderly; Extended Families; Family Values; Grandparenting; Immigrant Families; Intergenerational Transmission.

Further Readings
Bengtson, V. L. "Beyond the Nuclear Family: The Increasing Importance of Multigenerational Bonds." Journal of Marriage and Family, v.63 (2001).
Pew Research Center.
"The Return of the Multi-Generational Family Household." Washington, DC: Pew Research Center (March 18, 2010).
Ruggles, S. "Multigenerational Families in Nineteenth-Century America." Continuity and Change, v.18 (2003).

Multilingualism
Multilingualism is important to the social history of American families as it applies to the rapid ethnic diversification of the American citizenry and the evolving sensibilities about ethnic identity that lie at the heart of discussions about American nationalism. The term's commonsensical usage reflects an emerging acceptance of demographic changes in the ethnic composition of American society. Multilingualism functions in the daily lives of many American families in the contemporary era.

Multilingualism Explained
Multilingualism is the practice of polyglotism, more commonly known as the use of multiple languages by an individual speaker. Multilingualism's rise in visibility stems from America's increasingly globalized economy and cultural awareness. Although there is some debate about what degree of proficiency qualifies a speaker as multilingual, along the lines of native-level speaking versus knowledge of phrases at a less native level, most multilingual speakers fall somewhere between the ends of the spectrum. Spoken fluency requires prolonged exposure to a given language; thus, extensive multilingual speakers are generally understood to be those who have a mastery of basic communicative skills and an intrinsic understanding of the grammatical rules and vocabulary of the language rather than near-native-level fluency. Often multilingual speakers have acquired and maintain their native language in addition to at least two others with which they have varying levels of accomplishment and familiarity.
Although multilingual ability can be achieved exclusively through classroom instruction, the primary and most effective means of developing this ability is through “total immersion,” whereby the speaker is immersed, as a practical matter, in the culture and history of the region in which the language originated and developed so that he or she may become more familiar with the idioms and eponyms of that language. These two methods represent different cognitive processes that reflect the learner’s age at the time the additional languages were acquired. It is well established that the younger a person is, the fewer obstacles exist to acquiring additional languages, due to a physiological concept known as structural
plasticity of the brain, which refers to a greater density of gray matter in the inferior parietal cortex and its cytoarchitecture, or the ways in which neurons are hierarchically structured. However, this theory is debated, as some contend that a mixture of experiential and genetic factors predisposes some people to easier language acquisition than others. Multilingualism finds its greatest visibility in social circumstances where cultural variations arise. Americans’ Linguistic History English has been the dominant language since the colonial era. Although colonists encountered Native Americans who spoke a wide variety of unwritten languages, by the mid-19th century the federal government began to adopt a policy that officially discouraged the use of languages other than English in newly acquired states. California passed legislation in 1855 that prescribed the use of English as its official language, and by the 1920s most states possessed laws that required English-only public school instruction. It was not until 1974, in the Supreme Court case Lau v. Nichols, that the government required public school systems to provide non-English-speaking students with “basic English skills” for them to profit from their attendance. Recent data show that the states with the highest percentage of English language learners are California, Texas, Florida, New York, and Illinois; nationally, 75 percent of students with limited English proficiency are native Spanish speakers, while 7 percent are native speakers of Asian languages. During the 1970s multilingualism arose along with social changes that reflected a society that appeared to embrace multiculturalism’s tenets in tandem with the passage of the Equal Educational Opportunities Act, but only a few years later opposition to a multilingual society arose following the election of Republican President Ronald Reagan. 
Currently, New Mexico and Texas (English and Spanish) and Louisiana and Maine (English and French) function as unofficial but de facto bilingual states. Hawai‘i, American Samoa, Guam, the Northern Mariana Islands, and Puerto Rico are all officially bilingual. Twenty-three of the 50 states have no official language, while the remaining states make English the official language. At the federal level, the United States has no official language, though there have been some efforts to make English the nation’s official language.
“English only” laws have been construed by many in the legal community as a violation of the First Amendment right to communicate with and petition the government, in addition to the principles of free speech. Linguistic Identities American society’s monolinguistic culture differs from most of the world, whose population makes use of two or more languages during a lifetime and where many grow up bilingual either as a result of having two native speakers as parents or by virtue of the family language differing from that of the community in which they live. Additionally, many people in other countries acquire a third language as a result of migration. In the United States, standard English is typically spoken by those with prestige and power, whereas “nonstandard” English is commonly associated with those at the lower end of the socioeconomic scale, such as the variety referred to as “African American vernacular English,” though its practitioners are often bidialectal—capable of moving between both versions of the language. Thus the use of language communicates specific messages about one’s cultural identity that are inextricably linked to the fluency one possesses in that language or languages. Social distance and status are conveyed through language and supplement nonverbal methods as a means to communicate meaning. Multilingual practitioners can identify and interact appropriately with the range of people in whose native languages they possess a high level of literacy. The ability of multilingual practitioners to successfully communicate across a wide array of racially and ethnically diverse communities and constituencies is vitally related to American socioeconomic competitiveness. In an increasingly globally competitive economy, American corporations and companies regularly seek out multilingual employees for their communication skills. 
Moreover, multilingualism’s utility in a business world based on free market capitalism warrants a careful approach that embraces, rather than rejects, cultural differences. Because many businesses and corporations operate in a multi-office, global environment, it is imperative that American public policy reflect a judicious approach that acknowledges the relationships between parent companies’ host countries and their subsidiaries. A multilingual workforce, therefore, becomes a substantial benefit to globally
distributed corporations that maintain offices in an array of nations. To facilitate good business relationships it is imperative that American corporations procure and endorse multilingual training programs and multilingual employees who are capable of efficiently and accurately translating and communicating messages between parties within the same company and external to that company. Multilinguistic Families and Communities in American Society American culture harbors antagonism toward other languages, and those who regularly speak immigrant languages are seen as a threat to the ideals of national unity that are embodied in the use of English. The advocacy for increased global competency is thus to some degree undermined by an ideological insistence that English exclusivity preserves a national identity that takes precedence over the cultural diversity intrinsic to multilingualism. Much of this is a fear of difference that stems from the monolinguistic English speaker passing through city communities where English is the minority language, which perpetuates the image that these languages are equal to or may replace English’s cultural and pragmatic dominance. In racially and ethnically diverse communities where multiple immigrant languages are spoken, a common concern is the belief that language carries the cultural values that tie such communities together. If language is a marker of culture, then fluency in both immigrant and native languages is an important aspect of one’s identity and of the degree to which one has assimilated or rejected assimilation with the country in which he or she resides. Racially and ethnically diverse immigrant communities generally adopt English within one generation of their arrival. The decision to adopt English within these communities is particularly complex, as it represents a negotiated agreement to present an image of national unity balanced with a commitment to one’s cultural origins. 
The degree to which English is adopted and the frequency with which it is used, either in the home or publicly, invariably reflects this careful negotiation. Public versus private use of English or other immigrant languages generally is predicated on one’s English fluency, public perception, age, education, and a variety of other factors. Of particular scrutiny for these communities are the ways in which native languages are decidedly
used at home while purposely avoided in public settings. Moreover, multilingual parents who wish to teach their children language skills must be especially deliberate in the instruction of native languages. The decision to educate children in native languages is also a decision to encourage the use of those languages, at least insofar as it occurs in a home environment where the threats of ethnic and racial hostilities are lower than in public settings. Additionally, because multilinguistic parents and children must interact to some degree with American society, the use of English is necessary for survival, thus rendering members of these communities captive to the influences of the American culture commonly referred to in the research as “agents of Americanization.” Familial relationships are often supported by the intimacy that accompanies the use of immigrant languages in the home or in communities where English is less publicly dominant. Multilinguistic parents who are socioeconomically well off and possess a high degree of education are more likely to conduct language maintenance in their native tongue than those who are poor and possess less education. The socialization of children in such multilinguistic families is frequently left to the mother, who is in a key position to transmit her linguistic knowledge (of vocabulary, style, grammar, etc.). However, those parents may also weigh the benefits and relative cost associated with such instruction against the merits of assimilation to English-dominant society as a strategy for successful integration of their children. Occasionally, adolescent children may find themselves in better linguistic positions than their parents due to their multilinguistic knowledge and experience navigating both American and native cultures. Frequently these children can bridge the linguistic gap, providing a crucial, practical means of survival for such families. 
Also, although such circumstances may threaten the stability of the family hierarchy, multilinguistic skills nevertheless remain an important way in which such children can maximize the benefits of living in the United States and can occasionally influence their parents’ language use. Because multilingualism represents a willingness to embrace cultural diversity, an English-only society, according to some, demands a type of cultural imperialism
whereby integration and assimilation will necessarily result in the disappearance of ethnic differences, thus making a large-scale nationwide “melting pot” impossible. Michael Johnson, Jr. Washington State University See Also: Acculturation; “Anchor Babies”; Assimilation; Childhood in America; Immigrant Families; Immigration Policy; Language Brokers; Latino Families; Melting Pot Metaphor; Multigenerational Households; Parents as Teachers; Segregation; Social Mobility; Urban Families; Working-Class Families/Working Poor. Further Readings Blackledge, Adrian and Angela Creese. Multilingualism: A Critical Perspective. Advances in Sociolinguistics Series. London: Bloomsbury, 2010. Dicker, Susan J. Languages in America: A Pluralist View. Trowbridge, UK: Cromwell Press, 2003. Linguistic Society of America. “Multilingualism.” http://www.linguisticsociety.org/resource/multilingualism (Accessed November 2013). Sollors, Werner. Multilingual America: Transnationalism, Ethnicity, and the Languages of American Literature. New York: New York University Press, 1998. Tuominen, Anne. “Who Decides the Home Language? A Look at Multilingual Families.” International Journal of the Sociology of Language, v.140 (1999). World Health Organization. “Multilingualism and WHO.” http://www.who.int/about/multilingualism/en (Accessed November 2013).

Multiple Partner Fertility Multiple partner fertility (MPF) is defined as having children with two or more birth partners (also known as multipartner fertility and multipartnered fertility). Although this definition broadly applies to married or unmarried birth partners, MPF is most commonly studied among nonmarital birth partners and disadvantaged populations. Over the past decade, concerns about the well-being of children in diverse family structures and the limitations of
public policies in addressing the needs of such complex families have driven research on MPF. Relationships in the context of MPF are often complicated by economic hardship and intricate kin networks. These circumstances affect children and adults. History and Prevalence The steady rise and ultimate plateau in divorce since the 1950s were accompanied by a decrease in marriage and a steady increase in nonmarital births, resulting in more opportunities for fertility across multiple partnerships. Prior to the 1960s, MPF was infrequent, primarily occurring in the context of widowhood. Over the ensuing decades, MPF became more common as both divorce and remarriage increased in frequency. However, the term multipartnered fertility was not introduced until 1999 by Frank Furstenberg and Rosalind King when they described fertility patterns of low-income adolescent mothers, and research in the past decade has primarily focused on MPF in nonmarital birth families. Information on the prevalence of MPF is limited. National household surveys focus on living arrangements within households and lack detail about fertility across relationships and households. Estimates from the Survey of Income and Program Participation show that the percentage of children under 18 years old living with half-siblings (children who share only one biological parent) fluctuated between 10 percent and 12 percent over the past two decades. However, these estimates do not include half-siblings who live in separate households. Other data sources such as the National Survey of Family Growth (NSFG), the National Longitudinal Survey of Youth (NLSY), and the National Longitudinal Study of Adolescent Health have produced estimates of MPF among men and women at different points in their fertility histories. Data from these sources underestimate the prevalence of MPF because participants were interviewed at different ages, and subsequent births with new partners are not reported. 
Also, male–female comparisons are limited by the measurement of MPF at different age ranges. Limitations notwithstanding, NLSY data showed that in 2006 almost 19 percent of all women ages 41 to 49 had experienced MPF. Among women with two or more children, almost 28 percent had experienced MPF. Data from the NSFG showed that almost 8 percent of all men had experienced
MPF. Among all fathers ages 15 to 44, 17 percent reported MPF, and for those ages 35 to 44, 3 percent had children by three or more partners. MPF has become more common among recent cohorts of men. Men with first births between 1985 and 1994 were more likely to have a birth with a second partner than those in the decade before. This was not true for women. However, between 1985 and 2008, the rate of MPF increased among those who were white, had more education, and had higher incomes; it decreased for those who were African American, not high school graduates, never married, and in the lowest income quintile. MPF remains most prevalent among black non-Hispanic women and men. Among women ages 19 to 25, 7 percent of black women experienced MPF compared to 3 percent of white women and 2 percent of Hispanic women. Black non-Hispanic men are most likely to experience MPF, followed by Hispanic men and white non-Hispanic men. The extent to which MPF is caused by racial/ethnic or socioeconomic differences is unclear. Other factors associated with racial/ethnic differences may be the cause of both MPF and socioeconomic disadvantage. For example, men and women who did not live with both biological parents as children are more likely to experience MPF, more likely to be black than white, and more likely to have less than a high school diploma. Other factors such as being the child of an adolescent mother and having a history of incarceration (for men only) are similarly associated with race/ethnicity, socioeconomic disadvantage, and MPF. Circumstances surrounding first births, such as age and relationship commitment, may account for MPF above and beyond race/ethnicity and socioeconomic factors. Among men and women, having a younger first birth increases the likelihood of experiencing MPF. Specifically, over one-third of the men who have experienced MPF fathered their first child prior to their 20th birthday, compared to 11 percent of fathers with single-partner fertility. 
Additionally, MPF is more prevalent among parents in less committed first-birth relationships. Women ages 19 to 25 who report no contact with the father of their first child following news of the pregnancy are more likely to experience subsequent MPF, as are women not living with the father of their first child at the time of the birth. However, whether they live together or not, mothers are less likely to have subsequent MPF when birth partners are involved
with the children and when their coparenting relationships are supportive. MPF is common among unmarried partnerships, which are particularly unstable. Approximately 40 percent of all children are born to unmarried parents, and 60 percent of nonmarital births include at least one partner who had a child with a prior partner. Among women with a nonmarital first birth and a subsequent birth, 40 percent had children with multiple partners, and estimates suggest that over 50 percent of first-born children of unmarried mothers will have at least one half-sibling by age 12. Family complexity varies in these families. Children with different degrees of biological relatedness may share a common household. For example, a mother may live with a new birth partner, their biological child, and children from two prior relationships. Alternatively, groups of siblings may live across several households, as when a father has children with two birth partners, and the children live with their mothers. Fertility-driven family complexity also increases the risk of further complexity. Mothers and fathers entering additional birth partnerships are more likely than those in first-birth partnerships to do so with new partners who have prior children. Here, children are biologically related to three or more sets of siblings who share links with four or more parents. For example, a daughter may live with her mother, stepfather (mother’s new partner), and their biological child. Contributing to the complexity, the daughter may have a relationship with her biological father who lives elsewhere, and the stepfather may maintain contact with his children from his prior partnership. Relationships Within and Across Households MPF often results in complex kin networks, obligations, and negotiations across households. Perhaps partially because of such complexity, MPF families face considerable instability compared to families with first-birth partners. 
However, the stability of new birth partnerships is somewhat dependent upon whether mothers, fathers, or both have prior children. Among urban parents, MPF fathers are less likely to marry at the birth of a child to a new partner. MPF does not diminish the likelihood of marriage for mothers at the birth of a new child. However, marriage is less likely and relationship dissolution is more likely within five years of this
child’s birth when either new partner has a child by another. Family problems are prevalent in MPF. Compared with couples in first-birth partnerships, lower levels of relationship quality and coparenting support are present when only the father or both the father and mother have children by previous partners. Relationship quality and coparenting support are not affected when MPF mothers couple with a first-birth father, and unmarried mothers’ resident children from prior relationships foster family cohesion when mothers and children embrace the new birth partner as a father figure. Fathers are often nonresident parents following birth partnership dissolutions. For new birth partnerships, cross-household negotiations over coparenting arrangements and resource allocation are typical sources of conflict. Also, sexual jealousy between prior birth partners and new unmarried birth partners contributes tension, particularly when such partnerships occur in close succession. When new relationships are tenuous, mothers often encourage fathers to devote resources and time to the new family. Correspondingly, fathers pay less child support and visit nonresident children less frequently when they have children with new partners or when mothers have children with new partners. However, these patterns vary by race and ethnicity. Consequences of Multiple Partner Fertility Despite the growing prevalence of diverse family structures in the United States, guidelines for family roles in the context of MPF are unclear. Such role ambiguity may lead to distress and poor outcomes for men, women, and children. Communication among half-siblings in married stepfamilies is similar to that of full siblings, and both of these groups differ from stepsiblings in communication quality. Half-siblings and full siblings are engaged in similar levels of both positive and negative communication, whereas stepsiblings are less engaged in both. 
However, half-siblings experience more sibling rivalry than do full siblings. Also, children living with half-siblings are at greater risk for poor academic performance, behavioral problems, and depression. Little research has focused on outcomes of children with half-siblings in other households. Living in households without both biological parents is associated with financial hardship and poor mental health
outcomes for mothers and children. Also, a growing body of research suggests that transitions from one family structure to another contribute to poor outcomes for mothers and children. Although this research is not directly linked to MPF, such changes are a consequence of it. Family structure transitions are associated with financial hardship, lower levels of social support, depression, and poor parenting practices among mothers. Children exposed to family structure changes are more likely to experience behavioral problems, lower cognitive test scores, less school readiness, and worse physical health. Poor child outcomes may partially result from lower quality parenting practices in these families. Research on the mental health consequences of MPF for men and women is also limited. Upon the birth of a child, mothers and fathers experience heightened parenting stress when their current partners have children by prior partners. Also, MPF mothers and fathers are more likely to be depressed, but evidence suggests that MPF likely results from depression rather than vice versa. Specifically, fathers’ depression increases the likelihood that relationships with first-birth partners dissolve, leading to opportunities for subsequent fertility with new partners. For fathers, both MPF and depression are linked to behavior problems of young resident children. Many of the problems associated with MPF may result from economic hardship. Economically disadvantaged men and women are most likely to experience MPF, and MPF results in deepening levels of economic hardship as resources are spread thin. The need for economic provision across households can extend beyond the ability of fathers to provide such support. Faced with conflicting demands, some fathers ultimately choose not to maintain contact with some children. Ethnographic research chronicles a subgroup of fathers who have experienced MPF with little intention to invest in children. 
These men are typically engrossed in urban street culture where value systems emphasize sexual conquest over domestic family life. Although the prevalence of these cases is unknown, research suggests that a substantial majority of MPF fathers intend to invest in one or more sets of children. Raymond E. Petren Florida State University
See Also: Child Support; Fertility; Fragile Families; Half-Siblings; Stepfamilies; Urban Families. Further Readings Carlson, Marcia J. and Frank Furstenberg, Jr. “The Prevalence and Correlates of Multipartnered Fertility Among Urban U.S. Parents.” Journal of Marriage and Family, v.68 (2006). Guzzo, Karen Benjamin and Frank Furstenberg, Jr. “Multipartnered Fertility Among Young Women With a Nonmarital First Birth: Prevalence and Risk Factors.” Perspectives on Sexual and Reproductive Health, v.39 (2007). Meyer, Daniel R. and Maria Cancian. “‘I’m Not Supporting His Kids’: Nonresident Fathers’ Contributions Given Mothers’ New Fertility.” Journal of Marriage and Family, v.74 (2012). Monte, Lindsay M. “Multiple Partner Maternity Versus Multiple Partner Paternity: What Matters for Family Trajectories.” Marriage & Family Review, v.47 (2011).

Multiracial Families Multiracial families, also known as “interracial families” or “mixed-race families,” represent family units that consist of family members of different racial backgrounds. Multiracial families typically emerge as a result of marriage, cohabitation, civil union, remarriage, or transracial adoption. According to the 2010 U.S. Census Brief titled “Households and Families: 2010,” there are about 2.2 million multiracial households in the United States. Recent decades have seen a substantial increase in the number of multiracial families. Some of the factors that explain this upward trend include the elimination of segregation and antimiscegenation laws, integration of work and educational settings, growing social acceptance and public recognition of interracial unions, and an increasing number of American families who adopt children outside their own race. Like their monoracial counterparts, multiracial families have developmental tasks, such as forming a functional family identity, managing family boundaries, and operating the family system in a way that promotes the well-being of each individual family member and the family unit as a whole.

For multiracial families, however, mastering these family developmental tasks may present some challenges that are uniquely associated with the family’s multiracial composition. For example, multiracial families must form a family identity that incorporates the racial heritage of each family member. They also have to develop ways to cope with social disapproval, racism, and/or discrimination. Finally, parents, stepparents, and other parental figures must teach children about the concept of race, their own and other family members’ racial status, and how to navigate relationships with individuals in and outside their race. This parenting behavior, which is known as ethnic-racial socialization, is particularly important for multiracial children and transracially adopted children because it can promote or prevent the development of a positive ethnic-racial identity, which in turn has implications for children’s social-emotional and academic adjustment. Children in multiracial families may be multiracial or transracially adopted but not necessarily of mixed race. For children in multiracial families, one of the most important developmental tasks involves development of a positive ethnic-racial identity. Their identity formation may be complicated by the lack of same-race adults. Multiracial children and transracially adopted children may not have an adult family member with whom they can readily identify in terms of race. Additionally, they may face social challenges from peers at a time when the importance of social acceptance increases, such as during middle childhood and adolescence. Children in multiracial families, however, also have the opportunity to develop bicultural competence and a certain cognitive flexibility associated with negotiating multiple cultural (sometimes even linguistic) systems. 
Definition and Formation of Multiracial Families Multiracial families are sometimes referred to as “multicultural families” or “multiethnic families,” likely because in popular language, ethnicity, race, and culture are used interchangeably. This entry focuses on multiracial families defined as families whose members come from different racial backgrounds. For example, a family composed of an Asian American father, a Latina mother, and their biological daughter would be considered multiracial. A family made up of a Chinese American husband and a Japanese American wife represents a multiethnic but not a
multiracial family because both Japanese and Chinese belong to the same racial group, although they are different ethnicities. Multiracial families form when individuals of different racial groups marry or cohabit, or when one person or a same- or mixed-race couple adopts a child transracially, that is, a child of a different race. Multiracial families may be made up of same- or different-sex parents and their biological or transracially adopted offspring. Families consisting of grandparents or other extended family members raising a child whose race is different from their own would also be considered multiracial. The many avenues that lead to multiracial family formation result in a variety of family structures and in turn contribute to diverse family processes among contemporary multiracial families in the United States. Furthermore, they highlight the importance of considering the unique developmental ecology that each multiracial family represents for children’s development. History of Multiracial Families Prior to the 1950s, multiracial families made up a rather small proportion of U.S. families. De jure and de facto segregation of racial groups and legal prohibition of interracial marriages coupled with widespread social disapproval of interracial relationships represent two significant social influences that historically prevented the growth of the multiracial family population. In the United States, segregation and antimiscegenation laws along with strict social control over interracial relationships kept members of different racial groups apart (e.g., black Americans and white Americans) for more than 200 years. However, the legally codified desegregation of educational and work settings starting in the 1950s contributed to a gradual shift in social attitudes toward interracial relationships and eventually to a steady increase in the number of multiracial families. Since the 1967 Loving v. 
Virginia Supreme Court case, which declared antimiscegenation laws (i.e., legal prohibition of intermarriages) unconstitutional, the number of interracial marriages and interracial unmarried couples has surged. Some sociologists argue that as legal barriers against interracial unions decrease, the number of individuals who date and/or marry across racial lines increases. As interracial relationships become more prevalent, social acceptance of such unions
also grows, which leads to further increase in the number of people who engage in interracial relationships. In the wake of the 1967 landmark legal decision, interracial coupling has become increasingly socially accepted. In fact, June 12, the day of the 1967 Supreme Court decision, has been designated as Loving Day—a day of celebration for interracial families. In addition to interracial romantic relationships including marriages, transracial adoption has become more widespread in recent decades. In the last 30 years, the number of multiracial families has expanded due to growing numbers of white families who adopt a child of a different race. Transracial adoptions often, albeit not always, involve international adoptions. The number of multiracial families that formed as a result of parents adopting children from Asia, particularly China, is considerable. In addition to Asia, children from African and Latin American countries have also been adopted in significant numbers by European American parents. Along with international transracial adoption, domestic transracial adoption also leads to the formation of multiracial families. Domestic transracial adoption often involves white American parents adopting African American, Native American, Hispanic, or biracial children. Laws such as the 1994 Howard M. Metzenbaum Multiethnic Placement Act and the 1997 Adoption and Safe Families Act have facilitated transracial adoption of children within the United States and from abroad. Multiracial Families Today The easing of legal and social barriers around multiracial family formation has led to a substantial and growing presence of this subgroup in the U.S. family population. The U.S. Census Bureau estimated that in 2010 there were approximately 2.2 million multiracial households in the United States. 
This estimate includes interracial married couples, interracial unmarried partners (both same- and different-sex), families with transracially adopted children, grandparents who are raising a different-race grandchild, and extended family arrangements in which family members of different races reside in the same household. Multiracial families are not evenly distributed across the United States. According to the latest census, in the continental United States, the largest proportion of marriages made up of different racial
backgrounds is found in the West (11 percent). Smaller proportions of multiracial married couples live in the Midwest, the Northeast, and the South (4 to 6 percent). Three states have particularly high proportions of interracial marriages: Hawai‘i, Oklahoma, and Alaska. For example, according to the 2010 census, 37 percent of all marriages in Hawai‘i are interracial. Demographers have surmised that the high percentage of interracial marriages in Hawai‘i, Oklahoma, and Alaska is due in part to the relatively large presence of Native peoples in these three states.

Challenges of Multiracial Families
The mixed-race nature of multiracial families presents added challenges for normative family developmental tasks such as family identity formation, family boundary management, and parenting behaviors that enable parents to help children negotiate social relations in and outside the family. Every family must form an identity that represents the entire family unit; this process, however, is complicated by the mixed racial composition of family members in multiracial families. Multiracial families thus have the added challenge of forming a family identity that reflects and recognizes each family member's ethnic-racial background and represents the ethnic-racial diversity of the family system as a whole. Family members' internalized racism and prejudice, as well as experiences of discrimination and prejudice from outside the family unit, may make formation of a positive ethnic-racial family identity difficult. In addition to family identity formation, multiracial families must also manage boundaries around the family unit. Managing family boundaries in these families may be a complex task because each family member must be able to cope with a lack of social acceptance, face questions about family members' origins, respond to comments that question the family identity, and protect other family members from prejudice and racism.
Finally, parents in multiracial families have the added task of actively engaging in parenting behaviors that support children’s healthy ethnic-racial identity formation and teach them how to negotiate relations with same- and different-race peers and adults. Messages that build children’s pride in their ethnic-racial heritage are especially important for positive ethnic-racial identity development. Additional supportive parenting behaviors include
preparing children for how to respond to negative ethnic-racial stereotypes, racism, and discrimination. Multiracial families who reside in ethnically and racially diverse communities can draw on social support from community members more so than families of similar composition who live in ethnoracially homogeneous neighborhoods.

Children in Multiracial Families
Similar to children in same-race families, children in multiracial families may be one parent's or both parents' biological offspring. Alternatively, they may have been adopted into the family. Thus, children in multiracial families may or may not themselves be multiracial. Both multiracial biological children and transracially adopted children have unique needs and challenges that can pose vulnerabilities for their social-emotional and academic adjustment. A central
issue for both groups of children is formation of a healthy ethnic-racial identity. Much research suggests that a positive ethnic-racial identity is associated with better youth outcomes, such as higher levels of self-esteem, lower levels of depression, and better academic adjustment. To form a healthy ethnic-racial identity, multiracial children need to be exposed to both sides of their racial backgrounds. In addition, they must have the opportunity to discuss with family members their mixed-race heritage and receive positive feedback in their quest to negotiate a racial identity that befits them. Research shows that multiracial youth understand race as a fluid concept, and thus they may change their racial identity across contexts and over time. Whereas some multiracial youth identify with one race, many choose to embrace a biracial identity. Yet others develop a situational identity that can shift across contexts. Finally, there are multiracial youth who refuse to define themselves in racial terms. Research also suggests that allowing multiracial youth to choose an identity is associated with more positive psychological adjustment than forcing them to choose one particular identity. Multiracial youth also need positive family support related to their racial background because they may experience questioning of their racial heritage and denial of their chosen racial identity. Transracially adopted children, too, have specific developmental needs. They need to be able to access information about their cultural, ethnic, and racial heritage and have opportunities to interact with adults and peers who share their background. Multiracial families of transracially adopted children can support children's development by participating in cultural activities with children, by having regular and open discussions about children's heritage, and by cultivating a social network that includes adults and children of diverse ethnic-racial backgrounds.
Despite some challenges that children in multiracial families may experience in terms of ethnic-racial identity formation and peer acceptance, they also enjoy some developmental advantages associated with their racially diverse family background. Research suggests that they tend to possess bicultural competence, that is, the ability to function effectively in two cultures. In addition, their exposure to multiple cultural and often linguistic systems enhances their cognitive flexibility, which
has benefits for academic achievement. They are also said to be less prejudiced and more tolerant of individual differences.

Annamaria Csizmadia
University of Connecticut

See Also: Adoption, International; Adoption, Mixed-Race; Civil Rights Movement; Interracial Marriage; Miscegenation.

Further Readings
Child Welfare Information Gateway. “How Many Children Were Adopted in 2007 and 2008?” Washington, DC: U.S. Department of Health and Human Services, Children's Bureau, 2011.
Csizmadia, Annamaria, David L. Brunsma, and Teresa M. Cooney. “Racial Identification and Developmental Outcomes Among Black-White Multiracial Youth: A Review From a Life Course Perspective.” Advances in Life Course Research, v.17 (2012).
Csizmadia, Annamaria, J. P. Kaneakua, M. Miller, and L. C. Halgunseth. “Ethnic-Racial Socialization and Its Implications for Ethnic Minority Children's Adjustment in Middle Childhood.” In Socialization: Theories, Processes and Impact, E. L. Anderson and S. Thomas, eds. Hauppauge, NY: Nova Science Publishers, 2013.
Robinson-Wood, T. “It Makes Me Worry About Her Future Pain: A Qualitative Investigation of White Mothers of Non-White Children.” Women & Therapy, v.34 (2011).
Rollins, Alethea and Andrea G. Hunter. “Racial Socialization of Biracial Youth: Maternal Messages and Approaches to Address Discrimination.” Family Relations: An Interdisciplinary Journal of Applied Family Studies, v.62 (2013).
Samuels, G. M. “‘Being Raised by White People’: Navigating Racial Difference Among Adopted Multiracial Adults.” Journal of Marriage and Family, v.71 (2009).

Music in the Family

Music, an integral part of family and social life, is also conveyed in extrafamilial groups, or fellowships. Music varies from home to home due to a
number of influences. Sources about music's role before the present century were primarily diaries and personal letters, most from the affluent in society. Regional genres, such as blues and jazz, or the instrument most used (piano, fiddle, fife) were a by-product of the geographic area. Economic status often dictated which instruments or types of music one performed or heard. Other geographic factors that affected music included urban versus rural settings and the types of industry each region had (manufacturing or agriculture). Throughout the history of the United States, music's role has changed as well. Technology, both print and sound, has developed and changed to a greater degree in recent years. Music is delivered via myriad technological gadgets, each with its own sphere of influence.

Sources of Music
Much of what is known about musical life in earlier America comes from diaries and compilations of personal letters. One of the foremost writers was George Templeton Strong (1820–75). His diaries describe the concert scene in the 1850s and 1860s, particularly in New York City. Other publications, such as daily newspapers, listed public events for families to attend. John Sullivan Dwight (1813–93), editor of Dwight's Journal of Music, also described the musical life of the 19th century. Annual sheet music production increased from 600 titles in the 1820s to 5,000 in the early 1850s, and by the 1870s had mushroomed to more than 200,000. Many families collected music and had printers or binders assemble these collections. Binder's collections were found in the homes of doctors, attorneys, and other highly paid professionals. The most affluent in society were best able to consume such entertainment, including printed music and musical instruments, particularly pianos. In 1866, after the end of the Civil War, Americans purchased 25,000 new pianos at a cost of $15 million. Each year production increased, and by 1890, 232,000 pianos were manufactured annually in the United States.
Some department stores printed music and issued it as a complimentary item; on the last page was a catalog of products sold by the store. These catalogs demonstrate that the material culture of the 19th century was similar to that seen today, particularly personal care products such as teeth whiteners, hair restoration liquids, and pure drinking water.

Chronology
Perceptions of music differed among families in colonial, revolutionary, and antebellum homes. During the 18th century, Americans viewed music, particularly refined classical music, as part of aristocracy. Amateur music making became more popular after the American Revolution, and together with increased economic prosperity, more people partook of this activity. Attitudes toward theaters, actors, and music were generally unfavorable. In addition to viewing actors as immoral or corrupt, the theatrical venue exuded the aromas of tobacco, smoke, and alcohol. Antitheater laws prevailed in Philadelphia until 1789. Composers, such as Benjamin Carr (1768–1831) of Philadelphia, published their own and others' sheet music, which was growing in demand. The 19th century also saw a rise in communications via telegraph and telephone, and in transportation via the railroad. During the antebellum period, most Americans engaged in agriculture, and most of the population resided on small farms or in small towns. Home circles were larger than the nuclear family and included grandparents. American society, regardless of class, assembled in private homes more than in public buildings. The parlor, also called a drawing room, became the room for activity, as many families installed a piano there. In South Carolina, for instance, women would gather after breakfast in the drawing room, where they copied music, practiced instruments such as the piano, guitar, or flute, and pursued additional cultural activities. The Civil War brought increased production of sheet music and developing genres, such as ballads. One of the primary forces in music publishing and composing, particularly for the Union, was George Root (1820–95). He composed tunes such as “The Battle Cry of Freedom” and “Tramp, Tramp, Tramp,” and managed a publishing house in Chicago. Only the Chicago fire of 1871 stopped his sales of sheet music, but his copper plates and copyrights survived.
During the Gilded Age, in the late 19th century, periodicals such as Western Rural became platforms to educate rural Americans about music. Pastiches retained the melodies of known tunes but altered the words to fit social or political themes, such as temperance, abolitionism, or labor equality. American Indians communicated via rhythms and pitches slightly different from those of European settlers. The 20th century
saw the development of technology, from cylinders to compact discs.

Genres by Age of Musicians, Era, or Region
Each era had certain forms and genres that became popular. In the 19th century, ballads, or short narrative songs, became popular. Many of these fit melodically into known tunes; thus, the broadside could be published en masse. Topics included current and ephemeral issues, such as politics, the economy, and notable persons. From the pre–Civil War era to the Reconstruction period, blackface minstrelsy could be found in all parts of the country. Daniel Decatur Emmett (1815–1904), a white banjo virtuoso, led Bryant's Minstrels and the Virginia Minstrels. Emmett also became known for composing the words and music to “Dixie.” Others of prominence included Campbell's Minstrels, Lloyd's Minstrels, and Edwin Pearce Christy's Minstrels. The typical ensemble consisted of a vocal soloist accompanied by a fiddle, one or two banjos, tambourine, and bones. Minstrelsy evolved into vaudeville, a variety show dominant during the late 19th and early 20th centuries. The age of the musician or listener in the family also determined the music they heard or performed. For many children, nursery rhymes were set to music. During the 1920s and 1930s, an American educator, J. Lilian Vandevere, issued rhythm band and orchestra music for narrator, sleigh bells, bells, triangle, rhythm sticks, wood block, tambourine, cymbals, drum, xylophone, piano, and children's chorus. Some of these were called Orff instruments, based on the pedagogical style of Carl Orff (1895–1982). Many of these pieces were based on famous classical themes, folk tunes, or songs surrounding holidays (such as Christmas and Easter). Folk and fiddle music were represented in the mountainous areas and on southern plantations, especially during the antebellum period. Plantations, often places of heavy labor by slaves during the antebellum period, offered African Americans a respite around Christmas.
Plantation owners gave extravagant Christmas dinners, and all laborers were invited. After the dinners concluded, many listened to singing and participated in country dances, such as the Virginia reel. Fiddlers performed popular tunes of the day—“The Devil’s Dream,” “Black-Eyed Susan.” Slave families experienced a holiday that lasted three days, the only multiple-day vacation owners granted during the
year. They also sang songs from Africa for the master. Some blacks played banjos in their own homes and with the fiddlers. Blues, a poetic genre primarily sung and played by black musicians, developed in east Texas and southern Louisiana. Many blues songs exhibit a deep structure, with 12-bar phrases, rhymes, melancholy, and humor. W. C. Handy (1873–1958), the father of the blues, wrote “Memphis Blues” (1912) and “St. Louis Blues” (1914). Myriad musicians followed in singing the blues.

Classical Music
In urban areas, especially New York, families gathered in ethnically homogeneous neighborhoods for music on Sunday evenings. Concerts became more frequent as a family gathering. To circumvent laws prohibiting nonreligious activity on the Sabbath, many concerts included several sacred songs, and some were labeled sacred concerts. On Sunday nights, in many places, the family stayed home and sang or prayed. This was particularly true of the more conservative Protestant denominations. For purchasing music during the 19th century, many publishers used the nickel as a unit of price, enclosing in a circle the number of nickels required for purchase. Many of the pieces sold were medleys, called potpourris or fantasias, on famous tunes and opera themes, arranged for piano in a schmaltzy style. One of the earliest was Benjamin Carr's Federal Overture.

Instruments in the Home
During the 1780s and 1790s, the piano construction industry grew as American crafters built pianos comparable to those from Europe. In Boston, Benjamin Crehore (1765–1831) began his piano making in 1792; John Geib (1744–1818) of New York began his piano- and organ-building business in 1798. In wealthy homes, one found a piano. Other instruments were present as well, including violins, other stringed instruments, and winds. Homes of middle-class or rural families more often had a guitar or ukulele. While men gravitated toward playing the violin and flute, women were more often seen with the guitar, piano, or other keyboard instrument.
The banjo, derived from Senegalese roots, was found among the African American population, first during the period of slavery and then after emancipation. Eli Whitney, with his invention of interchangeable parts, also influenced the growth in purchases of
musical instruments. During the Civil War, several young men joined the military as drummers and buglers, some as young as 12. Several families founded businesses in music publishing and instrument making, among them Henry E. Steinway in New York, the Ditsons in Boston and New York, Schirmer (still active today in New York), Armand E. Blackmar in New Orleans, and George Root in Chicago.

Conveyance of Music
Music is conveyed by two principal means: first, the visual display of the musical notes to be performed; and second, the audio or audiovisual presentation of the performance. Sheet music was printed for the benefit of the masses and simplified for common use. Most of this printed sheet music displayed the standard staff notation seen in classical and church music today. But to simplify matters for less educated people, some music publishers issued broadsides with only the words appearing. Sacred music publishers printed in shape-note notation, most often for the singing of hymns. Here each note was represented by a shape (triangle, square, circle). Additionally, hymns were limited to a small number of tunes; congregants then had only to learn these few tunes and use multiple lyrics. Some early hymnals included only the text and named the tunes to be used. Publishers also created music with chord diagrams for the ukulele or other simpler instruments; others used chord names, a letter, and sometimes numbers to indicate harmony. Another format used for instruction was Sol-fa notation, a system of do re mi for moving the tonic of the scale. During the 20th and 21st centuries, musical sound has been conveyed electronically, largely a by-product of miniaturization and technological advances. Thomas Edison developed tinfoil cylinders in 1877; by 1889 he had made wax cylinders. Simultaneously, radio developed in its delivery of news, political broadcasts, theatrical programs, and music.
During the 1930s, many families would listen to commentary, political speeches such as Franklin D. Roosevelt's “Fireside Chats,” and various dance orchestras performing live on the radio. Many of the arrangements of this dance orchestra music were handwritten or transcribed. Reel-to-reel tape recordings, whose reels ranged from 7 to 10 inches in diameter, were primarily used by professionals. By 1898, Danish engineer Valdemar Poulsen (1869–1942) had built the first magnetic recorder. Resembling a cylinder phonograph, his Telegraphone recorded on a carbon steel wire that was wrapped around a brass cylinder. Vinyl records appeared in various speeds, colors, and dimensions. Seventy-eight rpm records (running at 78 revolutions per minute) were introduced in the 1920s, with a coarse sound. Long-playing records (LPs, at 33 1/3 revolutions per minute) were developed primarily during the 1930s. By 1948, Columbia, a major label, had introduced the modern LP for public consumption. Record players (phonographs) of the 1950s included components that allowed the playing of 78s, LPs, and the smaller 45 rpm records. By 1957, though, 78 rpm records gave way to the dominant LP industry. Audio cassettes were introduced by Philips in 1963. Intended to replace reel-to-reel tape recordings, they achieved this only in homes and with hobbyists. Professionals and those preserving archives continued to use reel tape. Eight-track tapes, alternatively called “stereo eight” or the Learjet cartridge, were developed in 1964 by William P. Lear (1902–78), owner of Learjet and the Lear Radio Company. Their short-lived existence allowed listeners to listen continuously to a set of music tracks, primarily in the country-western genre. Both cassettes and eight-track tapes declined with the development and marketing of compact discs and DVDs. One of the by-products of the online computer revolution has been streaming audio in various forms, such as MP3 and WAV (wave) files. Today families can access these from various commercial vendors and even educational institutions, with restricted access. This miniaturization has also fragmented the family's consumption of music, in that each individual can listen to the genre he or she prefers. The medium of television, during its rise in the 1950s, injected folk music into the urban landscape with artists such as Burl Ives, Harry Belafonte, and the Weavers.
Peter, Paul and Mary, Bob Dylan, and Pete Seeger produced many recordings during the 1960s.

Ralph Hartsock
University of North Texas

See Also: Education, Elementary; Games and Play; Radio: 1920 to 1930; Radio: 1931 to 1950; Radio: 1951 to 1970; Technology; Television.

Further Readings
Cornelius, Steven H. Music of the Civil War Era. Westport, CT: Greenwood Press, 2004.
Koskoff, Ellen, ed. Music Cultures in the United States: An Introduction. New York: Routledge, 2005.
Ogasapian, John. Music of the Colonial and Revolutionary Era. Westport, CT: Greenwood Press, 2004.
Ogasapian, John. Music of the Gilded Age. Westport, CT: Greenwood Press, 2007.
Starr, Larry and Christopher Alan Waterman. American Popular Music: From Minstrelsy to MP3. New York: Oxford University Press, 2007.
Tawa, Nicholas E. High-Minded and Low-Down: Music in the Lives of Americans, 1800–1861. Boston: Northeastern University Press, 2000.

Myspace

Myspace is a social network Web site that encourages users to establish and maintain connections with others through sharing photos and exchanging messages. Upon creating an account, Myspace users can build a profile and start communicating with others through the process of sending “friend requests.” Once friendship is confirmed between two users, they can view each other's social network activity. The virtual nature of Myspace allows people to overcome geographic and time barriers that limit other forms of human communication, as Myspace users can interact with each other 24 hours a day from any location that has an Internet connection. Since launching in 2003, the Myspace Web platform has created a way for families to stay in touch by sharing photos and messages using the Internet.

Employees of the Internet marketing company eUniverse founded Myspace in August 2003. Its initial creators, who included Chris DeWolfe, Brad Greenspan, Tom Anderson, and Josh Berman, developed the idea for Myspace from an existing social network site called Friendster that was primarily intended for romantic relationship formation. Among the various social network sites that have emerged since the 2000s, Myspace was one of the first to generate mass engagement and media attention, attracting 5 million registered users within its
first year of existence. In July 2005, News Corporation purchased the site. For the next three years, Myspace remained the most visited social network site, until its competitor, Facebook, overtook this top position. Since April 2008, the popularity of Myspace has continued to decline, despite several attempts to redesign the site to improve the organization of its interface and its usability. Although recent redesigns of Myspace reflect an effort to focus on music, the underlying purpose of the site is to encourage users to establish and maintain connections with others through viewing and posting photos, comments, links, and videos. Similar to other social network sites, a predominant attraction of Myspace lies in the way it allows people to communicate with other members and stay in contact with a large network of friends and acquaintances. Users can passively browse social information contained on the Myspace platform, as well as publicly or privately exchange messages with their friends. Upon registering for Myspace, users can construct a personal profile that features a main photo, a cover image, and a brief biographical statement. In offering opportunities to extensively customize one's profile, Myspace allows users to express their identities in a virtual format. Critics have suggested that Myspace supports users' desire to showcase their personalities and to exhibit themselves for the purposes of mass validation. Further, it allows individuals an outlet for self-expression that may be limited in physical public spaces that are inaccessible, oppressive, or unequipped with appropriate resources. In addition to self-expression, there are various reasons why people use Myspace. While the vast majority of participants simply log in to communicate with people they already know, other individuals and groups have leveraged the social network as a spamming device.
More specifically, Myspace serves a marketing purpose for various attention-seeking populations such as politicians, entrepreneurs, businesses, and musicians who want to broaden the reach of their message to attract new audiences and maintain current fans. Redesigns of the site cater especially to musicians, as reflected in Myspace's decision to embed music players on users' home pages that facilitate easy access to live streaming music. Additionally, Myspace encourages musicians to create
special profiles that are fully accessible to the general public, and they also assist artists in attracting audiences and enhancing engagement. Myspace has also been used for political purposes, as individual politicians and political groups have successfully generated various levels of media attention through strategic social networking activity. Beginning in the 2008 presidential election, electoral candidates used Myspace to communicate with voters. More specifically, this media platform was primarily used to promote voter registration, recruit campaign volunteers, and achieve more public exposure. In turn, this gave voters the opportunity to follow candidates on the campaign trail and learn more about their positions on issues. In regard to using Myspace for political action, researchers have commented on the social network site's capacity to level the playing field for democratic participation, as it is free for anyone to create an account. Because the online nature of social network sites allows people to overcome space and time barriers that inhibit other modes of communication, Myspace users can communicate with as few as one or as many as 1 million people for the same amount of money. But just because a message is made public does not mean that a mass audience will turn its attention to it. Trends in Web traffic show that gossip and celebrity profiles attract the most interest, while embarrassing photos and scandals have also garnered vast amounts of media attention. According to patterns of social behavior, Myspace users typically choose to connect with others with whom they share similar interests. In regard to family communication, the ability to search for family members' profiles and to view relatives' photos and status updates provides a convenient way for families living in disparate locations to keep in touch.

Stephanie E. Bor
University of Utah

See Also: Blogs; Facebook; Internet; YouTube.

Further Readings
Boyd, D.
“Can Social Network Sites Enable Political Action?” International Journal of Media and Cultural Politics, v.4/2 (2008).
Stenovec, T. “Myspace History: A Timeline of the Social Network's Biggest Moments.” Huffington Post (June 29, 2011). http://www.huffingtonpost.com/2011/06/29/Myspace-history-timeline_n_887059.html#s299496title=August_2003_Myspace (Accessed August 2013).
Wilkinson, D. and M. Thelwall. “Social Network Site Changes Over Time: The Case of Myspace.” Journal of American Society for Information Science and Technology, v.61/11 (2010).

Myth of Motherhood

The myth of motherhood is an idealization deeply embedded in American society. The myth states that motherhood is instinctual, that it is totally fulfilling for a woman, and that the mother is the best care provider for her children. Mothers are held responsible for every aspect of their children's development, including physical, emotional, and social development. The myth of motherhood is a relatively new notion. Prior to the Industrial Revolution in the late 18th and early 19th centuries, both men and women were involved in child rearing as well as in production labor. However, when the Industrial Revolution divided domestic and economic labor into separate spheres, women became solely in charge of raising the children while the men worked outside the home. The myth of motherhood still exists today, as women are still expected to have children and raise them well. Now, though, a majority of mothers are employed and men are increasingly involved in their children's lives, yet the myth endures, which often leads to mother guilt.

The term myth of motherhood was first used by Rachel Hare-Mustin and Patricia Broderick in 1979. The myth dictates that motherhood is instinctual (women have an innate maternal instinct) and that the mother is the best care provider for the child. The myth also states that becoming a mother is a socially prescribed role that a woman must adopt because women's identities are tied to their role as mother. Having a child is believed to fulfill a woman unlike any other experience and to fulfill her totally. Additionally, it is believed that to be a “good mother” a woman must be physically present for her children, and all other pursuits should be secondary to meeting the children's needs and being there for
the children. A mother's role is one of self-sacrifice, placing everyone else's needs above her own. Society's idealization of motherhood does not necessarily apply to all women. Mothers who have children outside marriage and those who give birth to children and then place them for adoption are not idealized by the myth. It is also viewed as unnatural for a woman not to wish to have children. The idea that mothering is instinctive is the most pervasive component of the myth of motherhood, and it stems from the fact that women, unlike men, are biologically equipped to give birth. Because of a woman's ability to give birth to a child, it is believed that the mother instinctually knows how to provide the best care for her children. The father is not considered to be naturally bonded to the child and, therefore, is assumed to be unable to adequately care for the child. This premise is often used to justify how family life is ordered, with women taking on the majority of the child-rearing responsibilities and men having few responsibilities for the children. Because mothering is all-encompassing and the mother is viewed as the best caregiver for her children, it is also believed that the mother is responsible for her children's mental health and behavior, for better or for worse. For example, mothers have been blamed in the clinical literature for a number of mental health disorders and by the mass media for juvenile delinquency.

History of the Myth of Motherhood

Although the myth of motherhood is widely believed, many instances in the historical past challenge the idea that mothering is instinctual. In the late 18th century in France, a vast majority of the babies born in Paris were sent to outlying areas to be fed and raised by wet nurses until the child was between two and five years old. In some cases, the mother rarely saw the child during these early years. This example contradicts the notion that mothering is instinctual and that the mother is the only person capable of raising the child.

The idealized view of motherhood portrayed by the myth of motherhood is a relatively new concept. It was not until the Industrial Revolution in the late 18th and early 19th centuries, when the domestic and economic spheres were separated, that motherhood in its current state was established. The agrarian economy that existed in colonial America in the 17th and early 18th centuries allowed child-rearing tasks to be completed in and around the home, where work also took place. Men’s and women’s work often overlapped, with both males and females involved in domestic and production labor. With the Industrial Revolution, the economic and domestic spheres were separated, leaving men to work outside the home in jobs requiring skilled labor and women to work within the home with the primary job of raising the children.

In the 19th century, the birthrate decreased, and the focus on motherhood and providing high-quality child care through intensive mothering increased. Thus, every aspect of children’s development, including physical, emotional, and social, was the responsibility of the mother. In the early 20th century, there was an influx of child care experts, primarily men. At this point, women were expected to consult books written by medical doctors and psychologists (for example, Dr. Spock), as well as their children’s medical doctors, to learn how best to care for their children. While this seems to be in opposition to the belief that mothering is instinctual and that mothers know best how to care for their children, it was still upheld that mothering was the woman’s most important duty, that the mother was the best person to care for her child, and that she must raise a happy and healthy child.

Current View of the Myth of Motherhood

The myth of motherhood still exists at the beginning of the 21st century. Mothers are still considered the best care providers for their children, and motherhood is still expected to be fulfilling and instinctual. The myth is continually represented in the media as a celebration of motherhood, but in reality motherhood is depicted by the media in ways that are idealized and unattainable for the average mother. For instance, there are TV shows, magazines, books, Web sites, and blogs that show ways in which mothers can provide the best for their children. To be a “good mother” today, it is believed that the mother needs to breastfeed, make her own organic baby food, use cloth diapers, make the child’s birthday cake, serve as a chaperone for field trips, and serve as the president of the PTA (Parent-Teacher Association), all with a smile on her face. Mothers also need to make sure that children are properly scheduled so they have opportunities
to learn how to play a sport and an instrument and to excel at these ventures. Mothering is still viewed as totally fulfilling for a woman, and women are often depicted in the media as loving every aspect of mothering, even the challenging times.

Currently, over 70 percent of American mothers with children under the age of 18 are employed outside the home. While mothers are at work, someone other than the mother is caring for the children. Unlike in the recent historical past, fathers are now increasingly involved in raising their children, participating in their children’s day-to-day activities and daily care. Although the increased involvement of fathers in the lives of their children contradicts the idea that mothers are the only people capable of caring for their children, the myth of motherhood continues to exist. Although fathers are increasingly involved in their children’s lives, it is not a requirement. Fathers are required to serve as the breadwinner, but not as the nurturer. Instead, the involvement of fathers in children’s daily activities and care is oftentimes viewed as a father going above and beyond his responsibilities. Fathers who are involved in their children’s lives are often put on a pedestal, while a woman who is not heavily involved in the daily care of her child is demonized.

Although maternal employment and increased father involvement are viewed by some as an indication that the myth of motherhood no longer exists, there are still signs that the myth is alive and well today. There has been a huge increase in the amount of research conducted on work–life balance. The vast majority of this research, however, has focused on how women balance their work and family responsibilities, with little focus on how men balance work and family.
Similarly, researchers have investigated the impact of maternal employment and separation from mothers on children, but research on paternal employment’s effect on children is nonexistent. It is still assumed, therefore, that men are supposed to work outside the home and be the breadwinners for the family, while it is viewed as outside the norm for women to work outside the home instead

of taking care of the children. When women do work, society is continually interested in how their children are coping without their full-time presence. Additionally, many working women still deal with guilt over the fact that they cannot always be with their children. This guilt arises because mothers are not always able to live up to the idealized view of mothering described in the myth of motherhood. The motherhood myth continues even though there is a great deal of research contradicting the notion that a mother is the only person who can adequately care for her children. Researchers have found that children of all ages are not harmed by maternal employment or by being cared for by someone other than their own mother.

Although times have changed, it is still evident that the myth of motherhood is alive and present in America today. The myth is far-reaching: it is reflected in belief systems, is depicted in the media, and influences laws, policies, and practices relating to procreation, pregnancy, abortion, adoption, breastfeeding, child care, maternal employment, and maternity leave.

Melinda Stafford Markham
Kansas State University Salina

See Also: Intensive Mothering; Marital Division of Labor; Mommy Wars; Mothers in the Workforce; Parenting.

Further Readings
Braverman, Lois. “Beyond the Myth of Motherhood.” In Women in Families: A Framework for Family Therapy, Monica McGoldrick, Carol M. Anderson, and Froma Walsh, eds. New York: W. W. Norton, 1989.
Douglas, Susan and Meredith Michaels. The Mommy Myth: The Idealization of Motherhood and How It Has Undermined All Women. New York: Free Press, 2004.
Hare-Mustin, Rachel and Patricia Broderick. “The Myth of Motherhood: A Study of Attitudes Toward Motherhood.” Psychology of Women Quarterly, v.4/1 (1979).

N

National Affordable Housing Act

The American Dream for many families is a home with a white picket fence in a safe neighborhood. The economic reality for many families, however, keeps this American Dream perpetually out of reach. The stark contrast between the dream of homeownership and the economic reality of affordable housing inspired the National Affordable Housing Act, signed into law by President George H. W. Bush on November 28, 1990. The National Affordable Housing Act boldly states in Section 101 that it “affirms the national goal that every American family be able to afford a decent home in a suitable environment.”

The National Affordable Housing Act, which authorized two large grant programs, is best understood within its historical context. Public and affordable housing began in the Progressive era and the New Deal programs of the mid-1930s to address the needs of low-income, or no-income, families left homeless during the Great Depression. The Franklin D. Roosevelt administration responded by passing the Housing Act of 1937, authorizing federal funds and the Public Works Administration to work with local housing authorities to build public housing and to offer rental assistance. The Roosevelt administration also

created the Federal Housing Administration with the National Housing Act of 1934 to stabilize home mortgages by providing low-interest, long-term mortgage loans. In the 1940s and 1950s, affordable housing focused on the needs of veterans returning home from World War II. Veterans purchased new homes with Federal Housing Administration (FHA) loans in neighborhoods outside the cities. Families with the financial resources to leave the cities left in droves in a process often called “white flight” or “middle-class flight.” During the first half of the century, efforts at affordable housing attempted to provide families with a step out of poverty and assistance in securing home loans.

During the 1960s and 1970s, the public housing projects built in the 1930s and 1940s were opening up to families without the resources to move to suburbia. The projects became dense concentrations of poverty that exacerbated the issues facing low-income families by compounding the ill effects of crime, low employment, homelessness, and low civic engagement. By the late 1980s and 1990s, public housing projects had become insular urban ghettos increasingly closed off from the rest of urban life. Compounding the problem, many low-income families saw housing costs consume more than 50 percent of their gross incomes. At the same time, most cities
experienced a chronic shortage of rental vouchers available for low-income families. Without a family or individual wage sufficient to afford the high cost of rent, many people experienced homelessness. Unfortunately, homelessness often increases the likelihood of substance abuse, mental illness, violence, and trauma, which in turn make it more difficult for families to make their way out of poverty. Crime and homelessness became crises for American cities, and it was clear to lawmakers that a new approach was needed. In 1990, the passage of the National Affordable Housing Act offered a new approach to affordable housing by fostering public/private partnerships and empowering residents of public housing to transition into homeownership.

Title II of the National Affordable Housing Act fosters public/private partnerships with the HOME Investment Partnerships Program. The program allocates formula grants to states and qualifying local jurisdictions to use with local not-for-profit organizations to create affordable housing for low-income households through a variety of activities, including rental assistance, housing rehabilitation, purchasing housing, and building new housing units. The HOME Investment Partnerships Program is the largest block grant program in the United States, allocating close to $2 billion a year to participating jurisdictions. The HOME program is flexible, empowering communities to design appropriate local solutions to increase the stock of affordable housing. It also encourages local governments to develop community resources for housing assistance by requiring a 25 percent match for all federal funds, and it strengthens private-sector and nonprofit partnerships with a fixed set-aside allocation and technical assistance grants.
Title IV of the National Affordable Housing Act authorized the Homeownership and Opportunity Through HOPE Act, which established three grant programs: HOPE I, HOPE II, and HOPE III. The HOPE programs were planning and implementation grants designed to pave the way for tenants of public housing to become homeowners themselves by transferring public housing units from public to private ownership. The HOPE programs allowed for cooperative ownership arrangements between tenants and not-for-profit management

A house with a white picket fence is the American dream for many families. The stark contrast between the dream of homeownership and the economic reality of affordable housing inspired the National Affordable Housing Act.

companies. The grants also authorized the rehabilitation of properties if needed to transfer ownership. The few requirements of the HOPE programs were that eligible families had to be tenants of the property and could not have an income that exceeded 80 percent of the local median income. The HOPE Act also authorized a YouthBuild program through which economically disadvantaged youths receive educational and employment opportunities with local housing agencies. The original HOPE grant programs have since been replaced with other HOPE programs authorized under various legislative acts.

The National Affordable Housing Act ushered in an approach to public housing that facilitated dynamic partnerships to meet housing needs in an urban environment. The legislation helped pave the way for President Bill Clinton’s National
Housing Strategy and a revitalized HOPE VI program of urban renewal that continued strengthening public–private partnerships. The HOPE VI program fostered a new vision of urban communities, with mixed-income neighborhoods and retail space within walking distance of residences.

Daniel Blaeuer
Florida International University

See Also: Homelessness; White Flight; Working-Class Families/Working Poor.

Further Readings
Cisneros, Henry and Lora Engdahl, eds. From Despair to Hope: HOPE VI and the New Promise of Public Housing in America’s Cities. Washington, DC: Brookings Institution Press, 2009.
HOPE VI. http://portal.hud.gov/hudportal/HUD?src=/program_offices/public_indian_housing/programs/ph/hope6 (Accessed September 2013).
National Affordable Housing Act. http://www.hud.gov/offices/adm/hudclips/acts/naha.cfm (Accessed September 2013).

National Center on Child Abuse and Neglect

In 2011, U.S. state and local child protective services (CPS) received approximately 3.7 million reports of child abuse or neglect, with the number of victims estimated at 676,569 (9.2 per 1,000 children). The actual count is likely much higher, considering that many occurrences of child abuse and neglect are never reported. Some studies indicate that roughly one in five children in the United States may at some time in their lives suffer from child maltreatment. The number of child fatalities resulting from abuse and neglect in 2011 was estimated at 1,570 children, or 2.1 per 100,000 children. The National Center on Child Abuse and Neglect, a division of the Children’s Bureau of the U.S. Department of Health and Human Services, works to improve the odds for America’s children.

Federal law defines child abuse as “any recent act or failure to act on the part of a parent or caretaker
which results in death, serious physical or emotional harm, sexual abuse or exploitation; or an act or failure to act, which presents an imminent risk of serious harm” (42 U.S.C. §5101).

The federal Children’s Bureau, which originated in 1912 with the objective of helping to protect children from abuse, gained successes in the 1960s when the medical profession and the media became more aware of instances of child abuse and neglect; by 1967 nearly all states had implemented legislation requiring doctors to report suspected child abuse to police or child welfare agencies. In the mid-1970s, the Child Abuse Prevention and Treatment Act of 1974 (CAPTA) led states to take action to reduce the occurrence of physical child abuse, neglect, and sexual abuse. In addition to improving investigation and reporting, CAPTA funds supported training, regional multidisciplinary centers focused on child abuse and neglect, and demonstration projects. In 1975, the National Clearinghouse on Child Abuse and Neglect Information was formed to create and distribute training materials to those working in child welfare. The National Center on Child Abuse and Neglect (NCCAN) was established within the Children’s Bureau to administer the CAPTA program and related funding, which helped to shape the current system of child protective services. With the 1978 Child Abuse Prevention and Treatment and Adoption Reform Act, the U.S. Advisory Board on Child Abuse and Neglect was created to manage and direct NCCAN projects, such as NCCAN’s User Manual Series, which offers a multidisciplinary approach to prevention for those in many fields, from nursing to law enforcement.

In the 1980s, incidents of child abuse and neglect increased, and large numbers of children were placed in out-of-home care. CAPTA grant funding, allocated by the NCCAN, enabled the Children’s Bureau to provide research as well as prevention, identification, and treatment programs.
NCCAN was also required to create a national clearinghouse to share the information and several National Resource Centers to provide training and technical assistance to states on specific areas of child welfare. Among other prevention programs and services that developed under NCCAN’s management is Child Abuse Prevention Month, which began in 1983 and continues today. Two additional legislative actions strengthened efforts to prevent child abuse
and neglect. The Child Abuse Amendments of 1984 (P.L. 98-457) created the National Clearinghouse on Family Violence Prevention to work with the clearinghouse overseen by NCCAN. The Omnibus Reconciliation Act of 1986 (P.L. 99-509) authorized the National Adoption Information Clearinghouse. The Child Abuse Prevention, Adoption, and Family Services Act of 1988 created the Inter-Agency Task Force on Child Abuse and Neglect, empowering the NCCAN to subcontract the operation of a child abuse information clearinghouse. Grants received in 1986 enabled the Children’s Bureau to fund several topical National Resource Centers (NRCs), including those supporting family-based services, legal resources for child welfare programs, and youth services.

In 2006, the National Clearinghouse on Child Abuse and Neglect Information and the National Adoption Information Clearinghouse consolidated to form the Child Welfare Information Gateway. The Child Welfare Information Gateway is a national service of the Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services, and an excellent source of comprehensive information and resources to safeguard children and strengthen families. While connecting child welfare, adoption, and related professionals and the public to reliable and current information, resources, and tools, the Child Welfare Information Gateway promotes the safety, permanency, and well-being of children, youth, and families.

Joel Fishman
Duquesne University Center for Legal Information
Allegheny County Law Library
Karen L. Shephard
University of Pittsburgh

See Also: Child Abuse; Child Advocate; Child Safety; Children’s Bureau; Domestic Violence; Family Therapy.

Further Readings
Centers for Disease Control and Prevention. “Child Maltreatment Facts at a Glance, 2012.” http://www.cdc.gov/violenceprevention/pub/CM_datasheet.html (Accessed December 2013).
Children’s Bureau Express. “Centennial Series: CB’s Clearinghouses and National Resource Centers.” CBX, v.13/11 (December 2012/January 2013). https://cbexpress.acf.hhs.gov/index.cfm?event=website.viewArticles&issueid=142&sectionid=1&articleid=3723 (Accessed December 2013).
Myers, John E. B. “A Short History of Child Protection in America.” Family Law Quarterly, v.42/3 (Fall 2008). http://www.americanbar.org/content/dam/aba/publishing/insights_law_society/ChildProtectionHistory.authcheckdam.pdf (Accessed December 2013).
U.S. Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children’s Bureau. “Child Maltreatment 2011.” http://www.acf.hhs.gov/programs/cb/research-data-technology/statistics-research/child-maltreatment (Accessed December 2013).
U.S. Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children’s Bureau. “Child Welfare Information Gateway.” https://www.childwelfare.gov/can (Accessed December 2013).

National Center on Elder Abuse

As the average life span in the United States rises, so too does the need to provide care to older adults. This need can sometimes result in elders being placed in abusive situations. There are different forms of elder abuse, including financial, emotional, and physical abuse. The National Center on Elder Abuse (NCEA) provides a definition of elder abuse, warning signs of abuse, and guidance on what can be done to prevent elder abuse. The NCEA falls under the umbrella of the Administration on Aging, which is part of the U.S. Department of Health and Human Services.

Elder abuse is a health and human rights issue that often goes unnoticed. One aim of the NCEA is to raise awareness about elder abuse by maintaining updated information for practitioners and the public. Information on research, training, best practices, and resources is available through the NCEA.

Defining elder abuse is a challenge. The legal definition of elder abuse may vary from state to state. Also, the various types of elder
abuse have specific definitions that vary from state to state. The types of elder abuse as defined by NCEA are as follows: physical abuse, which is inflicting, or threatening to inflict, physical pain or injury on a vulnerable elder or depriving the elder of a basic need; emotional abuse, which is inflicting mental pain, anguish, or distress on an elder person through verbal or nonverbal acts; sexual abuse, which is nonconsensual sexual contact of any kind or coercing an elder to witness sexual behaviors; exploitation, which is illegal taking, misuse, or concealment of funds, property, or assets of a vulnerable elder; neglect, which is refusal or failure by those responsible to provide food, shelter, health care, or protection for a vulnerable elder; and abandonment, which is the desertion of a vulnerable elder by anyone who has assumed the responsibility for care or custody of that person. While these types of abuse are listed separately, it is not uncommon for an elder to be a victim of multiple types of abuse at the same or different times.

Health and Human Rights Issue

Elder abuse should be considered an important health and human rights issue, yet it is often under-recognized and underreported. It is important for society to recognize that elder abuse can and does happen, and that it can happen to anyone. Elders of all cultures, races, and economic statuses are vulnerable to varying forms of elder abuse. Women and the very old are the most likely victims. Elder abuse is commonly inflicted by a caregiver or other trusted person in the elder’s life.

It is important to be aware of possible signs of elder abuse in an effort to report and help decrease its incidence. Physical warning signs of abuse include unexplained marks, bruises, or burns. Warning signs of neglect include dehydration, lack of medical care, malnutrition, and pressure ulcers or bedsores. Signs of emotional abuse include unusual behaviors, changes in alertness, and withdrawal from usual activities. Signs of sexual abuse include unexplained sexually transmitted diseases and bruising around the breasts and/or genital area. Warning signs of exploitation include changes in wills, loss of property, sudden changes in financial accounts, and unusual bank withdrawals.

Awareness building and education are key to preventing elder abuse. Education for elders should be provided so that they may know their rights as well as available resources. Specific prevention methods for elders include remaining active to decrease social isolation; taking care of their health to maintain some ability to care for themselves; planning for the future; seeking professional help for depression, alcohol, and other drug abuse; and knowing their rights. Prevention methods for others include raising awareness about this health and human rights issue; advocating on the issue; volunteering in programs that support elders; and participating in or creating local World Elder Abuse Awareness Day (WEAAD) events on or around June 15 each year. WEAAD creates an opportunity for communities worldwide to highlight the issue of elder abuse through community events that raise awareness of the significance of elder abuse and neglect as a health and human rights issue.

Martha L. Morgan
Alliant International University

See Also: Assisted Living; Caregiver Burden; Caring for the Elderly; Elder Abuse.

Further Readings
Administration on Aging. http://www.aoa.gov (Accessed December 2013).
Eldercare Locator. http://www.eldercare.gov (Accessed December 2013).
International Network for the Prevention of Elder Abuse. http://www.inpea.net (Accessed December 2013).
National Center on Elder Abuse. http://www.ncea.gov (Accessed December 2013).
Santos, E. J. and D. A. King. “The Assessment of Elder Abuse.” In Handbook of Assessment in Clinical Gerontology, P. A. Lichtenberg, ed. Boston: Academic Press, 2012.

National Child Labor Committee

Since its inception in 1904, the National Child Labor Committee (NCLC) has worked to advance the rights, welfare, and worth of children, especially as they are related to work, the workplace, and education. As
the 20th century began, many adults were troubled by the number of working children and the lack of protections for the welfare of these children. This concern fueled the founding of the NCLC as well as its incorporation by Congress in 1907. Throughout its more than 100-year history, the NCLC has sought to safeguard children, ensuring work protections and access to education, and the organization continues to work toward this mission. This nonprofit, funded by foundations as well as corporate and individual donations, works on occupational safety and health for the young, education efforts, child labor laws, and employment and training for youth. Its overarching focus is creating growth opportunities for young people and helping them develop into productive citizens who are ready for the work world after graduating from school. A central focus is at-risk youth, and the NCLC collaborates with a number of other organizations in meeting its goals. Collaborators include other nonprofits, corporations, and private and public agencies.

History

The organization grew out of concerns regarding working children. After its chartering by Congress, Lewis Wickes Hine, a photographer with interests in social justice through social reform, was hired by the NCLC to document child labor conditions. Hine’s photographs raised awareness, increased knowledge, and moved individuals to action on child and youth labor issues. The NCLC relatively quickly succeeded in its campaign for a federal Children’s Bureau, established within the U.S. Department of Commerce and Labor and later moved to the U.S. Department of Labor. Toward its goal of outlawing most types of child labor, the organization widely distributed Hine’s photos and simultaneously advocated mandatory education for all U.S. children. The next 30 years were filled with some successes and some disappointments, but in 1938 Congress passed the Fair Labor Standards Act, which included key child labor guidelines developed by the NCLC.
After World War II, the NCLC began work on issues of youth employment and training. By the mid-1950s, the organization had begun advocating on behalf of the children of migrant farmworkers living across the United States. NCLC efforts resulted in several acts passed in the mid-1960s, including the Economic Opportunity Act.

The NCLC played a key role in the establishment of the National Youth Employment Coalition in the late 1970s, including housing the coalition and having its director serve as chair. In the 1990s, the NCLC’s Kids and the Power of Work (KAPOW) program began. This program, which creates collaborations between primary schools and businesses, has subsequently expanded, received considerable praise, and made strides in educating children about work, workplaces, and working. Today, the NCLC continues its efforts to prevent child labor and to engage children and youth on issues of employment and workplace training. It also furthers recognition of these issues through its concern with migrant workers and their children as well as through the Lewis Hine Awards. Headquartered in New York, the organization’s primary goals today are teaching children and youth about work, eliminating mistreatment of children and youth in work environments, ensuring that the children of migrant farmworkers have access to education and health care, and increasing public attention to such issues.

Kids and the Power of Work

The KAPOW program of the NCLC pairs primary schools with business partners. Through educational programs and lessons delivered by community volunteers who work in area businesses, schoolchildren learn about careers and the world of work. KAPOW also facilitates site visits to workplaces, with experiential activities for children. These partnerships have benefits for children, teachers, schools, businesses, volunteers, parents, and participating communities. The overarching focus of KAPOW’s educational programming is career awareness and exploration and the development of SCANS skills, or those skills specified by the Secretary’s Commission on Achieving Necessary Skills (SCANS). Examples of program foci include communicating effectively, working in teams, and eliminating stereotyping. The volunteers provide both relevant, true-to-life examples and opportunities for interactive work.
KAPOW accomplishes several objectives, including educating primary school students about different types of work areas and opportunities, skills needed for employment, and applying classroom knowledge at work as well as motivating students through experience (site visits to workplaces, activities on-site). Conducted throughout
the school year, KAPOW reinforces information covered during monthly visits by volunteers with materials integrated into the curriculum.

Lewis Wickes Hine and the Lewis Hine Awards

Hine used his photographic skill and the art of photography to garner public attention for the treatment of child laborers. His photographs, illuminating the experience of children in workplaces such as sweatshops, canneries, and mines and the squalid conditions in which they worked, were important in securing support for child labor laws. Further, his photos conveyed that such work shortchanged children, depriving them of education, health, future opportunities, and, in fact, many of the qualities and experiences of a typical “childhood.” His work is credited with elevating issues of child labor to the national consciousness, and he is regarded as one of the most prominent social justice photographers of his time. Hine’s efforts were crucial in leveraging NCLC goals, such as state and federal protections for children’s rights.

To recognize Hine’s instrumental contributions, the NCLC later created annual awards, named the Lewis Hine Awards for Service to Children and Youth, honoring individuals with similar convictions. Each year, 10 awards are given to individuals who have made remarkable strides in improving the lives of children and youth. Half of the awardees are volunteers; the other half are professionals.

Joy L. Hart
University of Louisville

See Also: Child Labor; Children’s Rights Movement; Education, Elementary; Fair Labor Standards Act; Primary Documents 1943.

Further Readings
Encyclopedia Britannica. “Lewis W. Hine.” http://www.britannica.com/EBchecked/topic/266474/Lewis-W-Hine (Accessed August 2013).
Kids and the Power of Work. http://www.kapow.org (Accessed August 2013).
National Child Labor Committee. http://www.nationalchildlabor.org/index.html (Accessed August 2013).
Sampsell-Willmann, Kate. Lewis Hine as Social Critic. Jackson: University Press of Mississippi, 2009.


Trattner, Walter I. Crusade for the Children: A History of the National Child Labor Committee and Child Labor Reform in America. Chicago: Quadrangle Books, 1970.

National Council on Family Relations
The National Council on Family Relations (NCFR), founded in 1938, is the oldest multidisciplinary organization focused solely on family research, practice, and education. As a professional association, NCFR is dedicated to understanding and strengthening families through its members’ efforts in scholarship, outreach, and policy. NCFR members come from more than 35 countries and all 50 U.S. states and include researchers, demographers, marriage and family therapists, parent/family educators, university faculty, students, social workers, public health workers, extension specialists and faculty, early childhood family education teachers, clergy, counselors, and kindergarten-through-12th-grade teachers.

History
From the beginning, NCFR has embraced multiple scholarly and practical perspectives. The organization was formed in 1938, when Paul Sayre, a law professor at the University of Iowa, contacted sociologist Ernest W. Burgess about starting a national conference on the state of families. Burgess knew of the work of Rabbi Sydney E. Goldstein, a family therapist who had chaired the New York State Conference on Marriage and the Family, and the three met with other charter members in Chicago in April 1938 to begin planning the first National Conference on Family Relations, as the organization was then called, to be held later that year. The founders envisioned the new association as one that offered an interprofessional forum to provide “opportunities for individuals, organized groups, and agencies interested in family life to plan and act together on concerns relevant to all forms of marriage and family relationships; establish professional standards; promote and coordinate educational and counseling efforts; and encourage research.”



The first annual conference met on September 17, 1938, in New York City, and its theme was “The Contribution of the Family to the Cultural Wealth of the Nation.” Conferences for the next 15 years focused on relationships, parenting, work and family life, and family-related education, reflecting the effects of the Great Depression, World War II, and socioeconomic changes through the 1950s, especially in the United States. During its first 50 years, conference participants and honorees included Eleanor Roosevelt, U.S. Senator Paul Wellstone, Coretta Scott King, anthropologist Margaret Mead (an early member), and novelist Pearl S. Buck.

Organization Details
In 1947, the name of the organization was changed to the National Council on Family Relations. Headquartered at the University of Chicago from its formation, the organization moved its administrative offices to Minneapolis, Minnesota, in 1955. Initial membership in 1938 was about 200 people; since 2000, NCFR has averaged about 3,500 members annually. As of 2013, about two-thirds of members were women, about 35 percent were students, more than half were researchers or faculty with a Ph.D., and about 40 percent held the certified family life educator credential offered through NCFR.

The organization is governed by a nine-person elected board of directors. In addition, NCFR sponsors 10 sections, or member interest groups: Education and Enrichment (family life education); Ethnic Minorities; Families and Health; Family Policy; Family Science (college teaching); Family Therapy; Feminism and Family Studies; International; Religion and Family Life; and Research and Theory.

Scholarship and Research
NCFR publishes three scholarly journals (in conjunction with academic publisher Wiley Blackwell), conducts an annual conference, and provides specialized workshops and training on research methodology and best practices.

The Journal of Marriage and Family has been a leading research journal in the family field for over 75 years. The journal features original research and theory, research interpretation and reviews, and critical discussion concerning all aspects of marriage, other forms of close relationships, and families. Family Relations: Interdisciplinary Journal of Applied Family Studies began publication in 1951 and became an NCFR journal in 1968. Family Relations publishes articles on basic and applied research focusing on diverse family forms and issues. Founded in 2009, the Journal of Family Theory & Review encourages integration and growth in the multidisciplinary and international domains of inquiry that define contemporary family studies.

The NCFR Annual Conference has been held since 1938. The conference includes hundreds of peer-reviewed submissions delivered in symposia, paper, poster, and workshop formats. Sessions are organized around topics such as relationships, parenting, child and adolescent development, family therapy and counseling, health, religion and families, economics, impact, diversity, family policy and legislation, family education, and college teaching. Many workshops cover “hands-on” and professional development skills in presentation, teaching, research methods, and leadership. The conference meets in November for four days and also includes business meetings and special-interest sessions. Since 1971, the Theory Construction and Research Methodology (TCRM) workshop has been an annual preconference event that provides a collegial forum for the discussion, development, and refinement of theory and methods relevant to the study of families.

Education, Outreach, Connections
Since its founding, NCFR has provided resources for family researchers, educators, therapists, policy makers, and others who study or work on behalf of families.
As a part of its education and research efforts, NCFR has created research compilations and many applied publications and materials, including the following:

• The Handbook of Family Life Education, Volumes 1 and 2
• Vision 2000
• Understanding Families Into the New Millennium: A Decade in Review
• Decade in Review—Journal of Marriage and Family
• Tools for Ethical Thinking and Practice in Family Life Education, first compiled by the Minnesota Council on Family Relations and now published through NCFR

NCFR and its members provide the curricular and pedagogical framework for college-level teaching in academic disciplines that focus on families, such as family science, family studies, and human development. Best practices, research methodology, course design and delivery, lesson plans, syllabi, learning objectives, and useful articles and instructional materials are accessible through a variety of means, including conference sessions; the Professional Resource Library, a comprehensive, sortable online archive; Webinars; and member forums and discussion boards facilitated by NCFR.

The National Council on Family Relations (NCFR) Institutional Identity Hierarchy. NCFR is an advocate for family well-being and a resource for providing research, practice, and education.

State and regional family councils have been an integral part of the makeup and culture of NCFR, especially in the beginning, when the founders relied on local organizations to keep activities going and interest high. These affiliate councils have conducted state and regional meetings throughout NCFR history. In 1973, the Affiliated Councils body was created within NCFR’s governance structure to provide a means for networking among the local groups. In 2014, there were seven state councils and three regional affiliates made up of 17 states and two Canadian provinces, as well as 23 student (university) affiliates.

Awards and Fellows
Twenty major awards are offered annually or biannually through NCFR. Awards recognize achievement in research, teaching, service, and contributions to the association. More than half of the



awards are directed to students, particularly focusing on research projects, dissertations, and academic papers. Another 20 awards are presented through NCFR sections, with a focus on research and leadership in specific areas such as feminist perspectives, health, family policy, family therapy, and families and religion.

Research on a national level is recognized through two awards presented under the aegis of the Research and Theory Section. The Ernest W. Burgess Award and the Reuben Hill Award, each named for an early leader in NCFR, recognize, respectively, lifelong contributions to research and theory development and the outstanding journal article that combines theory and methodology in the analysis and interpretation of a significant family issue.

NCFR Fellow status is the organization’s highest form of recognition for its members’ scholarship, teaching, outreach, practice, and professional service, including service to the organization. Initiated in 1998 with a charter group of 12 distinguished members, NCFR has conferred Fellow status on a total of 107 organization leaders and scholars.

Promoting Family Life Education
NCFR established and administers an internationally recognized credential, the certified family life educator (CFLE) designation, first approved in 1985. Approximately 125 college and university family studies degree programs in the United States and Canada incorporate NCFR family life education certification standards into the curriculum for their undergraduate and graduate students. The credential demonstrates competence in 10 core areas that relate to understanding families, relationships, and interactions among family members; human development; family management and resources; policy and social issues; and professional practice and ethics.

Informing Policy
As a nonprofit, nonpolitical professional association, NCFR does not take stands on policy issues. However, NCFR is an advocate for family well-being and a resource for providing research, practice, and education. NCFR researchers and educators interpret and disseminate information on families to inform legislators and other decision makers about the possible effects of policy on families. Over the years, NCFR members have had a strong record of involvement in family



policy and education, as exemplified by the following:

• The White House called upon NCFR and its leaders three times to help implement conferences around families. Conference action in 1953 helped lead to the creation of what would become the cabinet position for Health, Education, and Welfare.
• NCFR sponsored family impact analysis seminars throughout the 1980s for staffers of congressional members.
• After more than 10 years as a United Nations nongovernmental organization (NGO), NCFR gained consultative status in the United Nations Economic and Social Council in 2001, making NCFR part of this worldwide 2,700-organization network and enabling NCFR to offer assistance to the work of the council.
• The organization has prepared research-based and peer-reviewed policy briefs on certain issues in order to help build understanding and awareness and to inform discussion. Topics during the 1990s and 2000s have included Social Security reform, family preparedness, work–life balance, family trauma (released immediately after Hurricane Katrina in 2005), and community building for military families.

Into the 21st Century
By 2001, NCFR journals and all their archived articles had moved online through Wiley Blackwell’s technical division. Zippy News, the weekly e-newsletter, became an instant success that same year and has grown to almost 10,000 subscribers just over a decade later. NCFR also maintains a Web site, through which it distributes its member communications and journals.

Throughout its history, NCFR has provided a platform for members to share their scholarship and outreach efforts with other members and to showcase members’ work to society at large. For example, in 2013, in celebration of the NCFR 75th anniversary, researchers Pauline Boss and Stephanie Coontz, both longtime NCFR members and widely published authors, were featured speakers at the annual conference. Reflecting on their life’s work in studying families, each spoke on her signature topic—Boss on grief and the concept of ambiguous loss, and Coontz on the significant changes in families and relationships in modern America. Their writings, in both scholarly publications and the popular press, are a reflection of how NCFR members influence and educate the world about families, relationships, and human development.

Charles Cheesebrough
National Council on Family Relations

See Also: American Family Association; Council on Contemporary Families; Family Life Education; Family Research Council; White House Conference on Families.

Further Readings
Arcus, Margaret E., Jay D. Schvaneveldt, and J. Joel Moss. Handbook of Family Life Education. Thousand Oaks, CA: Sage, 1993.
Czaplewski, Mary Jo and Jason Samuels, eds. “NCFR History Book.” http://history.ncfr.org (Accessed April 2014).
Milardo, Robert M. Understanding Families Into the New Millennium: A Decade in Review. Minneapolis, MN: National Council on Family Relations, 2000.
National Council on Family Relations. http://history.ncfr.org (Accessed April 2014).
Walters, James and Ruth Jewson, eds. The National Council on Family Relations: A Fifty-Year History, 1938–1987. Minneapolis, MN: National Council on Family Relations, 1988.

National Partnership for Women and Families
Established in 1971 as the Women’s Legal Defense Fund, the National Partnership for Women and Families (NPWF) is a nonprofit organization that uses litigation, public education, and advocacy to promote fair work practices, improve the quality of health care, and help men and women meet the demands of work and family. The NPWF has been at the forefront of legislation related to women, health, and family for the last 30 years. Judith L. Lichtman was the organization’s first full-time employee and became the partnership’s president in 1988. The organization became the National Partnership for Women and Families in 1998.

NPWF is best known for its role in the nine-year struggle to pass the 1993 Family and Medical Leave Act, which guaranteed eligible workers job protection and unpaid leave to care for newborn or newly adopted children, to deal with serious family illness, or to recover from a serious health condition. The partnership has many prominent supporters, including First Lady Michelle Obama and former Secretary of State Hillary Clinton. NPWF now concentrates its efforts on five interconnected arenas: promoting family-friendly work environments, protecting fairness in the workplace, ensuring quality health care, advocating on behalf of reproductive rights, and monitoring judicial appointments.

Family-Friendly Workplace
Since 1985, the NPWF has served as a dedicated advocate in promoting family-friendly workplace policies that expand workers’ access to job protection, flex time, paid sick days, and family and medical leave. From 1985 to 1993, the organization spearheaded the battle to pass the Family and Medical Leave Act (FMLA). An NPWF staff member authored the bill guaranteeing eligible workers 12 weeks per year of unpaid leave to care for a newborn or newly adopted child, to deal with serious family illness, or to recover from a serious health condition. In 1993, the FMLA was the first bill that President Bill Clinton signed into law during his presidency.

The partnership continues to work to expand the coverage and benefits of the FMLA. Less than 50 percent of the workforce is currently eligible for family and medical leave, and although the FMLA provides 12 weeks of leave, many workers cannot afford to take unpaid leave. Through the National Partnership and Campaign for Family Leave Income, the NPWF is working to expand eligibility to include more workers and to promote national paid leave programs.
Workplace Fairness
The NPWF also works to expand and protect civil rights, equal opportunity employment, and fair pay practices. In 1977, it financed, litigated, and won Barnes v. Costle. This U.S. Court of Appeals decision reaffirmed that any retaliation by a boss against an employee for rejecting sexual advances violates the prohibition on sex discrimination in Title VII of the Civil Rights Act. In 1978, it lobbied to enact the Pregnancy Discrimination Act, making workplace discrimination based on pregnancy, childbirth, or related medical conditions illegal.

Since the 1980s, NPWF has worked to eliminate the pay gap between men and women through public education, litigation, and legislative efforts. In 1982, it launched a public education program about wage discrimination titled “It Pays to Be a Man.” It led the efforts to enact the Lilly Ledbetter Fair Pay Act of 2009, which reinstated protections against pay discrimination undermined by the Supreme Court decision in Ledbetter v. Goodyear Tire & Rubber Co.

Reproductive Rights
NPWF was at the forefront of efforts to repeal the Mexico City policy. Popularly known as the “global gag rule,” this U.S. foreign policy stipulated that nongovernmental organizations receiving U.S. assistance could not use separately obtained funds to provide legal abortions or advocate for legalizing abortion. Although the policy allowed exemptions in cases of proven rape, incest, or endangerment of the life of the mother, it did not allow for consideration of the woman’s physical or mental health. In 2009, President Barack Obama repealed the policy.

NPWF now works on a number of campaigns to give women access to a full range of reproductive health information and services. Its online campaign Reproductive Health Watch compiles media coverage of proposed and passed state and federal legislation, ballot initiatives, and litigation efforts affecting access to reproductive health services. It also develops sex education materials and promotes access to sex education among U.S. youth, working to counteract abstinence-only education movements.

Health Care
The NPWF lobbied on behalf of the Children’s Health Insurance Program Reauthorization Act of 2009, which extended and expanded the State Children’s Health Insurance Program (SCHIP). The act provided $32.8 billion over four and a half years both to maintain existing coverage for approximately 7 million children and to expand coverage to an estimated 4.1 million additional children. NPWF also provided technical support for the 2010 Patient Protection and Affordable Care Act.



The NPWF continued to work with the Campaign for Better Care and Americans for Quality Health Initiatives, a coalition of consumers, health care workers and providers, community organizations, religious organizations, disability rights organizations, and other citizens, to help create and advocate for an accessible, quality health care system.

Judicial Watch
The NPWF has initiated a campaign supporting judicial nominations that advance these causes. The organization was highly influential in the campaign for Supreme Court Justice Elena Kagan. It continues as a vocal advocate in Supreme Court cases that involve sexual harassment, reproductive rights, workplace discrimination, and full application of the Family and Medical Leave Act.

Juandrea Bates
University of Texas at Austin

See Also: Civil Rights Act of 1964; Family and Medical Leave Act; Health of American Families; Maternity Leaves; Planned Parenthood; Social History of American Families: 1981–2000.

Further Readings
Gerstel, Naomi. “Job Leaves and the Limits of the Family and Medical Leave Act: The Effects of Gender, Race and the Family.” Work and Occupations, v.26/4 (November 1999).
Kamerman, S. B. and A. J. Kahn. “Child and Family Policies in the United States at the Opening of the Twenty-First Century.” Social Policy and Administration, v.35 (2001).
National Partnership for Women and Families. “Issues and Campaigns.” http://www.nationalpartnership.org (Accessed September 2013).

Native American Families
Native Americans traditionally lived in extended families that included parents, their children, grandparents, uncles, and aunts, and many still live that way today. Relatives could live together in one dwelling or in close proximity to each other. In a sense, there were no orphans in traditional Native American societies, as there was always an extended family member to take care of children who lost their parents.

In 2013, the U.S. Federal Register listed 597 federally recognized tribes and 66 state-recognized tribes. In addition, there are some 630 recognized First Nations governments or bands in Canada. There is great diversity among Native Americans, including more than 200 languages that can be as different from one another as English is from Chinese. For the Inuit (also known as Eskimo) in the far north and on the Pacific coast, hunting and fishing were basic to life; in the southwestern United States, the Pueblo Indians depended on farming and hunting and the Navajo on herding sheep; on the Great Plains, buffalo were a main source of food; and in the eastern part of North America, farming and hunting were mainstays. How groups of Native Americans found food and shelter affected their religious beliefs and many other cultural aspects of their lives.

Depending on the availability of food, some tribes lived in villages where the whole village raised the children, while other tribes, such as the Navajo and Piute, lived in scattered small nomadic groups of extended families. Unlike in the nuclear families common in North America today, Navajo children can still be expected to address all their mother’s sisters as mother. Various taboos were also practiced by different groups, many of which are no longer widely observed today. For example, Navajo men were not to speak to their mothers-in-law and were to avoid their presence. Lakota boys were never to speak directly to their sisters or female cousins. In some tribes, uncles were expected to be the disciplinarians, which made it easier for parents to get along with their children.

Family Traits
While there is great diversity among American Indian tribes, anthropologists have documented similarities as well. Respect for elders, who were the keepers of wisdom and knowledge in oral cultures, was one universal. The elders had learned, and thus could teach the young, the skills needed to live off the land: knowing which plants are edible, how to hunt various forms of game, and other knowledge crucial to survival. Grandparents could have more authority in the raising of children than the children’s parents did.



Many Native Americans shared common values. For example, a list of Alaska Athabascan values includes care and provision for the family, family relations, love of children, village cooperation and responsibility to the village, and honoring ancestors. Generally, there were expectations of sharing, cooperation, hard work, and respect within extended families, clans, and tribes. Both humility and generosity were valued across Native North America. Concerning personal cleanliness, Blackfeet and other Plains Indians swam in streams even in winter. Many other tribes, including southwestern tribes in arid areas, used sweat lodges to cleanse themselves both physically and spiritually.

In child rearing, training for survival was fundamental to Native education. Babies could be taught not to cry by cutting off their air supply, preventing them from revealing a band’s location to an enemy. The struggle for survival taught Native Americans humility. Knowledge of tribal heritage was another key part of children’s education. Through ceremonies, storytelling, and apprenticeship, children learned the culture of their tribe. Play was also educational. Some games, called “the little brother of war,” taught boys how to handle weapons and developed physical endurance. Girls could play at carrying out household activities they would later take on as adults.

Early Christian missionaries from Europe commented on how Native Americans were loving parents and grandparents who avoided physically punishing their children and were permissive in their child-rearing practices. Child-rearing practices were often characterized by permissiveness and noninterference, which today can lead to Native American parents telling their children that if they do not want to go to school, then they do not have to go. Contrary to European practice, in many tribal cultures the use of corporal punishment to discipline children was unacceptable. Indian children were taught that enduring pain without showing emotion was a symbol of maturity; therefore, using pain as a form of punishment did not make sense, and physically punishing a child could lead to a loss of courage. Discipline was instead enforced through teasing, ostracism, and peer pressure, and through tribal stories that described how children who ignored tribal custom were severely punished by supernatural powers. Puberty ceremonies for girls and initiations for boys into adult religious and social groups



were common. Adolescents in some tribes were expected to fast for several days on vision quests, where they sought spiritual guidance for their lives. Pueblo villagers expected children to “stand in” rather than “stand out” and to conform to Pueblo cultural teachings. Boys received extensive religious instruction preceding their initiations into adult societies. However, similar to other Native American groups, there was room for nonconformity and individual differences. There tended to be accepted roles for tribal members who were different. For example, We’wha (1849–96), a Zuni, filled a traditional role, now described as mixed-gender or Two-Spirit, wearing a mix of male and female clothes, doing women’s work, and serving as a mediator. The Lakota had “contraries” who did things opposite of other tribal members, including riding backward on a horse into battle.

Some tribes were patriarchal and some matriarchal. In matriarchal cultures, such as the Navajo, children were born into their mother’s clan, and it was taboo to marry anyone from the same clan. There were expectations of mutual support among members of the same clan. In some tribes, women played important leadership roles at a time when non-Indian women could not own property, vote, or hold public office. For example, a Navajo man went to live with his wife’s family, and that family owned the sheep and the hogan. If the couple split, the wife kept the house and sheep, and the husband left with his horse and saddle.

Among the Blackfeet, Navajo, and some other tribes, polygamy was practiced by some tribal members, with a man often marrying sisters. Part of the reason for polygamy was the difficulty a widow or other unmarried woman had surviving on her own. Special circumstances could lead to particular practices. For example, the Crow tribe tended to spoil young males because, surrounded by the Blackfeet, Cheyenne, Sioux, and other enemies, they might not live long.

Housing depended on climate and the availability of resources. Inuit made igloos out of blocks of compacted snow; Plains Indians often lived in tipis made of buffalo skin stretched over poles erected in the shape of a cone. Navajo lived in wood hogans, and Pueblo Indians lived in multistory apartment houses built of adobe and wood. The Iroquois (Haudenosaunee) of the northeastern United States and the various tribes on the Pacific coast lived in wooden longhouses occupied by several families related by clan.



Disruption
The coming of immigrants from Europe caused major disruption to Native American families. Immigrants from Europe often killed Native Americans or pushed them onto less fertile lands, where they sometimes starved. Virgin-ground epidemics of measles, smallpox, and other diseases brought by immigrants, to which Native Americans lacked immunity, also ravaged Indian societies and made many open to the teachings of Christian missionaries, who at times claimed the diseases were their God’s punishment of unbelievers.

The missionaries sought to assimilate Indians into European culture and criticized Native American cleanliness, religious ceremonies, and a perceived lack of discipline among their children. However, they often made little effort to truly understand Native American cultures, including learning their many languages. Some missionaries declared that Indian languages were the language the devil spoke, and they preached that all the ancestors of Native Americans were in hell because they were not Christians. Their efforts disrupted and divided Indian communities and families as some members converted to Christianity while others maintained their Native religions.

Missionaries, and later the U.S. and Canadian governments, found that they were more successful converting Native American children if they could remove them from their families and educate them in boarding schools, where they were often punished for speaking their Native languages. Children thus educated often had great difficulty adjusting to living back in their home communities as adults, and they were often rejected by non-Indians because of pervasive racism that stereotyped all Native Americans as inferior savages.

Some Native Americans were forcibly removed from their families to attend government-funded boarding schools in both Canada and the United States. These schools were chronically underfunded and used student labor to provide food and clothing and to maintain the schools into the middle of the 20th century, by which time the practice violated some child labor laws. Many children received an inferior education in boarding schools, where they were taught that Native Americans were savages who needed to forget their languages and cultures to become “civilized,” which led to a breakdown of families. In addition, students ordered around by boarding school employees did not learn parenting skills, which later affected their own children negatively. This lack of child-rearing skills is aggravated today by poverty, as lands and traditional means of livelihood have been lost.

After World War II, social workers in the United States and Canada, judging Native American families by white cultural standards, forcibly removed many Native American children from their homes and put them up for adoption or into foster care with non-Native parents, where many of these children did not thrive. Native organizations, including the militant American Indian Movement, were highly critical of this practice in the 1970s, and in 1978 the U.S. Congress passed the Indian Child Welfare Act, which promoted keeping Native children in Native families.

Native American Families Today
Native Americans in North America continue to face poverty, high dropout rates from schools, and other social challenges, including family breakdown. Many communities, though not all, have high suicide rates. One recent study in British Columbia found that villages that had lost their language had suicide rates six times higher than villages that had better retained their indigenous language. Various Native American language and culture revitalization movements today seek to reverse the family disintegration caused by assimilationist government policies.

Today, more than half of Native Americans in North America live off reservations or reserves, where, despite laws to the contrary, they still face discrimination, including attending “English-only” public schools that do not support, and even still discourage, the use of Native American languages. However, since World War II the various United Nations human rights conventions and declarations have been used to promote Native American self-determination, allowing Native people to retain their cultures, partially assimilate, or totally assimilate, depending on their desires. In the United Nations Convention on the Rights of the Child, which entered into force in 1990, “States Parties agree that the education of the child shall be directed to . . . the development of respect for the child’s parents, his or her own cultural identity, language and values.” Only Somalia and the United States have
not ratified this convention. The United Nations 2007 Declaration on the Rights of Indigenous Peoples recognizes the following:

. . . in particular the right of indigenous families and communities to retain shared responsibility for the upbringing, training, education and wellbeing of their children, consistent with the rights of the child . . . indigenous peoples and individuals have the right not to be subject to forced assimilation or destruction of their culture . . . the right to revitalize, use, develop and transmit to future generations their histories, languages, oral traditions, philosophies, writing systems and literatures, and to designate and retain their own names for communities, places and persons.

Jon Reyhner
Northern Arizona University

See Also: Extended Families; Health of American Families; Nuclear Families; Primary Documents 1894.

Further Readings
Anderson, Karen. Chain Her by One Foot: The Subjugation of Native Women in Seventeenth-Century New France. New York: Routledge, 1991.
Briggs, Jean L. Never in Anger: Portrait of an Eskimo Family. Cambridge, MA: Harvard University Press, 1970.
Eastman, Charles Alexander. Indian Boyhood. New York: McClure, 1902.
Fournier, Suzanne and Ernie Crey. Stolen From Our Embrace: The Abduction of First Nations Children and the Restoration of Aboriginal Communities. Vancouver, Canada: Douglas and McIntyre, 1997.
Linderman, Frank B. Pretty Shield: Medicine Woman of the Crow. Lincoln: University of Nebraska Press, 1972.
Pettitt, George A. Primitive Education in North America. Berkeley: University of California Publications in American Archaeology and Ethnology, 1946.
Standing Bear, Luther. Land of the Spotted Eagle. Boston: Houghton Mifflin, 1933.
United Nations. “Convention on the Rights of the Child, 1990.” http://www.ohchr.org/EN/ProfessionalInterest/Pages/CRC.aspx (Accessed July 2013).
United Nations. “Declaration on the Rights of Indigenous Peoples, 2007.” http://www.un.org/esa/socdev/unpfii/documents/DRIPS_en.pdf (Accessed July 2013).


Natural Disasters

Natural disasters have always been a part of the experience of families and are experienced in every region of the United States. Families have suffered, escaped, endured, recovered, and rebuilt as a result of natural disasters. Throughout U.S. history, families have reacted to these disasters in a wide variety of ways. The story of natural disasters in the United States is largely the story of the strength of American families and their ability to persevere, and even to prosper, despite the onslaught of natural forces that wreak havoc in the lives of families.

Types of Natural Disasters Affecting American Families
The kinds of natural disasters that affect the lands of America are many. From the oceans come hurricanes and other tropical and nontropical storms, striking first the coasts and then moving inland, sometimes hundreds of miles. These storms cause damage from high winds, a storm surge of rising ocean waters, and extreme rain. Infrequent tsunamis can also strike most of the coastlands of the nation. Large sections of the nation are prone to being struck by violent tornadoes, which come with little warning and can produce wide swaths of extreme devastation. In addition to the floods resulting from storms at sea, there are floods caused by rains, melting snow, and the occasional dam or levee break. Floods from excessive rain can take the form of flash floods in steep terrain, rivers no longer contained in their banks, and extreme street flooding where drainage systems, unable to handle the large amounts of rainfall, back up, sometimes flooding many man-made structures. Thunderstorms and straight-line winds can cause disastrous damage to structures, forests, and crops. Thunderstorms can also lead to hailstorms, which cause serious damage to buildings, vehicles, and crops. Winter weather can lead to massive blizzards, inundating large areas with deep snow.
Avalanches often occur in isolated mountain regions but can cause massive damage and sometimes loss of life. Ice storms cause widespread power outages when ice weighs down and breaks power lines. These ice storms can also lead to disastrous road conditions and large-scale damage to timber. Mudslides, often caused by heavy rains, can inflict heavy damage on communities in the paths of these sliding hillsides.



Several areas of the United States, especially those in close proximity to major fault zones, are prone to earthquakes. Alaska and California are well-known earthquake risk areas; however, sections of the country near the Mississippi River and the New Madrid Fault are also at risk for major earthquakes. Drought has been a major cause of natural disaster. The weather conditions leading to the Dust Bowl of the 1930s caused the migration of large numbers of U.S. families who were no longer able to provide for themselves due to the drought and its attendant widespread dust storms. Wildfires strike each year, especially in the western United States. These fires can be natural, when caused by lightning strikes, or started by human beings. Heat waves can also become natural disasters. When excessive heat affects those without sufficient ways to keep cool, the elderly especially suffer. Weather-related natural disasters appear to be on the rise. Scientists debate the causes of this increase, with many attributing it to global warming.

Families React to Disasters
Natural disasters can have dramatic consequences for families. In fact, a natural disaster can become a defining moment in a family. The worst-case scenario in such a disaster is the death of one or more members of a family. Sometimes the disaster is so widespread that many families face the death of loved ones. Whether the death is isolated or widespread, the impact on individual families is similar. Family members go through the processes of grief, often beginning with shock and disbelief. The death leads the family through the process of remembering their loved one and burying the body. The loss of one or more family members creates a great need for adjustments. The family may experience a loss of income and have to adjust to a new income level. If a parent has died, the children will go through the adjustment process.
Children are often more resilient than adults in dealing with the death of family members. If a child has died, parents must face the difficult process of going on after this kind of loss. Housing and living arrangements of families often change as a result of natural disasters. The loss of a physical structure or home creates the first change. New permanent or temporary arrangements must be made. The new measures may be as simple as

putting a temporary tarp over a damaged roof or as involved as finding a different living arrangement. Often a temporary solution is for families to move in with other family members. This can be nearby or, in the case of a widespread disaster, with family members in more remote locations. Extended families can find themselves living together. This creates opportunities for these extended families to grow closer, but it can also create tensions as the pressures of new living arrangements manifest themselves. Disasters can also multiply multigenerational living arrangements through the need for new housing. Whereas multigenerational homes were common in earlier periods of U.S. history, recent history has seen a reduction in this phenomenon. The pressure placed on families in the face of natural disasters has led to at least a temporary increase in these homes after a natural disaster. Various family age groups react differently to the trauma of natural disasters. Senior adults often face the most serious consequences, especially family members in nursing homes or hospitals. Often the emergency preparedness procedures of these facilities require an evacuation, and this kind of evacuation of sick and weak individuals is understood to be risky. Sometimes other family members will come and evacuate their loved ones themselves. At other times family members are unable or unwilling to do so. This leaves the facility with the choice of keeping residents in the facility or evacuating everyone. Mass movements of patients often lead to physical decline and sometimes the death of those evacuated. Children are also vulnerable to the trauma of natural disasters. Even if they are not physically harmed, the changes necessitated by a natural disaster can lead to adjustment issues for children.
Great care should be given to children, who may be neglected amid the stress parents face in disasters.

The Impact of Specific Disasters on Families
Many natural disasters have been recorded in the history of the United States. The San Francisco earthquake of April 1906 was the most devastating earthquake in U.S. history. In the earthquake and the fires it caused, up to 3,000 people lost their lives. The fires resulting from the quake lasted up to four days. Over half of the population of San Francisco was made homeless. Families would





Two oil portraits and an undamaged china platter are among the few belongings that Ann and Curtis Lopez could find after Hurricane Katrina demolished their home in 2005. Their four-generation family lived in three FEMA travel trailers while they tried to reestablish a normal life. One of the largest natural disaster relocations in U.S. history resulted from the storm and its related flooding.

not have had the resources available to those who suffer such losses in modern U.S. history. There was no Federal Emergency Management Agency (FEMA), and other organizations that now aid in U.S. disaster settings were nonexistent or in their infancy. The Dust Bowl of the 1930s was a different kind of natural disaster, but it too affected large numbers of American families. The increased farming of many plains grasslands led to increasing erosion of valuable topsoil. During this period an extended drought led to increased erosion, especially from the strong winds that are frequent in the center of the country. Dust storms were frequent as the topsoil, no longer held in place by native grasses, was blown into the air. The drought and loss of soil, along with the severe economic conditions of the Great Depression, led many to abandon farming on the plains. Large numbers of families, especially from Texas and Oklahoma, began a migration farther west. Many of them made their way to California. They faced continued difficulties even in their new homes.

Hurricane Katrina was possibly the worst natural disaster in U.S. history. Almost 2,000 people died in New Orleans and other parts of southern Louisiana, Mississippi, and Alabama. Hundreds of thousands of homes were destroyed or seriously flooded in the immediate impact of the storm surge or, as in the case of New Orleans, after the breach of canal levees. One of the largest natural disaster relocations in the history of America resulted from the storm and its related flooding. The impacts on families were significant and long lasting. The first came in the face of the impending storm. Each family had to decide whether to evacuate. The decision-making process varied from family to family, based in part on the family’s risk tolerance. This risk tolerance could have resulted from prior experiences with impending storms, education or the lack thereof, or underlying family characteristics; often at least one family member felt that the costs of evacuation outweighed the perceived risk. Families who made the decision to evacuate had to bear the expense



of travel, while those who chose to stay faced the trauma of being trapped in flooded buildings and often the added trauma of being housed temporarily in spaces, such as the New Orleans Superdome, where conditions were barely survivable. Almost all families in the zones flooded by Katrina, especially in New Orleans, had to find temporary housing in other states. The rest of the country was especially hospitable to the Katrina evacuees who arrived in their towns and cities. Many shelters were opened in public buildings and churches, and some families stayed for weeks in these shelters. Some were initially transported to U.S. Army bases around the country, from which churches and others opened up shelters for them. Many people were able to find temporary shelter with relatives outside the flood zone, and some families were even housed by nonrelatives. As time progressed, FEMA began to cover temporary housing costs for evacuees. Some lived in hotels and motels for many months. Others rented apartments and houses, with FEMA either reimbursing them or paying directly. FEMA also began providing small travel trailers for people to live in, either on their own property while their houses were being cleaned and repaired or in FEMA trailer parks. The FEMA trailer parks were especially for those who had no property or for some reason were not able to put a trailer on their own property. Some families that owned flooded homes were able to receive grants to help in the rebuilding process. In the process of rebuilding, many families were taken advantage of by unscrupulous contractors. A large number of families were unwilling or unable to return to New Orleans or other nearby areas after Katrina. Some of these families were renters who could not afford the higher rents in place in New Orleans after recovery began. Others saw Katrina as a way to make a new start in a new location.
Those families that did not return were often joined in their new cities or towns by other family members, though families were also often separated after Katrina. Superstorm Sandy struck New Jersey, New York, and surrounding areas at the end of October 2012. The impact on families in the region was massive. Over 600,000 buildings were damaged or destroyed by the winds, flooding, and fires caused by the storm. Over 350,000 structures were affected in the areas in and around New York City. As in

the case of New Orleans and Hurricane Katrina, many families were deeply affected by the storm. Housing became an issue, with many losing their homes either temporarily or permanently. The help offered by governments and by nonprofit organizations enabled many families to find temporary housing and to begin the process of rebuilding damaged homes. Recent years have demonstrated the disastrous effect that tornadoes can have on families. Many tornado outbreaks have torn wide swaths through numerous towns and cities, among them Tuscaloosa, Alabama; Joplin, Missouri; and Moore, Oklahoma. Homes were destroyed, schools were leveled, and family members were killed. As a result of such tornadoes, many families that had no storm shelter or safe room have begun installing one in their homes. In a time of seemingly increasing frequency of natural disasters, families are looking for ways to protect themselves.

Ken B. Taylor
New Orleans Baptist Theological Seminary

See Also: Death and Dying; Food Shortages and Hunger; Food Stamps; Homelessness; Shelters.

Further Readings
Bradley, Arthur T. Handbook to Practical Disaster Preparedness for the Family. 3rd ed. North Charleston, SC: CreateSpace, 2012.
Egan, Timothy. The Worst Hard Time: The Untold Story of Those Who Survived the Great American Dust Bowl. Boston: Houghton Mifflin, 2005.
Hackbarth, Maria. “Natural Disasters: An Assessment of Family Resiliency Following Hurricane Katrina.” Journal of Marital and Family Therapy, v.38/2 (2012).
Hodges, Deidra. Hurricane Katrina: One Family’s Survival Story. Self-published, 2011.
Miller, Paul A. “Families Coping With Natural Disasters: Lessons From Wildfires and Tornados.” Qualitative Research in Psychology, v.9/4 (2012).

Natural Families

For several decades, the United Nations has identified the family as the basic unit of society. However,
many feel that over those decades, the natural basis of the family has slowly eroded away, leaving society with families that miss the mark of what families should be. Consequently, over the last several years, there has been increased interest in natural families. In one way or another, “natural families” have existed for years, but the concept began to be formally defined in the mid-1990s and has grown steadily since that time. A natural family is one that strives to live in a way that is free of anything unnatural. This includes parenting style, birthing and medical care, chemical avoidance, eating organic foods, and living life outdoors. Those who strive to live this way believe that returning to a more natural way of living will bring increased health, happiness, and success in life. For many, the idea of a natural family reflects the desire to live a simpler life. Many natural families believe that the average family focuses on all the wrong things, neglecting proper care of children, proper health, and proper nutrition. The overworked and overscheduled family cuts bonding time with children in the name of returning to work sooner or participating in various activities. The age of electronics also keeps children and parents indoors instead of outdoors; many modern children would rather play a video game than play outside. The age of chemicals allows manufacturers, instead of farmers, to produce food. These foods often lack the nutrients that humans need for health and vitality. All of these things and more create a very unnatural family. Natural families try to return family life to the way they feel it was meant to be. The movement in U.S. society toward natural families attempts to restore what has been lost.
By turning away, sometimes radically, from what family life has become, natural families hope that their family bonds will be stronger and their family life richer because of their adherence to what family life used to be. Once thought of as strange or odd, natural families have become increasingly common. There are support groups, blogs, and societies devoted to helping those striving to live as a natural family. Natural families share what they have learned with others who are still discovering, creating a large community of families all committed to the same goal.



Attachment Parenting
Proponents of attachment parenting believe breastfeeding is a superior way to provide nutrition to an infant. Infant formula, while considered an acceptable way to provide nutrition, lacks the closeness and bonding of breastfeeding. Often, natural families will breastfeed for extended periods, believing that forcing the child to stop will damage the parent–child attachment. Attachment parenting also involves co-sleeping, or having a family bed, in the belief that having as much family time together as possible is crucial for a happy family. Similarly, many natural families will engage in baby-wearing, wherein a child is attached to a parent and goes with the parent through daily activities. This gives the child and parent more time together and allows the young child to participate in all the activities. The purpose of attachment parenting is to bond securely with the child. John Bowlby and Mary Ainsworth theorized that children who are securely attached to a caregiver use that attachment as a foundation from which to explore the world. Attachment parenting therefore aims to create as secure an attachment as possible, giving the child the best possible foundation from which to learn about the world. Extended breastfeeding, co-sleeping, and baby-wearing all allow for maximum time with the child in a nurturing, supportive way. Another important part of attachment parenting is the concept of “responsive parenting.” Responsive parenting allows the child to provide cues for the parents to follow. For example, the child will tell the mother when to stop breastfeeding by asking for the breast less and less until the child does not ask for it at all.

Home Birth
Many natural families are strong supporters of natural or home births. From a natural family perspective, birth is a natural occurrence and should be shared with family rather than take place in a hospital, with its potential exposure to disease, away from family.
Those living as a natural family note that babies have been born since time began. Only comparatively recently have babies “needed” to be born in hospitals with the assistance of doctors. Historically, babies have been birthed with the assistance of another mother or a midwife. Natural families believe that a home birth creates less stress



on the mother and less stress on the baby, allowing the beauty and uniqueness of the situation to permeate the experience. While natural birthing is considered ideal, most natural families recognize that there are circumstances that prevent the possibility of a home birth. In these scenarios, parents often find birthing centers the best alternative because they provide the needed care while being more receptive to the wishes of the parents. Hospitals, while well intentioned, are often unable to attend to special requests from parents who want a more natural birth experience.

Green Living
Many natural families are concerned with “green living,” that is, living a life that involves as few chemicals as possible. Chemicals are found in everything from toothpaste and mouthwash to plastic spoons and metal cans. With new chemicals being created at a rapid rate, natural families consider it prudent to avoid as many chemicals as possible. Little research conclusively demonstrates that daily exposure to chemicals is safe. Natural families therefore err on the side of caution and avoid anything that contains synthetic compounds. This practice, arguably, allows the body to care for itself instead of trying to purify the unnatural compounds that most people eat or expose themselves to, resulting in less stress on the liver and kidneys, the two organs that filter most of what enters the human body. Living life in a way that avoids as many chemicals as possible contributes to a more natural way of life. Some natural families go as far as to seek out natural clothing. Many modern clothing items contain synthetic materials such as nylon and spandex, and even most of the cotton grown today is genetically modified and therefore unnatural. Natural families seek non–genetically modified cotton and other natural fibers such as wool or silk for their clothing and other items.
Many natural fiber options exist, but they are often more expensive than their synthetic counterparts. Some natural families raise sheep or alpacas and use their fiber to create yarn for clothing.

Organic Nutrition
Related to green living, striving for organic nutrition is another important part of any natural family. Organic foods are grown without the use of pesticides or artificial fertilizers. Many families

have become increasingly concerned over the years with the amount of pesticides used on food and, more recently, with genetically modified food, such as genetically modified corn that produces its own pesticide. They are also concerned with “foods” that are actually more chemical than food (i.e., most processed convenience foods). Natural families believe that the increase in the number of chemicals on or in food is responsible for the increases in health problems that plague Western civilization. Many common health concerns are diet related, and natural families believe that food can be nature’s medicine. According to natural families, clean living creates a multitude of health benefits. Organic nutrition is also about getting the correct balance of foods. For decades, society has been told to eat lots of fruits and vegetables and to limit fats and sugars. In reality, most people eat too much meat and dairy and not enough fruits and vegetables. Most natural families emphasize the correct balance of foods, eating some of everything to maximize nutrient intake. Eating a variety of fruits and vegetables easily contributes to organic eating because most meat and dairy are infused with hormones and raised on genetically modified feed. While some fruits and vegetables are genetically modified, most are not. Therefore, eating organic vegetables is as easy as finding a farmer who uses organic fertilizers and minimal pesticides. Another important component of organic nutrition is “eating local.” Many of the fruits and vegetables in grocery stores are coated in pesticides because they are grown in large commercial facilities that ship the produce all around the world. This also means that much of the produce in grocery stores is less nutrient dense because it has traveled a great distance, ripening in the back of a truck. Eating local means finding a local farmer to supply the produce for the household.
Natural families have the opportunity to meet the farmer and see the produce. Because the produce was picked recently, it retains more nutrients, making it a better choice than truck-ripened standard produce. Families can also see the condition of the farm and verify how the produce is grown.

Natural Remedies
Similar to organic nutrition are natural remedies. Because natural families are interested in consuming and using things that are chemical free and as
natural as possible, health care becomes difficult. Most medical doctors prescribe drugs to treat an ailment; natural families seek remedies that are natural in origin instead of man-made chemicals. For many natural families, essential oils are a critical part of their health maintenance. Essential oils are extracts of various plants, with various health-promoting properties depending on the plant they came from. Those seeking a natural, chemical-free life will use these oils to treat common ailments just as a medical doctor would prescribe a medication. Often, essential oils provide symptom relief without the side effects of traditional medication. Other important natural remedies are yoga and meditation. Those who practice yoga and meditation regularly often find increased health and vitality as a side benefit. Many believe that these activities focus the body’s energy in a way that allows the body to heal itself. Like essential oils, yoga and meditation often bring symptom relief with no side effects. The last important natural remedy is diet. While this may seem obvious, most families do not eat enough vegetables. Vegetables, especially green leafy vegetables, are loaded with phytochemicals and nutrients that human bodies need to be healthy. A healthy body means a more effective immune system, resulting in less sickness and disease. Like other natural remedies, eating enough green leafy vegetables keeps one healthy without negative side effects.

Outdoors
Another important component of being a natural family is living in the outdoors. Natural families feel that most people spend too much time inside offices, schools, and homes, missing out on the opportunity to be outside with nature. Being outside is important for two main reasons. First, being outside encourages exercise, while being inside encourages a sedentary life.
Maintaining a regular regimen of exercise promotes strength and vitality, allowing the body to heal itself and perform better. Second, being outside allows for exposure to the sun and to natural bacteria that keep bodies healthy. Too often, beneficial microbes are killed by antibacterial hand soaps and other products, when in reality human bodies need these microbes to survive. According to natural families,



people gain stronger immune systems and have better overall health when they spend time outside instead of inside.

Joel Touchet
University of Louisiana

See Also: American Family Association; Attachment Parenting; Attachment Theories; Bowlby, John; Breastfeeding; Child-Rearing Practices; Family Values; Nuclear Family; Parenting Styles.

Further Readings
Jarvis, B. and D. Pibel. “How to Get Carbon-Free in 10 Years.” Green Living Journal (2009). http://www.greenlivingjournal.com (Accessed September 2013).
Natural Life Magazine. http://www.naturallifemagazine.com (Accessed February 2014).
Raatma, Lucia. Green Living: No Action Too Small. Minneapolis, MN: Compass Point, 2010.

Nature Versus Nurture Debate

The nature versus nurture debate encompasses old yet still perplexing questions concerning the source of human individuality. What makes a person—characteristics innate to the individual and present at birth, or the conglomeration of environmental factors accumulated over time and experience in society? Is there some quality or human nature common to all people? Beliefs about the sources of human individuality and personal improvement have shaped the contours of American social history since before the Revolution. Enlightenment ideas about human nature and the ability of the mind to learn and of societies to improve formed the philosophical basis for Americans’ experiment in democracy. Over the country’s first century, belief in political and social meritocracy combined with the professionalization of the social sciences to render a cultural emphasis on nurture. Late-19th-century innovations in cell science challenged this direction, however, and by the early 20th century, Progressive era medical practitioners and social reformers pressed for



close attention to, and even legislation of, acceptable inherited traits. As might be expected, however, the pendulum continued its swing for the next 100 years, extending while testing the boundaries of both nature and nurture. Despite the development of scientific means to study both sources of influence, the best answer as to the source of human heterogeneity remains the elusive “both.”

The Eighteenth and Nineteenth Centuries
Thinkers have tried to define the source and characteristics of human nature since early times. Aware of this debate, founders of the American republic emphasized the universality of reason and asserted optimistically that the new government they created spoke to this rational capacity. They gleaned these attitudes from Enlightenment thinkers such as John Locke (1632–1704), David Hume (1711–76), and Jean-Jacques Rousseau (1712–78), who argued that humans were alike enough in their basic nature to be the subject of philosophical and scientific inquiry. Human diversity, on the other hand, was the result of variables such as historical era, geographical setting, and social structure. Scholars could study both human nature and its modifications, as they were subject to the same laws of cause and effect as the physical world. It is on this last point that Enlightenment thinkers argued that humankind could be improved, setting in motion an emphasis on education that has become a hallmark of the modern era. The notion of improvement was based on Locke’s argument for the empirical nature of the human mind in An Essay Concerning Human Understanding (1689). That is, in addition to basic “human nature,” the individual mind is, at birth, a tabula rasa, or a blank slate onto which experience is impressed. This faculty accounted for the environmental and experiential differences from one person to the next. Locke’s empiricism became one of the cornerstones of American democracy and its attendant emphasis on the role of the family in Republican life.
The architects of the American Republic created a government for a society structured not by wealth and accidents of birth (a monarch and aristocrats), but upon meritocratic principles and talents. When this philosophy met the Victorian era’s glorification of the home and the role of parental nurture therein, the stage was set for a near century of

emphasis on education and other types of nurture. After all, it was not necessarily the scion of the wealthy family who succeeded in American life but the man who learned as much as he could, combined book-smarts with street-smarts, and self-confidently took chances in the quickly developing commercial centers as well as on the rapidly expanding western frontier. This is why 19th-century American literature so often features the “self-made man,” or Horatio Alger’s “rags-to-riches” narratives. However, it is important to note that the heroes of these stories were often of Caucasian, Western European (but not Irish) stock. There were limitations to nurture; in other words, former nationality and family background still played a role. In the wake of the Civil War, for example, emancipated blacks sought new roles as legitimate citizens, but ideas about the ability of persons of color to function socially or intellectually like their white counterparts rendered any posi