Science strives to produce data and findings that are, to the best of our knowledge, accurate, to help us learn more about the world around us. In psychology, it strives to help us understand humans and their behaviour. Studies are carried out in the hope of finding evidence to support a hypothesis; some find an effect, others do not. Researchers who do find supporting evidence have a responsibility to publish their findings accurately. To increase their popularity and ratings, however, the media often manipulate these studies, giving an exaggerated, inaccurate view of the findings.

One example of this is a newspaper article which suggests the rioters, who criminally damaged property and injured many people, may not have had the power to stop their behaviour due to a lower level of a chemical in their brains which keeps impulsive behaviour under control, somehow trying to justify their appalling behaviour last summer.

Reports like this are damaging to science, as they undermine both the research itself and the funding that is essential to carrying it out. This article attempts to set the record straight, but many people who read the original piece may never see this more accurate version of events, which is very damaging.

The media also present confusing data to the public when promoting products. Another example is this advert by L'Oreal for a shampoo.

The advert states the shampoo combats the "5 UK top hair problems", yet offers no evidence that any research has been done on what the UK's top hair problems actually are. It displays data stating that L'Oreal surveyed 2,983 women, with no mention of their ages or occupations; for all we know, they could all have worked for L'Oreal and therefore have had a vested interest in the study. It then states that "73%, 356 women agree". But 356 is nowhere near 73% of the original 2,983 women surveyed (73% of 2,983 is roughly 2,178, while 356 is only about 12%), and the advert does not explain why. Is this a different survey on a different aspect of the study? Have some participants withdrawn? And as the figures "356" and "73%" appear on screen, the advert says "My hair feels replenished, stronger and with a healthy shine". Does this mean 73% agree with that statement, or with the original top five hair problems?
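The mismatch between the advert's two figures is easy to verify with a quick calculation, using only the numbers quoted on screen:

```python
# Figures quoted in the advert (as shown on screen)
surveyed = 2983        # women L'Oreal says it surveyed
agreeing = 356         # women said to agree
claimed_percent = 73   # percentage claimed alongside that figure

# What 73% of the surveyed sample would actually be
expected_agreeing = round(surveyed * claimed_percent / 100)
print(expected_agreeing)  # roughly 2178 women, not 356

# What percentage 356 of 2983 really represents
actual_percent = round(agreeing / surveyed * 100)
print(actual_percent)     # roughly 12%, not 73%
```

So either the 356 women come from a different (undisclosed) survey, or the percentage refers to something other than the sample shown, and the advert never tells us which.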

These are just two of many examples of adverts manipulating data and statistics, confusing the public in order to boost sales. This is misleading information, and I believe there should be tighter restrictions surrounding it.

Researchers create a hypothesis they hope to find supporting evidence for, but what happens when the research does not produce the results they are looking for? This matters especially in sales and media, where researchers are often under pressure to produce statistics that will attract people to their product over competitors' products. What if they don't get the results they want?

Many studies have manipulated their data until an effect appeared, by reworking elements of the research: for example, by adding participants, or by selecting the type of participant the product is most likely to have an effect on. This is bad science, and it misleads the customers who buy products advertised on the back of manipulated data. I have never seen a product in the media advertising that 10%, or even 40%, of people agree with the claimed effects, but I have often tried products which do not do what it says on the bottle. All of this information should be printed on the label, and companies should truly reflect the effects of the product in media advertisements, as customers have a right to know what they are buying.

When the media use the words 'science' or 'scientifically proven', the public appear to automatically trust the study and the statistics that go with it, without realising the statistics may have been manipulated to find an effect. This could lead to the public having lowered trust and belief in real science.


Some people debate whether painlessly placing these electrodes onto a baby's head is ethical. In my opinion, it is.

The method used in this clip breaks no ethical boundaries for this baby. The baby is happy, looking around the room and enjoying watching the bubbles. He is sat on the lap of somebody I presume to be his mother (if not, somebody he seems comfortable and happy with) and is under no obvious distress. His family will have had to consent to the procedure on his behalf and will have been fully informed of all its details, as with any medical procedure in this country.

Autistic children have individual needs and requirements which must be met, and surely the earlier this happens, the better. Early diagnosis of autism improves the child's quality of life: support for their social and functional skills is most effective when it begins early.

I appreciate this method needs a great deal more work and research, as it is far from reliable, but it is the beginning of an exciting new method and a new area of research into diagnosing babies with autism. If it reaches the point where it can reliably diagnose children with autism early, it would help so many families and children, and answer questions that for some families have gone unanswered for years.

Some families may think giving their child a label of autism would have negative consequences, but I would argue it has far more benefits. The child may come across the odd ignorant person who does not understand their condition and puts them down about it, but the opportunities and possibilities the diagnosis opens up will far outweigh this. The child will receive appropriate help and support from everybody around them, such as their family, healthcare practitioners, teachers and childcare practitioners. Teachers will know and understand the child's condition from their first day at school, so they can adapt activities and strategies to include the child, with their individual needs, alongside all the other children. Some autistic children are wrongly branded as 'naughty' or as children who 'never listen' by teachers, and this is a much more negative label to carry than autistic. Autistic children can thrive in a classroom, an education and a way of life suited to their needs, but they often struggle in everyday life if their environment is not adapted to them.

A correlation is a relationship between two variables, which psychologists may use to make predictions about the outcomes of future research. A positive correlation shows that as the value of one variable increases, the value of the other variable also increases. A negative correlation shows that as the value of one variable increases, the value of the other variable decreases.
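Correlations are usually quantified with Pearson's correlation coefficient r, which runs from -1 (perfect negative) through 0 (no relationship) to +1 (perfect positive). The sketch below computes r from first principles; the exercise figures are invented purely for illustration, not taken from any real study:

```python
# Pearson's correlation coefficient r, computed from scratch.

def pearson_r(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y (unnormalised)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Sum of squared deviations for each variable
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

hours_exercised = [1, 2, 3, 4, 5]           # invented data
calories_burned = [100, 210, 290, 400, 520]  # rises with exercise
weight_change = [0.5, 0.1, -0.2, -0.6, -1.0]  # falls as exercise rises

# Both rise together: r comes out close to +1 (positive correlation)
print(pearson_r(hours_exercised, calories_burned))
# One rises while the other falls: r comes out close to -1 (negative correlation)
print(pearson_r(hours_exercised, weight_change))
```

A strong r only measures how tightly the two variables move together; as the liquorice example below shows, it says nothing by itself about whether one causes the other.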

Whilst doing a psychology degree we have it instilled into us that correlation does not imply causality. Correlation shows a relationship between two different variables, but this does not mean one caused the other. For example: the more you dislike liquorice, the more you understand HTML (based on a study using 984 participants, 341 of whom strongly dislike liquorice). Not liking liquorice does not cause these participants to understand HTML (although some may argue it takes a certain type of person not to like liquorice, and this type of person may be naturally better at understanding HTML); it is simply a chance correlation between the two variables. Correlations like this should not be used as a basis for further research, as there is no scientific evidence to back them up.

But can correlation show causality? One example of a correlation that does reflect causality is: the more exercise I do, the more weight I will lose. Here one variable (exercise) has a direct effect on the other (weight loss), as exercising burns calories and can raise your metabolism, so you carry on losing weight even after finishing exercising. Correlations like this can be used by scientists and psychologists in the real world, because they are backed by scientific evidence that the relationship is causal. With my earlier example, doctors could advise people who wish to lose weight to engage in regular exercise, as there is a direct relationship between doing more exercise and losing more weight.

In conclusion, yes, correlation can often reflect causality, and I could find many more examples of correlations which do. Yet we should never assume that a correlation implies causality, because it does not in every case.

Case studies are very detailed investigations. Some of the most interesting psychological research has come through case studies, such as Dr Money's research on the biology of gender and the case of Genie, a victim of some of the most horrific abuse. These situations would be impossible for psychological researchers to create, as doing so would be highly unethical, but case studies allow us to investigate them without breaking ethical rules: psychologists do not create the situation but investigate what has happened, working alongside victims and participants to study concepts which could otherwise never be investigated.

Each case study surrounds one individual, group or community, and the results and observations from these people are the only thing the researcher is interested in. Although this can be seen as a disadvantage, as it is hard to generalise the findings to the general population, it is also a benefit, as great detail, observation and research can be focused on that individual. As well as observations, researchers usually interview the person or small group of participants; this data is reliable in the sense that participants are talking directly to the researcher, and together with observations it can provide great insight into the person. However, interviews can introduce researcher bias, especially unstructured ones, and because case studies are hard to repeat, given their great detail and the individuality of the participants, this may influence the results. Case studies often trigger further research into the field, extending their findings to support or further investigate the hypotheses the research suggests.

Single case design studies also measure behaviour over time and take into account individual results, but usually over a much shorter period. They are much less detailed than case studies, and they measure how different independent variables affect a specific dependent variable over time. Such studies are usually carried out in phases: a baseline period, where the participant's usual behaviour is measured; an intervention period, where an independent variable is introduced to see how it affects the dependent variable; and finally a reversal, where the independent variable is withdrawn and the effects on the dependent variable are recorded again. This is usually repeated to make the research more reliable through consistency, but we still cannot be exactly sure what caused the results. Carryover effects may occur, where behaviours from one stage overlap into the next, giving a false picture of the effect the independent variable is having. Also, in some cases, once the independent variable has changed the dependent variable, the change may not be undone simply by removing the independent variable again; it may be more complex than this. I would argue that reversal in these studies, especially when it affects behaviour, is unethical. Our behaviour directly affects our everyday lives, and potentially making a negative behaviour worse just to confirm that the independent variable had an effect is, in my eyes, unethical.
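The baseline, intervention and reversal phases described above (often called an A-B-A design) can be sketched with some invented session counts; the behaviour frequencies here are made up purely for illustration:

```python
# Hypothetical A-B-A (reversal) design: invented counts of a target
# behaviour per observation session, purely for illustration.
phases = {
    "A1_baseline":    [8, 9, 8, 10],  # usual behaviour, no intervention
    "B_intervention": [5, 4, 3, 3],   # independent variable introduced
    "A2_reversal":    [7, 8, 9, 8],   # independent variable withdrawn
}

def mean(counts):
    return sum(counts) / len(counts)

for phase, counts in phases.items():
    print(phase, mean(counts))
```

If the behaviour drops during B and then climbs back towards the baseline level in A2, that strengthens the case that the intervention, rather than a carryover effect or coincidence, drove the change; if it stays low in A2, the change may not be reversible at all, which is exactly the ethical worry raised above.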

Unstructured interviews are very informal and produce qualitative (descriptive) data. Although the researcher has an aim for the research, there are no set questions as such. The researcher can ask the participant questions based on their previous answers, giving the participant the opportunity to raise related topics and take the interview in their chosen direction. This technique can bring in new ideas: the participant may discuss something relevant to the research which the researcher had not previously considered. As the interview is informal, more like a chat than an interview, participants may feel much more at ease. This may help them build a relationship of trust with the researcher, which may encourage complete honesty in their answers and give the researcher a more accurate view of the topic. But, as no questions are set in advance, the researcher may ask leading questions or be biased in the questions they choose. Also, if there is more than one researcher, there may be individual differences in how they conduct the interview, which could again lead to biases in the findings.

Structured interviews are very formal. The participants are asked set questions which have previously been carefully thought out in order to get the best findings for the research. As the questions are already set, all participants receive the same questions in the same order, which makes the research fair and allows quantitative data to be collected. It would be very unlikely for there to be any misleading or biased questions in the interview, as the questions will have been checked beforehand, and as the researcher reads the questions in a formal manner, they are unable to impose any personal beliefs on the conversation. The researcher acts as a professional, however, which may make some participants feel uncomfortable and cause them to be less honest, especially if the questions are personal; they may not feel comfortable revealing certain information to a professional's face.

It would depend on the research whether a researcher would choose to conduct an unstructured or structured interview but as you can see they both have their pros and cons.

Random Samples

Psychologists often work hard to have a completely random group of participants in their research, where every member of the population has an equal chance of being selected for the study. This is often so that they can generalise their findings to a larger group of people than those tested.
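In principle this is straightforward to do: given a full list of the population, simple random sampling picks each member with equal probability. The sketch below uses invented participant names purely for illustration:

```python
import random

# Hypothetical population of 100 people; the names are invented
# purely for illustration.
population = [f"participant_{i}" for i in range(1, 101)]

# Draw 10 participants without replacement: each of the 100 people
# has the same chance (10/100) of ending up in the sample.
sample = random.sample(population, k=10)
print(sample)
```

The hard part, as the rest of this section argues, is not the selection mechanism but the fact that the people who agree to take part are rarely a random slice of the population in the first place.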

But is a sample ever completely random?

It is very hard to get a truly random sample of willing participants. The method the psychologist uses to find participants must be carefully thought out for the sample to be random.

It takes a certain type of person to want to take part in research. People may take part for many reasons: because of their personality, because they want to help, or because they think the results of a study will help them or their family. It then becomes a 'type' of person taking part in the research.

For ethical reasons, all participants must have the right to withdraw from a study at any time. But it takes a confident person to withdraw after agreeing to take part, someone willing to stand up to an authority figure and say they no longer want to be involved. This too erodes the randomness of the sample, as participants without this confidence are more likely to go ahead with the study.

Some studies offer rewards for taking part. Again, this limits how random the sample willing to take part is. For example, if a study offers a cash reward, the participants most likely to take part are students or families and people on lower incomes; not many well-off people would have motivation or reason to participate. We cannot then generalise the results across the whole population.

With the ethical requirements surrounding research today, the right to withdraw and consent to take part (and the motivations people have for taking part), I think it would be very difficult to obtain a sample which is completely random.