Sunday, 18 November 2018

Meeting 6

Saturday, 17th November 2018

The dreaded quiz came up for the 6th meeting. It started at 2.00 pm and finished at around 5.00 pm. We finished the class after a little housekeeping from Dr Hamimah on what to present at the final meeting. She also asked us to determine the data analysis for each research question, because it had to be included in the write-up for Chapters 1, 2 and 3.

Friday, 16 November 2018

Validity and Reliability

Issues of research reliability and validity need to be addressed in the methodology chapter in a concise manner.
Reliability refers to the extent to which the same answers can be obtained using the same instruments more than once. In simple terms, if your research is associated with high levels of reliability, then other researchers need to be able to generate the same results, using the same research methods under similar conditions. It is noted that “reliability problems crop up in many forms. Reliability is a concern every time a single observer is the source of data, because we have no certain guard against the impact of that observer’s subjectivity” (Babbie, 2010, p. 158). According to Wilson (2010), reliability issues are most often closely associated with subjectivity, and once a researcher adopts a subjective approach towards the study, the level of reliability of the work will be compromised.
Validity of research can be explained as the extent to which the requirements of the scientific research method have been followed during the process of generating research findings. Oliver (2010) considers validity to be a compulsory requirement for all types of studies. There are different forms of research validity, and the main ones are specified by Cohen et al. (2007) as content validity, criterion-related validity, construct validity, internal validity, external validity, concurrent validity and face validity.
Measures to ensure the validity of a research study include, but are not limited to, the following points:
a) Appropriate time scale for the study has to be selected;
b) Appropriate methodology has to be chosen, taking into account the characteristics of the study;
c) The most suitable sample method for the study has to be selected;
d) The respondents must not be pressured in any ways to select specific choices among the answer sets.
It is important to understand that although threats to research reliability and validity can never be totally eliminated, researchers need to strive to minimize them as much as possible.
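One common way to make the “same answers more than once” notion of reliability concrete is test-retest reliability: administer the same instrument twice and correlate the two sets of scores. Below is a minimal Python sketch; all the scores are invented purely for illustration.

```python
# Test-retest reliability, sketched: administer the same instrument
# twice and estimate reliability as the correlation between the two
# sets of scores. All numbers below are invented.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores from two administrations of one questionnaire
first_run = [12, 15, 11, 18, 14, 16]
second_run = [13, 14, 11, 17, 15, 16]

r = pearson_r(first_run, second_run)
print(round(r, 2))  # a value close to 1 suggests high test-retest reliability
```

A correlation near 1 means respondents gave essentially the same answers both times; a low correlation would signal a reliability problem with the instrument.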




Thursday, 15 November 2018

Instrumentation

Instrument is the general term that researchers use for a measurement device (survey, test, questionnaire, etc.). To help distinguish between instrument and instrumentation, consider that the instrument is the device and instrumentation is the course of action (the process of developing, testing, and using the device).
Instruments fall into two broad categories, researcher-completed and subject-completed, distinguished by whether the researcher administers the instrument or the participants complete it themselves. Researchers choose which type of instrument, or instruments, to use based on the research question. Examples are listed below:
Researcher-completed Instruments     Subject-completed Instruments
Rating scales                        Questionnaires
Interview schedules/guides           Self-checklists
Tally sheets                         Attitude scales
Flowcharts                           Personality inventories
Performance checklists               Achievement/aptitude tests
Time-and-motion logs                 Projective devices
Observation forms                    Sociometric devices
Usability
Usability refers to the ease with which an instrument can be administered, interpreted by the participant, and scored/interpreted by the researcher. Example usability problems include:
  1. Students are asked to rate a lesson immediately after class, but there are only a few minutes before the next class begins (problem with administration).
  2. Students are asked to keep self-checklists of their after school activities, but the directions are complicated and the item descriptions confusing (problem with interpretation).
  3. Teachers are asked about their attitudes regarding school policy, but some questions are worded poorly which results in low completion rates (problem with scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we can identify five usability considerations:
  1. How long will it take to administer?
  2. Are the directions clear?
  3. How easy is it to score?
  4. Do equivalent forms exist?
  5. Have any problems been reported by others who used it?
It is best to use an existing instrument, one that has been developed and tested numerous times, such as can be found in the Mental Measurements Yearbook. We will turn to why next.



Wednesday, 14 November 2018

Sampling

When researching an aspect of the human mind or ​behavior, researchers simply cannot collect data from every single individual in most cases. Instead, they choose a smaller sample of individuals that represent the larger group. If the sample is truly representative of the population in question, researchers can then take their results and generalize them to the larger group.

Types of Sampling

In psychological research and other types of social research, experimenters typically rely on a few different sampling methods.

1. Probability Sampling

Probability sampling means that every individual in a population stands an equal chance of being selected. Because probability sampling involves random selection, it ensures that different subsets of the population have an equal chance of being represented in the sample. This makes probability samples more representative, and researchers are better able to generalize their results to the group as a whole.
There are a few different types of probability sampling:
  • Simple random sampling is, as the name suggests, the simplest type of probability sampling. Researchers take every individual in a population and randomly select their sample, often using some type of computer program or random number generator.
  • Stratified random sampling involves separating the population into subgroups and then taking a simple random sample from each of these subgroups. For example, a researcher might divide the population up into subgroups based on race, gender, or age and then take a simple random sample of each of these groups. Stratified random sampling often provides greater statistical accuracy than simple random sampling and helps ensure that certain groups are accurately represented in the sample.
  • Cluster sampling involves dividing a population into smaller clusters, often based upon geographic location or boundaries. A random sample of these clusters is then selected and all of the subjects within each cluster are measured. For example, imagine that you are trying to do a study on school principals in your state. Collecting data from every single school principal would be cost-prohibitive and time-consuming. Using a cluster sampling method, you randomly select five counties from your state and then collect data from every school principal in each of those five counties.
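The three probability sampling methods above can be sketched with Python's standard random module. The population below is invented for illustration.

```python
# The three probability sampling methods, sketched with the standard
# library. The population is invented for illustration.
import random

random.seed(0)  # for reproducibility

population = [{"id": i, "gender": "F" if i % 2 else "M"} for i in range(100)]

# 1. Simple random sampling: select individuals directly at random.
simple_sample = random.sample(population, 10)

# 2. Stratified random sampling: split into subgroups (strata), then
#    take a simple random sample from each stratum.
females = [p for p in population if p["gender"] == "F"]
males = [p for p in population if p["gender"] == "M"]
stratified_sample = random.sample(females, 5) + random.sample(males, 5)

# 3. Cluster sampling: divide into clusters (e.g. counties), randomly
#    choose some clusters, then measure every subject in them.
clusters = [population[i:i + 20] for i in range(0, 100, 20)]  # 5 "counties"
chosen = random.sample(clusters, 2)
cluster_sample = [p for county in chosen for p in county]

print(len(simple_sample), len(stratified_sample), len(cluster_sample))  # 10 10 40
```

Note how cluster sampling measures everyone inside the chosen clusters, whereas simple and stratified sampling select individuals directly.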

    2. Nonprobability Sampling

    Non-probability sampling, on the other hand, involves selecting participants using methods that do not give every individual in a population an equal chance of being chosen. One problem with this type of sample is that volunteers might be different on certain variables than non-volunteers, which might make it difficult to generalize the results to the entire population.
    There are also a couple of different types of nonprobability sampling:
    • Convenience sampling involves using participants in a study because they are convenient and available. If you have ever volunteered for a psychology study conducted through your university's psychology department, then you have participated in a study that relied on a convenience sample. Studies that rely on asking for volunteers or by using clinical samples that are available to the researcher are also examples of convenience samples.
    • Purposive sampling involves seeking out individuals that meet certain criteria. For example, marketers might be interested in learning how their products are perceived by women between the ages of 18 and 35. They might hire a market research firm to conduct telephone interviews that intentionally seek out and interview women that meet their age criteria.
    • Quota sampling involves intentionally sampling a specific proportion of a subgroup within a population. For example, political pollsters might be interested in researching the opinions of a population on a certain political issue. If they use simple random sampling, they might miss certain subsets of the population by chance. Instead, they establish criteria that a certain percentage of the sample must include these subgroups. While the resulting sample may not actually be representative of the actual proportions that exist in the population, having a quota ensures that these smaller subgroups are represented.
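The quota idea can be sketched in a few lines of Python. Here quota_sample is a hypothetical helper, not from any library, and the arrival data are invented; participants are accepted in arrival order, as in a convenience setting, until each subgroup's quota is filled.

```python
# Quota sampling, sketched: accept available participants until each
# subgroup quota is met. `quota_sample` and the data are hypothetical.

def quota_sample(stream, quotas):
    """Take participants from `stream` in arrival order until every
    subgroup quota in `quotas` is filled."""
    counts = {group: 0 for group in quotas}
    sample = []
    for person in stream:
        g = person["group"]
        if g in quotas and counts[g] < quotas[g]:
            sample.append(person)
            counts[g] += 1
        if counts == quotas:
            break
    return sample

# Invented arrival stream, heavily skewed toward group "A"
arrivals = [{"group": "A"}] * 30 + [{"group": "B"}] * 10
picked = quota_sample(arrivals, {"A": 5, "B": 5})
print(sum(p["group"] == "A" for p in picked),
      sum(p["group"] == "B" for p in picked))  # both quotas met: 5 5
```

As the text notes, the resulting 50/50 split need not match the population's actual proportions; the quota only guarantees that each subgroup is represented.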

      Sampling Errors

      Because sampling naturally cannot include every single individual in a population, errors can occur.
      Differences between what is present in a population and what is present in a sample are known as sampling errors.
      While it is impossible to know exactly how great the difference between the population and sample may be, researchers are able to statistically estimate the size of the sampling error. In political polls, for example, you will often hear the margin of error expressed at certain confidence levels.

      In general, the larger the sample size, the smaller the level of error. This is simply because as the sample size approaches the size of the total population, the more likely the sample is to accurately capture all of the characteristics of the population. The only way to completely eliminate sampling error is to collect data from the entire population, which is often cost-prohibitive and time-consuming. Sampling errors can be minimized, however, by using randomized probability sampling and a large sample size.
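The claim that larger samples produce smaller sampling error can be checked with a quick simulation. The "population" below is invented purely for illustration.

```python
# Quick simulation of the relationship between sample size and
# sampling error. The "population" is invented for illustration.
import random

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = sum(population) / len(population)

avg_error = {}
for n in (10, 100, 1_000, 10_000):
    errors = []
    for _ in range(200):  # repeat to average out luck
        sample = random.sample(population, n)
        errors.append(abs(sum(sample) / n - true_mean))
    avg_error[n] = sum(errors) / len(errors)
    print(n, round(avg_error[n], 2))  # average error shrinks as n grows
```

Each tenfold increase in sample size cuts the average sampling error by roughly a factor of three (it scales with one over the square root of n), which is why pollsters report smaller margins of error for larger samples.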


      Tuesday, 13 November 2018

      Ethics and Research

      Research ethics provides guidelines for the responsible conduct of research. In addition, it educates and monitors scientists conducting research to ensure a high ethical standard. The following is a general summary of some ethical principles:
      Honesty:
      Honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data.
      Objectivity:
      Strive to avoid bias in experimental design, data analysis, data interpretation, peer review, personnel decisions, grant writing, expert testimony, and other aspects of research.
      Integrity:
      Keep your promises and agreements; act with sincerity; strive for consistency of thought and action.
      Carefulness:
      Avoid careless errors and negligence; carefully and critically examine your own work and the work of your peers. Keep good records of research activities.
      Openness:
      Share data, results, ideas, tools, resources. Be open to criticism and new ideas.
      Respect for Intellectual Property:
      Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give credit where credit is due. Never plagiarize.
      Confidentiality:
      Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records.
      Responsible Publication:
      Publish in order to advance research and scholarship, not to advance just your own career. Avoid wasteful and duplicative publication.
      Responsible Mentoring:
      Help to educate, mentor, and advise students. Promote their welfare and allow them to make their own decisions.
      Respect for Colleagues:
      Respect your colleagues and treat them fairly.
      Social Responsibility:
      Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.
      Non-Discrimination:
      Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors that are not related to their scientific competence and integrity.
      Competence:
      Maintain and improve your own professional competence and expertise through lifelong education and learning; take steps to promote competence in science as a whole.
      Legality:
      Know and obey relevant laws and institutional and governmental policies.
      Animal Care:
      Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.
      Human Subjects Protection:
      When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy.


      Monday, 12 November 2018

      Ethnographic Research

      What is ethnographic research?

      Ethnographic research is a qualitative method where researchers observe and/or interact with a study’s participants in their real-life environment. Ethnography was popularised by anthropology, but is used across a wide range of social sciences.
      Within the field of usability, user-centred design and service design, ethnography is used to support a designer’s deeper understanding of the design problem – including the relevant domain, audience(s), processes, goals and context(s) of use.
      The aim of an ethnographic study within a usability project is to get ‘under the skin’ of a design problem (and all its associated issues). It is hoped that by achieving this, a designer will be able to truly understand the problem and therefore design a far better solution.

      Advantages of ethnography

      One of the main advantages associated with ethnographic research is that ethnography can help identify and analyse unexpected issues. When conducting other types of studies, which are not based on in-situ observation or interaction, it can be very easy to miss unexpected issues. This can happen either because questions are not asked, or respondents neglect to mention something. An ethnographic researcher’s in-situ presence helps mitigate this risk because the issues will (hopefully) become directly apparent to the researcher.
      Ethnography’s other main benefit is generally considered to be its ability to deliver a detailed and faithful representation of users’ behaviours and attitudes. Because of its subjective nature, an ethnographic study (with a skilled researcher) can be very useful in uncovering and analysing relevant user attitudes and emotions.

      Disadvantages of ethnography

      One of the main criticisms levelled at ethnographic studies is the amount of time they take to conduct. As discussed above, ethnographic studies do not always require a long period of time, but this consideration is nonetheless valid. Because of its richer output, an ethnographic study will tend to take longer to generate and analyse its data than many other methods.
      During previous ethnographic studies, we have found that it is possible that subjects may not act naturally during a short study. Longer studies normally counteract this because the subjects grow to trust the researcher and/or get tired of any pretence.
      For example: During the first week of an ethnographic study into an insurance claim processing system, all the subjects were observed to be following the strictest interpretation of the correct procedures. As time progressed, however, it became increasingly apparent that almost all employees had ‘work-arounds’ and ‘short cuts’ which were liberally used in order to speed things up. These behaviours were very instructive in helping to re-design the process flow. Had the researcher not stayed in-situ long enough to observe these, they may have gone unrecorded.

      Risks associated with ethnography

      As stated above, ethnographic studies consist of the researcher observing and/or interacting with subjects within the environment which the (future) design is intended to support. The two main potential weaknesses with ethnographic studies are:
      Researcher
      Ethnographic researchers need to be very highly-skilled to avoid all the potential pitfalls of an ethnographic study. Some of these include the detail & completeness of observations, as well as potential bias (and mistakes) in data collection or analysis.
      Subjects
      It is essential that any study’s subjects are as true a representation of the larger user audience as possible (assuming that the study has been designed this way). It is also vital that the subjects are open and honest with the researcher. Of course, both of these issues are related to the quality of the researcher themselves and their role in the study’s design.
      As we can see from the above, most of the risks associated with ethnographic studies relate to the researcher, either directly or indirectly. This, of course, means that the choice of ethnographic researcher is critical to a study’s success. We recommend choosing a researcher with a proven background of past involvement in successful projects across varying domains.



      Sunday, 11 November 2018

      Phenomenology Research

      Phenomenology has its roots in a 20th century philosophical movement based on the work of the philosopher Edmund Husserl. As a research tool, phenomenology is based on the academic disciplines of philosophy and psychology and has become a widely accepted method for describing human experiences.  Phenomenology is a qualitative research method that is used to describe how human beings experience a certain phenomenon.  A phenomenological study attempts to set aside biases and preconceived assumptions about human experiences, feelings, and responses to a particular situation.  It allows the researcher to delve into the perceptions, perspectives, understandings, and feelings of those people who have actually experienced or lived the phenomenon or situation of interest.  Therefore, phenomenology can be defined as the direct investigation and description of phenomena as consciously experienced by people living those experiences.  Phenomenological research is typically conducted through the use of in-depth interviews of small samples of participants.  By studying the perspectives of multiple participants, a researcher can begin to make generalizations regarding what it is like to experience a certain phenomenon from the perspective of those that have lived the experience.
      Following is a list of the main characteristics of phenomenology research:
      • It seeks to understand how people experience a particular situation or phenomenon.
      • It is conducted primarily through in-depth conversations and interviews; however, some studies may collect data from diaries, drawings, or observation.
      • Small sample sizes, often 10 or fewer participants, are common in phenomenological studies.
      • Interview questions are open-ended to allow the participants to fully describe the experience from their own view point.
      • Phenomenology is centered on the participants’ experiences with no regard to social or cultural norms, traditions, or preconceived ideas about the experience.
      • It focuses on these four aspects of a lived experience:  lived space, lived body, lived time, and lived human relations.
      • Data collected is qualitative and analysis includes an attempt to identify themes or make generalizations regarding how a particular phenomenon is actually perceived or experienced.



      Saturday, 10 November 2018

      Grounded Theory

      All research is "grounded" in data, but few studies produce a "grounded theory." Grounded Theory is an inductive methodology.  Although many call Grounded Theory a qualitative method, it is not.  It is a general method. It is the systematic generation of theory from systematic research.  It is a set of rigorous research procedures leading to the emergence of conceptual categories.  These concepts/categories are related to each other as a theoretical explanation of the action(s) that continually resolves the main concern of the participants in a substantive area.  Grounded Theory can be used with either qualitative or quantitative data.



      Friday, 9 November 2018

      Case Study Research Design

      Basically, a case study is an in-depth study of a particular situation rather than a sweeping statistical survey. It is a method used to narrow down a very broad field of research into one easily researchable topic.
      Whilst it will not answer a question completely, it will give some indications and allow further elaboration and hypothesis creation on a subject.
      The case study research design is also useful for testing whether scientific theories and models actually work in the real world. You may come out with a great computer model for describing how the ecosystem of a rock pool works but it is only by trying it out on a real life pool that you can see if it is a realistic simulation.
      For psychologists, anthropologists and social scientists, case studies have been regarded as a valid method of research for many years. Scientists are sometimes guilty of becoming bogged down in the general picture, and it is sometimes important to understand specific cases and ensure a more holistic approach to research.



      Thursday, 8 November 2018

      Experimental Research Design

      Experimental research designs are the primary approach used to investigate causal (cause/effect) relationships and to study the relationship between one variable and another. This is a traditional type of research that is quantitative in nature. In short, researchers use experimental research to compare two or more groups on one or more measures. In these designs, one variable is manipulated to see if it has an effect on the other variable. Experimental designs are used in this way to answer hypotheses. A hypothesis is a testable statement that is formulated by the researcher to address a specific question. The researcher designs an experimental study which will then support or disprove the hypothesis.
      To further the discussion of experimental research in future modules, it is important to understand the basic terminology related to experimental research. Following is a list of key terminology:
      • Independent Variable – This is the variable that will be manipulated, the “cause” or treatment variable. This variable may be an activity or characteristic that the researcher believes will make a difference.
      • Dependent Variable – This variable is the “effect” or outcome of manipulating the independent variable. The only constraint is that the outcome must be measurable.
      • Experimental Group – The group that receives the treatment being investigated.
      • Control Group – The group that remains the same in order to have something to compare the experimental group against.
      Experimental research is based on a methodology that meets three criteria that are important if the results are to be meaningful. These criteria are as follows:
      • Random Assignment – Test subjects must be randomly assigned to the treatment groups to control for creation of groups that may systematically differ in another way that impacts the outcome of the treatment.
      • Experimental Control – All aspects of the treatments are identical except for the independent variable. If all other factors are controlled and kept constant, then if measurable differences are found in the outcomes, the researcher can be assured that the difference is due to the independent variable (treatment).
      • Appropriate Measures – The measures or outcomes must be appropriate for testing the hypothesis. The outcome measured must represent the idea being tested in the hypothesis in order for the results to be valid.
      Considering the definitions and criteria from above, it is now time to explore an example of experimental research using those concepts.   Let’s say that a researcher wanted to investigate the effects of using flipped classroom teaching techniques in an American history course.  The hypothesis being tested is that the flipped classroom teaching style will result in higher test scores among the students. The researcher will begin by randomly assigning students into two different sections of the course. The first section will be taught using the traditional lecture format. The second section will be taught using flipped classroom teaching techniques. The learning objectives and content for both sections will be identical. Both sections will be given identical exams throughout the semester, and the scores between the two sections will be compared to assess student learning. The flipped classroom teaching style is the independent variable. The dependent variable is the test scores. The experimental group is the section of the course where the flipped classroom technique is being used, and the control group is the section that continues to utilize the traditional lecture format. This is a classic example of the use of experimental research design. The following modules will delve deeper into various aspects of experimental research.
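The flipped-classroom example can be sketched as a few lines of Python. All the scores below are fabricated, and a real analysis would follow the raw difference in means with a proper significance test (e.g. a t-test).

```python
# Sketch of the flipped-classroom experiment: random assignment into
# two sections, then comparison of mean exam scores. All scores are
# fabricated; a real study would also run a significance test.
import random

random.seed(42)

students = [f"student_{i}" for i in range(40)]
random.shuffle(students)          # random assignment
control = students[:20]           # traditional lecture (control group)
treated = students[20:]           # flipped classroom (experimental group)

# Dependent variable: scores on the identical exams (invented here)
control_scores = [random.gauss(70, 8) for _ in control]
treated_scores = [random.gauss(75, 8) for _ in treated]

mean_control = sum(control_scores) / len(control_scores)
mean_treated = sum(treated_scores) / len(treated_scores)
print(round(mean_treated - mean_control, 1))  # observed effect of the treatment
```

The shuffle-then-split step is the random assignment criterion in miniature: every student has an equal chance of landing in either section, so the two groups should not differ systematically before the treatment.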



      Wednesday, 7 November 2018

      Survey Research Design

      If you've ever been sitting at a train station, a particular lecturer's classroom, or in a public area and a person with a stack of papers in his hands comes up to you out of the blue and asks if you have a few minutes to talk, then you have likely been asked to take part in a survey.
      There are a lot of ways to conduct research and collect information, but one way that makes it really easy is by doing a survey. A survey is defined as a brief interview or discussion with individuals about a specific topic. The term survey is, unfortunately, a little vague, so we need to define it better. The term survey is often used to mean 'collect information.' For instance, you may imagine a researcher or a television scientist saying, 'We need to do a survey!' (I know, riveting television).
      So, besides our definition above, survey also means to collect information. We have our first definition of a brief interview, and we have a second definition of collecting data. There is a third definition for survey. This third definition of survey is a specific type of survey research. Here are the three specific techniques of survey research:
      • Questionnaires - a series of written questions a participant answers. This method gathers responses to questions that are essay or agree/neutral/disagree style.
      • Interviews - questions posed to an individual to obtain information about him or her. This type of survey is like a job interview, with one person asking another a load of questions.
      • Surveys - brief interviews and discussions with individuals about a specific topic. Yes, survey is also a specific type of survey, to make things even more confusing. A survey is a quick interview, with the surveyor asking only a few questions.
      Below are the notes that I made from the slideshow hand-outs given by my classmates.


      Tuesday, 6 November 2018

      Correlational Research Design

      Correlational research is a type of non-experimental research method in which a researcher measures two variables and assesses the statistical relationship between them, with no influence from any extraneous variable.
      Our mind can do some brilliant things.  For example, it can memorize the jingle of a pizza truck. The louder the jingle, the closer the pizza truck is to us. Who taught us that? Nobody! We relied on our understanding and came to a conclusion. We just don’t stop there, do we? If there are multiple pizza trucks in the area and each one has a different jingle, we would be able to memorize them all and relate each jingle to its pizza truck.
      This is precisely what correlational research is, establishing a relationship between two variables, “jingle” and “distance of the truck” in this particular example. Correlational research is looking for variables that seem to interact with each other so that when you see one variable changing, you have a fair idea how the other variable will change.
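The jingle example can be made concrete with a few invented numbers: a Pearson correlation close to -1 reflects the expected negative relationship (louder jingle, smaller distance).

```python
# The pizza-truck example with invented numbers: loudness and distance
# should be strongly negatively correlated.
loudness_db = [40, 55, 62, 70, 78, 85]     # perceived jingle loudness
distance_m = [500, 300, 200, 120, 60, 20]  # distance to the truck

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(loudness_db, distance_m)
print(round(r, 2))  # → -0.99 (the variables move in opposite directions)
```

Knowing the correlation lets you do exactly what the text describes: when you see one variable change, you have a fair idea of how the other will change, even though correlation alone says nothing about cause and effect.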
      Below were the notes I made.