Best Answer

Contrastive analysis is the systematic study of a pair of languages with a view to identifying their structural differences and similarities. Historically it has been used to establish language genealogies.

Error analysis assumes that errors indicate learning difficulties and that the frequency of a particular error is evidence of the difficulty learners have in learning the particular form.

The main difference between the two is that contrastive analysis tries to predict the errors a learner may make in the L2, whereas error analysis identifies errors from actual L2 production.

Abu Ula Muhd. Hasinul Islam can be reached at hasinul_islam AT Yahoo DOT com

Wiki User

13y ago
More answers

AnswerBot

6mo ago

Contrastive analysis compares languages to predict potential areas of difficulty for language learners based on the differences between the learner's native language and the target language. Error analysis, on the other hand, focuses on analyzing errors made by language learners to understand the underlying causes, such as interference from the native language, overgeneralization of language rules, or interlanguage fossilization. Both approaches aim to improve language learning and teaching by identifying linguistic challenges and providing insights for effective instruction.

Q: What are the differences between contrastive analysis and error analysis?
Related questions

What are the branches of contrastive linguistics?

The main branches of contrastive linguistics are contrastive analysis (comparing linguistic features of two languages), error analysis (identifying errors made by language learners based on differences between their native language and the target language), and contrastive rhetoric (examining how cultural and rhetorical differences influence language use).


What has the author James Carl written?

Carl James has written: 'Contrastive analysis' -- subject(s): Contrastive linguistics; 'Errors in language learning and use' -- subject(s): Study and teaching, Language and languages, Error analysis


Differences between fraud and error?

An error is an unintentional mistake or misstatement, whereas fraud is an intentional act of deception.


What are the differences between Russian Orthodox and Baptist churches?

The difference is between truth (Orthodox) and error (Baptists).


What is error analysis?

Error analysis is the determination of the percentage of error, or uncertainty, in data or an experiment.


What type of procedural error often involves an error in logic?

Analysis


What are some of the sources of error in your analysis?

Some sources of error in analysis can include data collection inaccuracies, incomplete data, biased sampling methods, human error in data entry or analysis, and assumptions made during the analytical process.


What is quantitative analysis?

Experiments often contain errors. Quantitative error analysis means determining the uncertainty, precision, and error in quantitative measurements.
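As a minimal sketch (with made-up measurement values and an assumed accepted value of 9.81), a basic quantitative error analysis of repeated measurements might compute the mean, the standard deviation as a measure of precision, and the percent error against the accepted value:

```python
import statistics

# Hypothetical repeated measurements of a quantity whose accepted value is 9.81
measurements = [9.79, 9.85, 9.78, 9.83, 9.81]
accepted = 9.81

mean = statistics.mean(measurements)                   # best estimate of the quantity
stdev = statistics.stdev(measurements)                 # precision: spread of the measurements
percent_error = abs(mean - accepted) / accepted * 100  # accuracy relative to the accepted value

print(mean, stdev, percent_error)
```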




Which statistic estimates the error in a regression solution?

The mean sum of squares due to error (mean squared error): the sum of the squared differences between the observed values and the predicted values, divided by the number of observations (statistical software typically divides by the residual degrees of freedom instead).
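A minimal sketch of that statistic, using made-up observed and predicted values (dividing by the number of observations, as described above):

```python
# Hypothetical observed values and the values predicted by a fitted regression
observed = [2.0, 4.1, 6.2, 7.9, 10.1]
predicted = [2.1, 4.0, 6.0, 8.0, 10.0]

# Sum of squared differences between observed and predicted values
sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))

# Mean squared error: SSE divided by the number of observations
mse = sse / len(observed)

print(sse, mse)
```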


What is quantative analysis?

It is a typographical error. A quantitative analysis is one in which the observations have numeric values.


What is weighted residual in particle size analysis?

Weighted residuals in particle size analysis refer to the differences between the actual measurements of particle sizes and the predicted values from a mathematical model, adjusted by applying a weight to each residual based on its importance or significance. Weighted residuals are used to evaluate the accuracy and fit of a particle size distribution model to experimental data, with the goal of minimizing the overall error between predicted and measured values.
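The idea can be sketched with hypothetical size measurements, model predictions, and weights (all values below are made up for illustration):

```python
# Hypothetical measured and model-predicted particle sizes (e.g. in micrometres),
# with weights expressing the relative importance of each size class
measured = [1.2, 3.4, 5.1, 7.8]
predicted = [1.0, 3.5, 5.0, 8.0]
weights = [0.4, 0.3, 0.2, 0.1]

# Weighted residuals: each (measured - predicted) difference scaled by its weight
weighted_residuals = [w * (m - p) for w, m, p in zip(weights, measured, predicted)]

# A weighted sum of squared residuals is a common overall measure of fit to minimize
wss = sum(w * (m - p) ** 2 for w, m, p in zip(weights, measured, predicted))

print(weighted_residuals, wss)
```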