Specialists or All-Rounders: How Best to Select University Students?
Abstract
I study whether universities should select their students on the basis of only specialized subject-specific tests or a broader set of skills and knowledge. Theoretically, I show that even if broader skills do not improve the outcomes of graduates in the labor market, a university optimally chooses to use them as a criterion for selection alongside the mastery of more subject-specific tools. Empirically, I exploit the variation between subject-specific and nonspecific entrance exam sets on a large administrative dataset of Portuguese students. My central finding is that universities with less specialized admission policies admit a pool of students who obtain a higher final GPA.
I. Introduction
Which university entrance criteria matter for the academic performance of the students admitted? Most higher education institutions (HEIs) in developed countries rely on standardized tests to select their students (Zwick 2007; Sternberg 2010; Edwards, Coates, and Friedman 2012). Additionally, they can draw on a second source of information. In some cases, the institution considers other elements in the admission process, such as student essays, interviews, and information on students’ extracurricular activities and experience. Other HEIs design their own entrance exams as a complement to standardized tests. The purpose is to obtain a more comprehensive assessment of higher education (HE) applicants (for reviews, see, e.g., Hoxby 2009; Zwick 2017). However, how broad the admission criteria should be is a question that has been given little attention in the literature.
I study the role of specialization of the admission criteria as a choice variable used by HEIs to maximize their objective function of admitting the best pool of students possible. These are expected to have better academic performance, better placement after degree completion, and better future labor market achievement. As a result, HEIs benefit from improvements in their own reputation.
The central finding of this paper is that universities with less specialized admission policies admit a pool of students who perform better at university by the end of the degree (namely, they obtain a higher final GPA on average). This is a surprising and relevant result. Many universities operate under the assumption that candidates who perform better at a subject-specific test will go on to perform better in that subject at university. For instance, in the artificial intelligence and data science course in Portugal, mathematics is often perceived as a relevant field-specific skill, while languages convey information on broader cognitive skills.1 I define general skills as those that are not directly rewarded by the field of study. They might be informative about a student’s ability, but they are not specific to the field. Ability refers to the potential or innate ability that an individual is born with, while skills are learned and acquired with time. Strong emphasis on field-specific skills rewards students who specialize early in high school rather than cultivate a more versatile portfolio of skills.
In my theoretical framework, I consider a single university and a continuum of students. The university’s objective is to admit the best pool of students possible. In turn, students aim to maximize a utility function that depends on their own future performance and the level of effort exerted on the university admission tests. I assume that the university can select students on the basis of a broader set of tests. Of the tests available, some measure skills that will be considered specialized, while others are tests of skills that serve solely as a signal of ability (the general skills). These skills are still informative about future professional life (Kautz et al. 2014), although in the model I assume a scenario where they might not be as strongly related to student performance as the specific skill. Thus, the university must decide how to set its admission criteria, either (i) by using only a field-specific test or (ii) by combining it with a test of a more general nature. In the latter case, the university gains additional information about a student’s ability. However, in return, it must contend with students putting less effort into the specialized test, because they optimally reallocate their effort. In the case of a single field-specific test, the effort of students would be directed solely toward acquiring knowledge and preparing for the exam that contributes to future academic performance and is directly valued in the labor market (see, e.g., Card et al. 2018).
From a theoretical point of view, I find that the university chooses to use a general skills test as a criterion for selection alongside a test to evaluate subject-specific skills. Although, in equilibrium, students divert effort from the field-specific test (the productive test) to the general skills test, a different set of students (more able on average) is admitted and ultimately performs better than when only a field-specific requirement is in place.
In my empirical analysis, I use Portugal as a case study. Although the Portuguese system has specificities, the lessons one learns from it provide generalizable insights to any HE system that considers at least one of two different metrics in the admission process: one that matches closely to what a student actually learns in the subject of study (a field-specific test for instance) or another one that conveys information of a more generic nature, not directly rewarded by the field of study.
I rely on an extremely rich administrative dataset of the population of HE applicants from 2008 to 2018. Over that period, I observe approximately 800,000 student applications to the first year of an HE program. For each applicant, there is information regarding personal characteristics, socioeconomic background, previous academic achievement, the application process (including all program preferences stated with the corresponding overall application score and each exam score), and the HEI placement. Moreover, for 5 years, I have information on the performance of students at HE for each year of their degree and subsequent graduation information (namely, the graduation date and the final GPA).
I consider the Portuguese exam as a general skills test. The Portuguese exam is compulsory and the only common exam in all academic tracks in high school. The performance in this exam is available for all applicants to HEIs. With some exceptions, most programs have multiple tracks of admission. They require either a subject-specific set of exams as admission criteria or a broader set of exams. The student will be ranked according to whichever set of exams yields the highest overall application score. For instance, suppose a student applies for an economics degree at a particular university. The score that determines the ranking in the application process is either the mathematics exam score or the average of mathematics and Portuguese exam scores, depending on which generates the highest application score and hence the highest feasible position in the ranking. The mathematics exam is considered to be the field-specific requirement, while the Portuguese exam is the general skills test. However, not all HE programs consider a nonspecific set of exams as an admission criterion. Thus, I perform my empirical analysis on HE programs that include the general skills test (the Portuguese exam) in the broader set of exams.2 Therefore, my results do not apply to highly specialized subjects, such as medicine.
Because of this admission criteria design, I observe students being admitted to the same program on the basis of performance in different exam sets. In my empirical analysis, the treatment is the inclusion of the Portuguese exam as an admission criterion at university. Nevertheless, students might self-select into programs on the basis of unobserved attributes. I tackle the problem of selection into treatment by first determining, for each student, an alternative application score: the score she would have had under a different exam set. For each student, I compute (i) her application score in the case the Portuguese exam was considered (my measure of general skills) and (ii) her application score in the case the Portuguese exam was not considered (my field-specific skills measure).
I define exposure to treatment in the following way. I focus my analysis around the program admission threshold (the admission score of the last admitted student). Within each program, I define three categories of admitted students: the specialists (those admitted with a field-specific test and whose general skills would not have been enough to gain entry to the HEI), the generalists (those admitted purely on the basis of their performance on the general skills test who would not have been admitted on the basis of only the field-specific test result), and the all-rounders (those who could have been admitted with each one of the two types of tests). I observe differences in academic student performance across the different groups at distinct landmarks of the academic program. In particular, I find that (i) the generalists perform no worse than their specialist peers by the end of the first academic year, and (ii) by the end of the program, the specialists are outperformed by the all-rounders and the generalists.
My results have substantial implications for university admission practices. Universities that include a general skills test in their set of entrance exams benefit at the margin of admission, because this has a positive effect on the average student performance. Although this effect is not observable by the end of the first year, students admitted on the basis of a broader set of skills and knowledge perform better on average toward the end of the program.
An additional policy implication concerns the role of the field-specific exam, which may or may not be fit for purpose. Differences in student performance across groups suggest that the field-specific exam on its own might not be an effective test of the specialist skills associated with high achievement at university, which is the main policy message of my paper. Although I find evidence that the field-specific exam is a predictor of student performance by the end of the first academic year, I observe that the generalists and all-rounders have a better performance by the end of the degree when compared with specialists. Thus, considering only the field-specific exam does not guarantee that those who perform well on it will perform better at university.
Alternatively, another possible interpretation is that the general and field-specific exams measure the same characteristics, but the general exam is just a more precise signal. In that case, the result would be driven by the exams’ precision rather than the differences between general and specific skills. Thus, the generalization of the previous policy implication will depend on how exams are designed in other settings.
Finally, in my empirical analysis, I consider only programs that allow for the Portuguese exam as an alternative requirement, that is, programs where there is variation in the standardized tests required for admission. Thus, my empirical results should not be interpreted as meaning that all universities should require students to take the Portuguese exam; my empirical analysis does not allow me to conclude that. In the mechanism section (sec. X), I discuss and acknowledge that my empirical results may not be generalizable across all fields of study.
The remainder of the paper proceeds as follows. Section II summarizes the literature review, and section III sets up the theoretical framework and its assumptions. Section IV defines the equilibrium in the two stages: the effort of students and the university’s choice of admission criteria. In that section, I also consider an extension of the model where the social planner plays a role. Section V describes the institutional setting, and section VI presents the datasets. Section VII presents the empirical setup, and section VIII presents the estimation strategy. Section IX presents the results, and section X discusses possible mechanisms. Finally, section XI concludes.
II. Literature Review
I contribute to different strands of literature. First, my paper contributes to the body of work that seeks to evaluate the role of standardized tests in the process of admission to HEIs (e.g., Rothstein 2004; Zwick and Green 2007). Examples of standardized tests include A-levels in the United Kingdom and SAT/ACT in the United States. The widespread use of and reliance on these tests has been criticized in the literature for bias with respect to ethnicity (e.g., Bridgeman, McCamley-Jenkins, and Ervin 2000; Freedle 2003), gender (e.g., Leonard and Jiang 1999), or socioeconomic background (e.g., Zwick 2019; Campbell et al. 2022). Despite a lack of consensus in the literature on whether such tests predict performance at university (e.g., Burton and Ramist 2001; Kuncel and Hezlett 2007; Radunzel and Noble 2012), they are the most common instrument used for admission. In this paper, I conclude that admission tests indeed help to predict student performance at university. Namely, the nonspecific tests are informative of overall performance, while the specific tests are good predictors of student performance in the first year of the HE program.
Second, my paper contributes to the literature on the effects of broadening admission criteria in HE (e.g., Sternberg 2010; Sternberg et al. 2012; Schmitt 2012; Stemler 2012; Niessen and Meijer 2017) and the effect of “noncognitive” skills, including for instance social skills and leadership skills (e.g., Deming 2017). Some HEIs rely on a second source of information. For instance, in the United States, some Ivy League universities rely on interviews. In the United Kingdom, Oxford and Cambridge establish their own entrance exams for some degrees. Often, this extra step in admission broadens the type of information gathered about the student by the institution. My work suggests that broadening the nature of cognitive admission tests leads to increased performance at university because abler students are admitted. Nevertheless, the general admission test should be considered as a complement to field-specific requirements.
Third, this paper contributes to the literature on the predictive performance of test scores and its implications for admission policies. For instance, although high-stakes assessments influence the decisions of students to apply to HEIs (Papay, Murnane, and Willett 2016), Bettinger, Evans, and Pope (2013) propose to reduce the number of ACT components to improve college admissions. At the same time, other papers show the predictive power of past performance, such as high school transcripts (e.g., Belfield and Crosta 2012; Cyrenne and Chan 2012; Dooley, Payne, and Robb 2012). Moreover, prior work shows that combining teacher scores and high-stakes assessments is a better practice for selection of students (Westrick et al. 2015; Zwick 2019; Silva et al. 2020). My results corroborate that high-stakes assessments are a predictive tool of students’ future academic performance.
My work relates to literature in the field of the economics of HE. While previous work has focused on different aspects of admission to HEIs, such as the role of student portfolios, admission tuition, or college choice and selectivity (e.g., Epple, Romano, and Sieg 2006; Avery and Levin 2010; Avery et al. 2012; Hoxby and Turner 2013), my paper focuses on one novel aspect of the process of admission to HEIs: the choice of the examination structure, namely, how specific the admission exams should be. Even under the assumption that some skills have no direct influence on student performance, and thus evaluating them may distract from learning more productive skills, I show that the best way for the university to resolve the resulting trade-off is to have a nonzero weight on the general skills test. My empirical evidence also shows that universities with broader admission exams select a better pool of entrants.
Finally, the results of this paper are also linked to previous literature that shows it is useful to employ different types of instruments to select into HEI a pool of high-ability students. Although in this paper I focus on broadening the nature of standardized tests, countries often use a wide system of selection mechanisms to identify best-matching options that are not exclusively based on standardized tests. Most European countries rely on a combination of admission criteria of standardized tests and final high school GPA (an average of all courses taken at high school). The literature finds that high school GPA tends to be a better predictor of student performance in HE compared with standardized tests and that both measures should be considered as complements in the process of admission to HEIs (Cerdeira et al. 2018; Zwick 2019; Silva et al. 2020). I find that in Portugal, where high school GPA is considered together with standardized tests as admission requirements to HEIs, by broadening the nature of standardized tests, universities can select more high-ability students who have a broader set of skills. In other countries where admission is based solely on admission tests, considering high school GPA or teacher grades might also help to broaden the type of skills required to gain admission to HEIs.
III. Theoretical Framework
The design of admission policies aims at overcoming problems of imperfect information (Stiglitz 2000), namely, information asymmetries (Teixeira et al. 2006) that characterize HE markets, given that it is not possible to directly observe a candidate’s ability.3 College admission assessments provide incentives for signaling via both productive and nonproductive activities. In this section, I build a simple model where I compare two types of admission requirements (field-specific and general) that can potentially be used to overcome the problems of imperfect information.
In my theoretical framework, I focus on the choice of the examination structure in an HEI during the process of admission.4 I study a single university (or college) offering a single degree. A continuum of individuals of measure 1 want to apply to the university. However, there are more applicants than university places. I assume that only half of the applicants can be admitted to the university. The timing of the sequential game that I model below is as follows: the university sets the admission exams required to obtain admission, and then students decide on the amount of effort to exert on each exam. The text below describes the two types of admission exams considered, the goals of each one of the agents, and the key assumptions of the model.
A. Ability and Admission Tests
Students are identical in every respect except their innate ability, and there are two levels of ability, the low type αL and the high type αH. I follow MacLeod and Urquiola (2015) and Gary-Bobo and Trannoy (2008) by assuming that students do not observe their own ability. Both types of students have the same expectation of their own ability.5 To be admitted to the university, each student takes two admission tests. I assume that the observed score Ti on each admission test provides a noisy measure of ability and effort.
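For concreteness, one specification consistent with this description is given below; the additive form is an illustrative assumption, while the uniform noise follows the distributional statement in section IV.C:

$$T_i = \alpha + e_i + \varepsilon_i, \qquad i \in \{1, 2\},$$

where $e_i$ is the effort exerted on test $i$ and $\varepsilon_i$ is an idiosyncratic noise term, independent across tests and uniformly distributed.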
B. Labor Market and Wages
Similarly to Gary-Bobo and Trannoy (2008), I assume that there are only two categories of workers: the graduates from university (the skilled ones) and those who did not study at university (the unskilled ones). I take an individual’s wage, w, as a measure of the student’s future performance; it depends on the individual’s innate ability and on the effort exerted for T1.8 The wage relation is specified separately for individuals who do and do not attend university.
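For illustration only, one linear parameterization consistent with the description in the next paragraph is the following; the symbols β (the return to e1) and Δ (the university increment) are assumptions of this sketch rather than the paper’s notation:

$$w_{\text{unskilled}} = \alpha + \beta e_1, \qquad w_{\text{skilled}} = \alpha + \beta e_1 + \Delta\,\alpha, \qquad \beta > 0,\ \Delta > 0.$$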
The presence of the term e1 in both equations indicates that T1 (the field-specific exam) has a long-term benefit to the individual. The term T1 is assumed to be strongly related to potential future performance. Nevertheless, both admission tests produce a signal of ability that affects wages.9 There is an increment in wages from attending university, which is proportional to the individual’s ability, and I assume that this increment is positive. Students with higher ability are expected to earn higher wages and to benefit more from attending university.
For clarity, labor market potential performance will be a function of a student’s innate ability, α, and of e1. As opposed to ability α, which is time-invariant, e1 is acquired with maturity. Therefore, even if a student does not attend university, these skills have an effect on the student’s future performance.10
C. University
The university needs to determine the weight that it allocates to each admission test. Let λ be the normalized weight of T2. The goal of the university is to maximize the total wage of its own students by using λ as an instrument, such that all students with (1 − λ)T1 + λT2 ≥ τ* are admitted, where τ* is an endogenous threshold.11 The term τ* is the minimum admission score of the weighted average of the two test scores among the admitted students. Considering that the university admits half of the population, τ* is the median of the weighted average of the two admission tests. I assume that there are as many low-ability students as high-ability students in the population and that the university is not able to distinguish between high-ability and low-ability students. Depending on λ, measures nH of high-ability students and nL of low-ability students will be admitted to the university.
D. Students
In the model, the timeline for a student is as follows: she leaves high school and applies to the university. Conditional on passing the admission tests, she is accepted at university. Otherwise, she goes directly to the labor market. A student’s objective function is the maximization of the difference between her future wage and effort cost. When applying to university, she knows λ and she exerts effort on the two admission tests. The effort has a utility cost measured by C(e), which is increasing and convex; the functional form assumed in the model involves a parameter δ that governs how costly it is for the student to spread effort across the two tests.12
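As an illustration only, since the exact functional form is not pinned down here, a convex cost with such a parameter δ could take the form

$$C(e_1, e_2) = \tfrac{1}{2}\left(e_1^{2} + e_2^{2}\right) + \delta\, e_1 e_2, \qquad \delta \ge 0,$$

under which a larger δ makes it more costly to exert effort on both tests at once.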
By exerting effort, the student knows that she will pass the admission threshold with probability Π, where Π is the probability that her weighted test score (1 − λ)T1 + λT2 reaches τ*. The benefit to the student of attending university is that her wage will be boosted. The potential increase in earnings will be proportional to the student’s ability. The student’s ability is revealed when she leaves university.
IV. Equilibrium
In this section, I find the equilibrium by first determining the student’s optimal level of effort exerted on the two admission tests and then incorporating this into the university’s admission problem.
A. Effort of Students
In equilibrium, each student knows the effort distribution of the entire cohort, so she takes as given the effort choices of all her fellow students; thus, τ* is a function of those choices.
The assumptions that all students have the same expectation about their ability and that ability is revealed in the job market pin down the student’s expected wage.13
Each student then chooses the effort exerted on the two tests to maximize her expected wage net of the cost of effort.
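Using the illustrative wage and cost sketches above (again, β, Δ, and δ are assumptions of the sketch, not the paper’s notation), the student’s problem can be written as

$$\max_{e_1, e_2 \ge 0} \;\; \mathbb{E}[\alpha] + \beta e_1 + \Pi(e_1, e_2)\,\Delta\,\mathbb{E}[\alpha] - C(e_1, e_2),$$

where Π(e1, e2) is the probability of clearing the admission threshold defined above.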
Proposition 1. The optimal effort (e1*(λ), e2*(λ)) is piecewise in λ: for values of λ below a threshold, the student exerts no effort on T2 and e1* does not vary with λ; above that threshold, she exerts positive effort on both tests.
Proof. Available in the online appendix.
According to proposition 1, for small values of λ I obtain a partial corner solution, and thereafter I have an interior solution. In other words, when the weight on T2 is too small, the student does not exert effort on that test, and e1 is constant in λ.15
As a result, the optimal amount of effort varies with λ as shown in figure 1.16
Figure 1 represents the student’s optimal strategy in a Nash equilibrium. For low values of λ, the student would not change her decision when compared with the case where λ = 0. The same occurs if the cost of switching from one test to another is too high (which means a high value for δ). Alternatively, when the university increases the weight on T2 beyond the threshold identified in proposition 1, the student reallocates effort from T1 to T2. Thus, under some conditions, allowing for a second exam diverts effort from the productive test (T1). From a student’s perspective, this is the optimal strategy to maximize the difference between expected wage and costs.
B. University Choice
In this subsection, I show that not considering the second test (λ = 0) is generally not optimal for the university. I normalize the high-ability level to αH = 1; the low-ability level αL then lies below 1.
The university cares about the wage of its own students. Let λ* be the solution of the university problem and (nH(λ), nL(λ)) the measures of high-ability and low-ability students admitted to the university. The maximization problem of the university is as follows.
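In the illustrative notation used above (the wage forms remain assumptions of the sketch), the university’s problem of choosing the weight λ can be written as

$$\max_{\lambda \in [0,1]} \;\; \Pi_U(\lambda) \;=\; n_H(\lambda)\,\mathbb{E}\!\left[w \mid \alpha_H, e^*(\lambda)\right] \;+\; n_L(\lambda)\,\mathbb{E}\!\left[w \mid \alpha_L, e^*(\lambda)\right],$$

where e*(λ) denotes the students’ optimal effort from proposition 1.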
In equilibrium, the university takes the student’s optimal effort into consideration to maximize the wage of the students that it admits.17 As a result, the university’s payoff function, ΠU(λ), depends on the optimal effort (defined in proposition 1) and on (nH(λ), nL(λ)). The measure of admitted students is determined in the following lemma.
Proof. Available in the online appendix.
Instead of determining λ* explicitly, I show that the optimal solution is not zero. The main result is presented in the following theorem.
Theorem 1. Under a condition on the model’s parameters, the solution of the university’s problem is λ* > 0.
Proof. The university chooses λ to maximize its payoff. According to the result presented in proposition 1, the optimal effort is a continuous function of λ. Following lemma 1, I can also verify that the measure of admitted students is a continuous function of λ.19 Additionally, the optimal effort is not differentiable at the value of λ at which the corner solution of proposition 1 ends, and nH(λ) fails to be differentiable at one further point. Thus, ΠU is continuous in λ and differentiable on its domain except at these two points.
Given that I know the payoff function (ΠU), the number of admitted students (nH(λ), nL(λ)), and the student’s optimal effort, I can determine the first derivative of the payoff function.
C. Interpreting the Results
I model the university admission problem with a focus on two aspects: the noise and the nature of the admission tests. Figure 2 presents the two test score distributions for students of both abilities. The horizontal axis shows the test score distribution of T1, and the vertical axis shows the test score distribution of T2. Ability is fixed for each student type, and (e1, e2) is also fixed in the equilibrium. Given that the two idiosyncratic terms of both tests follow a uniform distribution, the shape of the distribution is rectangular. Besides the gap in ability, the noise of the two tests is a key factor in both distributions.20
According to figure 2, when λ increases, the university can select more high-ability students.21 The increment in λ is illustrative of the following: if a university is faced with two students who have the same score on T1, the university should select the one with the highest score on T2. Indeed, as argued by Holmström (1979), additional information would allow a more accurate judgment of a student’s performance. For students who have the same score on T1, the university would like to pick the ones who are also good on T2 (even if T2 conveys less relevant skills).22
The other important feature of my model is the nature of the second test. I assume that there are no administrative costs of introducing a second test. However, there is an inefficiency associated with that test: it reduces the productive effort, e1 (see fig. 1).23 The cost of running a less relevant test (the general admission exam) is a decrease in the student’s future productivity, which reduces the university payoff. In the model, I assume the extreme scenario where the less productive effort, e2, is less informative about future wages than e1. Hence, when the university is deciding about introducing a second admission test, it faces a trade-off between gaining new information about the student’s ability and losing productive effort. Given this, there are two effects on equilibrium occurring at the same time. On the one hand, geometric intuition shows that by increasing λ, the university is able to increase the number of high-ability students. On the other hand, from figure 1, I conclude that an increase of λ has a detrimental effect on e1, which has a negative effect on the student’s future wage. The two effects move in opposite directions.
Nevertheless, in theorem 1, I have shown that the overall effect of introducing a second admission test is positive. Even if students divert effort from the relevant test, the university still benefits from including a general admission exam in its admission criteria.24 A separate question is whether that is socially desirable.
Alternatively, I also considered the case of the social planner. I took a utilitarian approach to government intervention. The government’s goal is to maximize the unweighted sum of wages of all individuals, internalizing the cost of effort exerted in the two admission tests. This approach is common in the literature since Arrow (1971). I find that the university and the government do not always agree about λ. Nevertheless, both the university and the government agree that the weight on T2 should be positive.
V. Admission Policies in Portugal
The theoretical framework can be applied to any HE system where different admission conditions are combined, such as field-specific tests and additional requirements. This combination is common across countries, and Portugal is one such case. In Portugal, universities use national central exams and the high school GPA as mandatory requirements.
As explained before, I aim to test whether the inclusion of a general skills test as an admission criterion generates a better pool of students. However, it is crucial to clarify the admission procedure of HEIs in Portugal before presenting my estimation strategy.
A. Centralized Allocation of Candidates
In Portugal, the process of admission to public universities is comparable to that in several other countries (e.g., Spain, Brazil, Colombia, Hungary, and Denmark). Admission to public HEIs is centralized and managed by the government. HEIs choose their admission requirements. Students, in turn, can apply to up to six HE programs (institution-degree pairs), ranked by order of preference. Each year, the government sets the number of vacancies for each institution and degree, the numerus clausus. There are approximately 50,000 vacancies each year in more than 170 public HEIs offering a total of 1,180 programs.
Table A1 provides further information. The Portuguese HE system is binary and composed of polytechnics and universities, which can be either public or private. I focus on the public HE system. Applications to the private HE system are not centralized, and the share of students enrolled in the private system declined from 20% in 2011 to 18% in 2019 (Biscaia, Sá, and Teixeira 2021). An institution offering a degree can be a department/faculty within a polytechnic or university or can be the university or polytechnic itself, depending on its organizational structure. I consider only first-degree cycles in my analysis. Throughout the text I will refer to an institution-degree pair as an HE program or simply a program.25
The government also sets boundaries on admission requirements that universities must respect. The weight to attach to the high school GPA (which is an average of all courses taken by the student at high school) must be between 50% and 65%. The remaining weight is allocated to the standardized test(s), chosen by the university out of the national exams.
Finally, the government manages the allocation of candidates via a deferred acceptance (DA) mechanism (Gale and Shapley 1962).26 Having set the number of vacancies at each program, it then ranks all the candidates to that program on the basis of the admission criteria set by the university. Note that the admission score of each student is specific to each program, because different exams and different weights are enforced across programs. Therefore, the government computes each year approximately 1,000 rankings. Each candidate will be listed in as many rankings as the number of programs she applied to. She will be offered admission into her highest feasible stated preference, given the number of vacancies and the quality of the pool of competing applicants. Each student is allocated to a single program, and the student cannot change the result of the allocation.
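The allocation logic can be illustrated with a short sketch of a student-proposing deferred-acceptance procedure; the function and data structures below are hypothetical simplifications, and the government’s actual implementation may differ in details such as tie-breaking.

```python
# Hypothetical sketch of the student-proposing deferred-acceptance allocation:
# each program ranks applicants by its own admission score, and every student
# ends up in the highest-ranked program on her list that still has room for her.
from collections import defaultdict

def deferred_acceptance(preferences, scores, vacancies):
    """
    preferences: dict student -> ordered list of programs (most preferred first)
    scores:      dict (student, program) -> program-specific admission score
    vacancies:   dict program -> number of places
    Returns a dict student -> program; unplaced students are omitted.
    """
    next_choice = {s: 0 for s in preferences}   # next preference each student will propose to
    held = defaultdict(list)                    # program -> students tentatively held
    proposing = [s for s in preferences if preferences[s]]

    while proposing:
        student = proposing.pop()
        if next_choice[student] >= len(preferences[student]):
            continue                            # list exhausted: the student stays unplaced
        program = preferences[student][next_choice[student]]
        next_choice[student] += 1
        held[program].append(student)
        # Keep only the top-scoring applicants up to the program's capacity.
        held[program].sort(key=lambda s: scores[(s, program)], reverse=True)
        rejected = held[program][vacancies[program]:]
        held[program] = held[program][: vacancies[program]]
        proposing.extend(rejected)              # rejected students propose to their next choice

    return {s: p for p, students in held.items() for s in students}
```

Because rejected applicants simply move on to their next stated preference, each student ends up at her highest feasible stated preference given the vacancies and the competing applicants, which is the property described above.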
B. The University Problem in Practice
Each HEI defines, for each program, the number and nature of national exams required as its admission criteria, as well as their weights. The institution has a set of 19 entrance exams available to choose from (see table 1).
Track | Core Field-Specific Exam (Twelfth Grade) | Additional Field-Specific Exams (Eleventh Grade; Choice of Two) | General Exam (Twelfth Grade) | General Exam (Eleventh Grade)
---|---|---|---|---
Arts | Drawing | Descriptive geometry, mathematics, history of culture and arts | Portuguese | Philosophy a |
Science and technology | Mathematics | Biology and geology, physics and chemistry, descriptive geometry | Portuguese | Philosophy a |
Socioeconomics | Mathematics | Economics, geography, history B | Portuguese | Philosophy a |
Languages and humanities | History A | Geography, Latin, German, French, English, Spanish, Portuguese literature, applied mathematics | Portuguese | Philosophy a |
All students in high school must take the core field-specific exam at the end of the twelfth grade, which is defined according to the academic track they have followed: arts, science and technology, socioeconomics, or languages and humanities. They must additionally have taken at the end of eleventh grade two other field-specific exams chosen out of a set of three to eight possible exams (see table 1). Independently of the track chosen, all students must take a general exam of Portuguese at the end of twelfth grade.27
The university must respect a single constraint when defining the number of exams required and their nature and weights. The number of exams required is either one or two, but the university may allow different exam combinations.28 If requiring one exam, the institution can specify it or allow candidates a choice among a defined set. In either case, the university is free to require a field-specific exam or a general one (see table 1). If the institution requires a second exam, the same procedure applies. If two exams are considered, they must have equal weight (the exams’ share of the admission score is divided equally between them). If the institution allows for different access options to a program (exam combinations), and the student fulfills more than one access option (given the exams she took), the combination that is considered for ranking her among the candidates to that program is the one that yields her the highest admission score. As a result, within the same program, I can observe students being admitted with different exam combinations. In the Portuguese setting, because two exams must receive equal weight, the λ of the model takes one of two values: 0 (only the field-specific exam counts) or 1 (50% for the field-specific exam and 50% for the general exam).
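As an illustration of this rule, the short sketch below (with hypothetical field names and an arbitrary GPA weight) computes a student’s admission score for one program as the best score over the exam combinations she is able to use.

```python
# Hypothetical sketch of the admission-score rule: the high school GPA receives
# its program-specific weight (between 50% and 65%), the remaining weight is
# split equally across the exams in a combination, and the best-scoring
# combination the student can use is the one that counts.
def admission_score(hs_gpa, exam_scores, gpa_weight, exam_combinations):
    best = None
    for combo in exam_combinations:
        if not all(exam in exam_scores for exam in combo):
            continue  # the student did not take every exam in this combination
        exam_avg = sum(exam_scores[exam] for exam in combo) / len(combo)
        score = gpa_weight * hs_gpa + (1 - gpa_weight) * exam_avg
        best = score if best is None else max(best, score)
    return best

# Illustrative numbers only:
score = admission_score(
    hs_gpa=150,
    exam_scores={"mathematics": 160, "portuguese": 200},
    gpa_weight=0.50,
    exam_combinations=[("mathematics",), ("mathematics", "portuguese")],
)
```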
C. The Student Problem in Practice
The process of applying to HEIs requires students to rank their preferred programs. After the announcement of the admission criteria rules and after observing her own scores in the national exams, each student can order up to six programs to which she applies.
Students have an incentive to list programs that they judge feasible. Because they observe their scores and know past program admission thresholds, they try to exclude programs that are clearly out of reach and avoid wasting preference slots on them. Conditional on the programs listed, the student has an incentive to report her truthful ranking of preferences, because she knows that she will be allocated to her highest feasible stated preference.
VI. Datasets
In my study I link three primary datasets: (i) applications to public HEIs (DGES 2019b), (ii) students’ performance, and (iii) graduation from HEIs (DGEEC 2019).29
The application dataset provides information on the population of applicants to the HE system. For each student, I have demographic characteristics, socioeconomic background, previous academic achievement, the application process including all program preferences stated, and the placement. I have microdata for 11 years, from 2008–9 to 2018–19.
The students’ performance and graduation from HEI datasets are comprehensive data sources that cover all the HEIs in Portugal and have information on all the students enrolled. They report a student’s performance each year she has been enrolled and her graduation, whenever applicable. The performance dataset also reports information regarding the mobility status, such as the placement at an exchange program.
Over the period 2008–18, I observe approximately 800,000 student applications to the first year of an HE program under the General Access Regime (GAR; the centralized process of application to public HEIs). Additionally, I track enrolled students over the 5 years from 2013–14 to 2017–18. This panel comprises 1.5 million individual-year observations for 760,000 individuals.30 I also observe approximately 330,000 individuals graduating from 2012 to 2017, irrespective of their application year and years of enrollment.31 I link the three datasets (application, performance, and graduation), restricting the link to those individuals who applied to public HEIs through the centralized national system, who were offered a place, and who enrolled.32 Of the individuals observed in the performance dataset, I was able to match 96% to the application dataset, namely, 205,297 individuals. Table 2 presents descriptive statistics for the linked individuals. The majority of the individuals studied at a public high school, and 57% of the individuals are female. One-third of the individuals are nonlocal students, meaning that they study in a geographic area different from that of their household.33 Moreover, only 30% of the individuals have a mother or father who has an HE degree.
| | Mean | SD |
|---|---|---|
A. Data Structure | |||
Initial year | 2013–14 | ||
Final year | 2017–18 | ||
No. of years | 5 | ||
No. of individuals | 205,297 | ||
B. Individuals | |||
Age | 18.42 | 1.71 | |
Female (share) | .57 | ||
High school GPA | 149.46 | 20.05 | |
Public high school (share) | .84 | ||
Nonlocal student (share) | .30 | ||
Mother has HE (share) | .33 | ||
Father has HE (share) | .26 | ||
Applied to a maintenance grant (share) | .32 | ||
Received a maintenance grant (share) | .25 | ||
C. Placement | |||
Degree of placement (no. of individuals): | |||
Bachelor’s | 166,741 | ||
Integrated master’s | 38,556 | ||
Preferences of placement (share): | |||
First | .56 | ||
Second | .21 | ||
Third | .11 | ||
Fourth | .06 | ||
Fifth | .04 | ||
Sixth | .02 | ||
Application score (0–200) | 144.72 | 20.13 | |
No. of admission exams | 1.37 | .55 | |
Portuguese admission exam (share) | .22 | ||
Portuguese exam score (0–200) | 121.66 | 28.21 | |
No. of programs ranked | 4.79 | 1.58 | |
No. of institutions ranked | 2.85 | 1.46 | |
No. of broad fields ranked | 2.02 | .99 |
Once I define the outcomes of interest, the variables of interest, and the methodology to be used, I will impose further constraints that should be applied to the linked dataset in order to determine the analysis population.
VII. Empirical Setup: Tests and Performance
A. Outcomes
In this study, I consider three measures of student performance at university (yi) for an individual i: (i) the number of credits obtained through the European Credit Transfer System (ECTS) by the end of the first academic year, (ii) whether the individual completed his or her HE program on time (completion on time), and (iii) final GPA.34 These outcomes are different in their nature. They measure student performance at different stages of the degree. For each measure, the subpopulation of analysis is different. In table 3, I report descriptive statistics for each subset that will be considered in my analysis (for descriptive statistics of the total sample population, see table 2).
| | ECTS Credits Accumulated by the End of the First Year | | Completion on Time (for 3-Year Degrees) | | Final GPA | |
|---|---|---|---|---|---|---|
| | Mean | SD | Mean | SD | Mean | SD |
A. Data Structure | |||||||||
Initial cohort | 2013–14 | 2013–14 | 2013–14 | ||||||
Final cohort | 2016–17 | 2014–15 | 2015–16 | ||||||
No. of years | 4 | 2 | 2 | ||||||
No. of individuals | 134,143 | 52,966 | 29,263 | ||||||
B. Individuals | |||||||||
Age | 18.40 | 1.67 | 18.63 | 1.99 | 18.39 | 1.73 | |||
Female (share) | .58 | .56 | .67 | ||||||
High school GPA (0–200) | 134.14 | 19.91 | 144.69 | 17.32 | 149.06 | 17.19 | |||
Public high school (share) | .85 | .86 | .87 | ||||||
Nonlocal student (share) | .31 | .29 | .31 | ||||||
Mother has HE (share) | .32 | .28 | .26 | ||||||
Father has HE (share) | .26 | .21 | .19 | ||||||
Applied for a maintenance grant (share) | .32 | .32 | .36 | ||||||
Received a maintenance grant (share) | .24 | .26 | .28 | ||||||
C. Placement | |||||||||
Degree of placement (no. of individuals): | |||||||||
Bachelor’s | 108,373 | 52,966 | 29,257 | ||||||
Integrated master’s | 25,770 | … | 6 | ||||||
Preferences of placement (share): | |||||||||
First | .59 | .58 | .64 | ||||||
Second | .21 | .21 | .20 | ||||||
Third | .10 | .10 | .09 | ||||||
Fourth | .05 | .05 | .04 | ||||||
Fifth | .03 | .03 | .02 | ||||||
Sixth | .02 | .01 | .01 | ||||||
Application score (0–200) | 144.96 | 20.05 | 138.95 | 17.41 | 143.01 | 17.48 | |||
No. of admission exams | 1.37 | .56 | 1.15 | .36 | 1.19 | .39 | |||
Portuguese admission exam (share) | .22 | .28 | .29 | ||||||
Portuguese exam score (0–200) | 121.78 | 28.36 | 117.00 | 27.08 | 120.99 | 26.94 |
I observe the number of ECTS credits only for students that completed the first academic year of their degree. I do not have information on the ECTS credits for the 2017–18 entry cohort and for individuals who dropped out during their first year (on average, dropouts represent 17% of enrollments per year). Overall, there are no differences between the subsample of individuals with ECTS credits (see table 3) and the total sample (see table 2). I define completion on time as a dummy variable equal to 1 if a student graduated on time and 0 otherwise. I consider only students enrolled either in a 3-year bachelor’s program or in an integrated master’s program (which awards a bachelor’s certificate after successful completion of the first 3 academic years). I exclude from the analysis the cohorts 2015–16 to 2017–18 and those individuals enrolled in 4- or 6-year bachelor’s programs in the 2013–14 and 2014–15 cohorts. Compared with the ECTS credits subsample, students in this completion subsample have, on average, a lower admission score but a better high school performance. Additionally, I observe a slightly higher share of students admitted with the Portuguese exam (28%).
Finally, the final GPA is available only for students who have finished their degree during the period of analysis, and it is an average score of all courses taken at university (including those taken in the first year).35 For that reason, in this subset I consider only students enrolled in a 3- or 4-year program for the 2013–14 cohort and those enrolled in a 3-year program for the 2014–15 cohort (the ones that could potentially have graduated).
Comparing the three different subsamples of analysis, the majority of the descriptive statistics remain the same. The major difference is associated with the gender and the high school GPA variables. In particular, both the share of females and the average high school GPA tend to increase when I look at those that graduate. It is important to control for both characteristics in the empirical analysis. Although the descriptive statistics are similar, I believe that it is relevant to look at the three different outcomes.
The number of ECTS credits and the final GPA can vary across programs. In particular, some programs might be more restrictive in their grading standards than others, so these outcomes can be very different across programs. For that reason, I consider deviations from the mean instead of total scores in my analysis. I standardize the number of ECTS credits and the final GPA within the program by subtracting the mean and dividing by the standard deviation. For each student, I consider the deviation from the mean outcome of the program in which she is enrolled.
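A short sketch of this within-program standardization, with hypothetical column names, is given below.

```python
# Hypothetical sketch: express each outcome as a deviation from its program
# mean, measured in program-level standard deviations.
import pandas as pd

def standardize_within_program(df: pd.DataFrame, outcome: str) -> pd.Series:
    grouped = df.groupby("program")[outcome]
    return (df[outcome] - grouped.transform("mean")) / grouped.transform("std")

# Example usage:
# df["ects_std"] = standardize_within_program(df, "ects_credits")
# df["final_gpa_std"] = standardize_within_program(df, "final_gpa")
```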
B. The Term T2 in Practice: The Portuguese Exam
From my model, there are two relevant attributes of the admission requirements that should be considered: the number of exams and their nature (generic or field-specific). In fact, the existence of a second exam always adds more information to the selection process (e.g., Holmström 1999). The addition of a general exam diversifies the nature of the information gathered by the institution. Furthermore, allowing the inclusion of a general exam favors candidates who are relatively better at general skills than at field-specific skills. However, the general exam is taken into account only when it improves the student’s admission score.
Consider the following example in table 4, where two students (Pedro and Alexandre) apply to the same degree at universities A and B. Both students prefer university A over B. However, university A considers only a field-specific exam (T1) as an admission criterion, while university B allows students to apply with a general exam (T2) combined with T1. Both Pedro and Alexandre have taken T1 and T2 in high school. However, when applying to university B, T2 is considered only in Alexandre’s case. Only in his case, the score on T2 increases the application score. On the other hand, in Pedro’s case, only the score on T1 is considered given that he performed better in T1 relative to T2. This means, holding constant preferences, Pedro will be allocated to university A and Alexandre to university B.
Student and Exam Taken | Exam Score | Student and Exam Required | Admission Score at University A | Admission Score at University B
---|---|---|---|---
Pedro: | Pedro: | |||
T1 | 170 | T1 | 170 | 170 |
T2 | 110 | T1 + T2 | NA | 140 |
Alexandre: | Alexandre: | |||
T1 | 160 | T1 | 160 | 160 |
T2 | 200 | T1 + T2 | NA | 180 |
In this example, I observe that Pedro performed much better on T1 relative to T2. Instead, Alexandre had a similar score on T1 compared with Pedro but performed even better on T2. Pedro represents a student profile strong on field-specific skills (specialist). Alexandre represents students that performed sufficiently well across disciplines, both field-specific and general skills (all-rounder profile). University A places a higher value on specialist students, while university B allows all-rounders to compete with specialists for the same place. Thus, university B captures a different pool of students compared with university A.
Note that if both universities had given the students the option to add the exam T2, Alexandre would have displaced Pedro at university A. Indeed, for the student, the option of inclusion of a general exam favors a student profile relatively more robust on general skills, all else equal. I aim to understand whether allowing all-rounders to compete with specialists would have increased student performance at a university.
The example also illustrates another implication for the university. If both universities had considered T1 as the single admission criterion, the allocation of students would have remained unchanged. However, the admission threshold of university B would have decreased from 180 to 160 (the admission score of the last admitted student). Therefore, one might imagine that the introduction of T2 can be seen as an incentive that universities use to boost the admission threshold score and signal themselves as more selective universities.
Alternative entry requirements.—For each program, there may be different alternative entry requirements (exam combinations). On average, each program allows for three alternative entry-exam combinations. Some combinations include the Portuguese exam, and others do not. Within and across programs, it is up to the discretion of each institution whether to include the Portuguese exam in one or more of the exam combinations allowed.
Table 5 presents the total number of exam combinations within programs. In total, there are 11,951 programs over the 11 years of analysis. I refer to each program per year as program-year. For instance, there are 2,316 program-years that allow for a single entry-exam combination, of which only 204 included the Portuguese exam.36 For each of the individual’s stated preferences (up to six), her application score is computed as the highest score out of the exam combinations set by the program.37
| | No. of Exam Combinations with Portuguese | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| No. of Exam Combinations | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 8 | Total |
| 1 | 2,112 | 204 | | | | | | | 2,316 |
| 2 | 1,305 | 322 | 25 | | | | | | 1,652 |
| 3 | 3,535 | 3,657 | 24 | 35 | | | | | 7,251 |
| 4 | 63 | 351 | 2 | 0 | 23 | | | | 439 |
| 5 | 2 | 117 | 2 | 0 | 0 | 15 | | | 136 |
| 6 | 9 | 117 | 2 | 2 | 0 | 0 | 19 | | 149 |
| 7 | 0 | 0 | 7 | 0 | 0 | 0 | 0 | | 7 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| Total | 7,026 | 4,768 | 62 | 37 | 23 | 15 | 19 | 1 | 11,951 |
In my analysis, I consider only programs that include Portuguese as an alternative requirement (and not mandatory). That is, Portuguese is included in at least one of the allowed exam combinations but not in all. Additionally, I define τ* as the program admission threshold.
VIII. Estimation Strategy
The treatment is the inclusion of the Portuguese exam as an admission criterion at the individual level. I define y(1) as the observed outcome of student performance in case of treatment and y(0) otherwise. The problem I must tackle is that selection into treatment might not be random. However, I can accommodate selection in my model by understanding how individuals are assigned to treatment.
In my analysis, I define three different groups of students: (i) the specialists, (ii) the generalists, and (iii) the all-rounders (see fig. 3). First, I observe students admitted without the Portuguese exam, whose field-specific skills are therefore better than their general skills. Among these students, some could not have been admitted with a general admission exam (specialists), while others could have been (all-rounders). Second, I also observe students admitted with the Portuguese exam who could have been admitted even without it (another type of all-rounders). These are students who are relatively better at general skills but whose field-specific skills would have been enough to get them admitted. Finally, I observe students who were admitted only because their Portuguese exam score was sufficiently high (generalists). These are students whose field-specific skills are relatively weak compared with their peers and who would not have been admitted without the Portuguese exam.
There is a potential trade-off in allowing generalists to compete with specialists. The university can increase the average quality of the pool of students by admitting all-rounders. At the same time, the university might get pure generalists in its pool. In the example, all students gained entry to the program. A consequence of including Portuguese as an alternative requirement (T2) is that generalists may take places that would otherwise have been allocated to specialists. Thus, I frame my research question in the following way: Do the generalists perform better at university relative to their peers?
To answer the question, I look at performance in T1 and T2. For individuals that were admitted with Portuguese, I dichotomize them according to whether their performance in T1 was sufficient by itself to meet the entry requirement (τ*).38 For individuals that were admitted without Portuguese, I look at performance in T2. In my estimation strategy, the specialists will be the comparison group. I hypothesize that the effect of allowing the inclusion of the Portuguese exam on individual performance will differ across groups. Namely, I expect generalists to perform worse at university than their peers, given that the generalists are weaker in terms of field-specific skills.
In the data, I determine, for each student, the application score she would have obtained with a different exam combination. For each student, I compute (i) her application score in the case the Portuguese (PT) exam was considered (τPT) and (ii) her application score in the case the Portuguese exam was not considered (τ∼PT). On the basis of four blocks of information (admitted with Portuguese, τ*, τPT, and τ∼PT), I can distinguish between the different comparison groups. Table 6 reports the number of students for whom I can identify each one of the two types of application scores.39
| | All Programs | | | Programs That Allow for Portuguese | | |
|---|---|---|---|---|---|---|
| | Mean | SD | Frequency | Mean | SD | Frequency |
| No. of individuals | | | 205,297 | | | 78,233 |
| No. of programs (institution-degree pairs) | | | 1,242 | | | 482 |
| No. of program-years | | | 5,059 | | | 2,076 |
Actual application score (τ) | 144.72 | 20.13 | 205,297 | 139.02 | 17.66 | 78,233 |
| Admission score included Portuguese (share) | .22 | | 45,154 | .51 | | 40,016 |
Portuguese exam score | 127.90 | 23.08 | 139,241 | 120.10 | 25.90 | 67,896 |
Application score with Portuguese (τPT) | 134.55 | 18.05 | 73,034 | 133.71 | 17.79 | 67,896 |
Application score without Portuguese (τ∼PT) | 146.56 | 20.31 | 172,500 | 141.65 | 18.06 | 46,837 |
(τPT − τ∼PT) | −9.59 | 12.40 | 33,896 |
In the case that τ∼PT ≤ τ*, student i is classified as a generalist. Otherwise, she is classified as an all-rounder. As a result, according to table 7, I observe that 8% of the overall sample of enrolled students are all-rounders that were admitted with Portuguese, and 4% of the enrolled students are generalists. Additionally, for 39% of the overall sample, I cannot distinguish between the two groups. This occurs because I cannot observe their scores on the exam combination required in the case that Portuguese was not considered.
| | Observed | | | | Imputation | | | |
|---|---|---|---|---|---|---|---|---|
| | Overall Sample | 0 ≤ τ − τ* ≤ 10 | 0 ≤ τ − τ* ≤ 5 | 0 ≤ τ − τ* ≤ 2 | Overall Sample | 0 ≤ τ − τ* ≤ 10 | 0 ≤ τ − τ* ≤ 5 | 0 ≤ τ − τ* ≤ 2 |
No. of individuals | 78,233 | 42,350 | 26,358 | 13,592 | 78,233 | 42,350 | 26,358 | 13,592 |
Admitted without Portuguese exam (share): | ||||||||
Specialists | .21 | .29 | .31 | .33 | .28 | .37 | .41 | .43 |
All-rounders | .15 | .07 | .04 | .02 | .21 | .11 | .07 | .04 |
Nonidentified | .13 | .11 | .11 | .11 | ||||
Admitted with Portuguese exam (share): | ||||||||
All-rounders | .08 | .06 | .04 | .03 | .38 | .34 | .30 | .27 |
Generalists | .04 | .06 | .08 | .09 | .14 | .18 | .23 | .26 |
Nonidentified | .39 | .40 | .41 | .42 |
To distinguish between specialists and all-rounders (who were admitted without Portuguese), I need to look at performance in T2. Individual i is classified as a specialist when I observe that τPT ≤ τ*. Otherwise, she is classified as an all-rounder. In fact, I observe that 21% of the enrolled students are specialists and 15% of the enrolled students are all-rounders, while for 13% I am not able to distinguish between the two groups.
In both distinctions, I assume that the pool of allocated students and the threshold (τ*) would remain the same if the admission criteria were changed.40 Because of data constraints, the division between specialists and all-rounders is more accurate than the division between generalists and all-rounders. In the dataset, I observe the Portuguese exam score for the majority of students. As a result, when looking at those admitted without Portuguese, I cannot identify the group for 13% of the students, while for those admitted with Portuguese the percentage increases to 39%. This imbalance in assigning the sample to groups can potentially bias the results.
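The classification just described can be summarized in the short sketch below; the column names are hypothetical, and the weak inequality at the threshold follows the definition of the generalist indicator used in the regression (application score without Portuguese smaller than or equal to the threshold).

```python
# Hypothetical sketch of the grouping rule: tau_star is the program admission
# threshold, tau_pt the counterfactual application score with the Portuguese
# exam, and tau_nopt the counterfactual score without it.
import pandas as pd

def classify(row) -> str:
    if row["admitted_with_portuguese"]:
        if pd.isna(row["tau_nopt"]):
            return "non-identified"
        # A generalist needed the Portuguese exam to clear the threshold.
        return "generalist" if row["tau_nopt"] <= row["tau_star"] else "all-rounder (with PT)"
    if pd.isna(row["tau_pt"]):
        return "non-identified"
    # A specialist's general-skills score alone would not have cleared the threshold.
    return "specialist" if row["tau_pt"] <= row["tau_star"] else "all-rounder (without PT)"

# Example usage:
# df["group"] = df.apply(classify, axis=1)
```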
Therefore, I run my analysis around the program admission threshold. I am interested in comparing student performance among those whose Portuguese exam was, at the margin, the only reason why they were admitted (generalists). In this setup, I consider the reduced-form regression

yi = α1 generalistsi + α2 all-rounders(PT)i + α3 all-rounders(∼PT)i + Xi β + Θ + εi, (9)
where yi is the outcome, Xi is a vector of controls (gender, high school GPA, and nonlocal student), Θ represents the fixed effects (I consider year, preferences, and program fixed effects), and generalists is an indicator equal to 1 if the student’s application score without Portuguese is smaller than or equal to the threshold. The treatment toggles on whether the student needed her result on the Portuguese exam to gain entry into the HE program. The term α1 is the coefficient of interest, and it tells the reader whether the generalists performed better at university in comparison with the specialists (the omitted category) at the margin of gaining entry into the HE program, and α2 and α3 measure differences in student performance when comparing all-rounders with specialists.
The advantage of comparing students close to the threshold in the distribution of the admission score is that they were admitted to the same program and I can observe their outcomes. I assume that individuals around the assignment rule are comparable (comparing students who at the margin were admitted only because of the Portuguese exam with students for whom it was not necessary). I run my analysis within 10 and 5 points of the threshold, and table 8 provides descriptive statistics for all variables considered in the estimations (according to all constraints imposed on the analysis population).
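As an illustration only, the sketch below estimates a version of equation (9) on the 10-point window, with year, preference, and program fixed effects entered as categorical dummies; the variable names, the standardized GPA outcome, and the choice to cluster standard errors at the program level are my assumptions, not details taken from the paper.

```python
import statsmodels.formula.api as smf

# Keep students within 10 points of their program's admission threshold.
sample = df[(df["tau"] - df["tau_star"]).between(0, 10)].copy()

# Equation (9): group indicators (specialists omitted), controls, and
# year, preference, and program fixed effects.
formula = (
    "final_gpa_std ~ generalist + allrounder_pt + allrounder_no_pt"
    " + female + hs_gpa + nonlocal"
    " + C(year) + C(preference) + C(program)"
)
fit = smf.ols(formula, data=sample).fit(
    cov_type="cluster", cov_kwds={"groups": sample["program"]}
)
print(fit.params[["generalist", "allrounder_pt", "allrounder_no_pt"]])
```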
Table 8. Descriptive statistics, analysis by groups (0 ≤ τ − τ* ≤ 10)

| | ECTS Credits | | Completion on Time | | Final GPA | |
|---|---|---|---|---|---|---|
| | Mean | SD | Mean | SD | Mean | SD |
No. of individuals | 26,594 | 12,976 | 6,837 | |||
Female (share) | .66 | .62 | .71 | |||
High school GPA (0–200) | 141.30 | 16.61 | 139.39 | 15.49 | 142.90 | 15.34 |
Nonlocal student (share) | .32 | .31 | .32 | |||
Preferences of placement (share): | ||||||
First | .50 | .49 | .53 | |||
Second | .23 | .24 | .24 | |||
Third | .13 | .13 | .12 | |||
Fourth | .07 | .07 | .06 | |||
Fifth | .04 | .04 | .03 | |||
Sixth | .02 | .02 | .02 | |||
Application score (0–200) | 137.37 | 16.31 | 134.26 | 14.99 | 137.12 | 14.76 |
Portuguese admission exam (share) | .54 | .55 | .56 | |||
Portuguese exam score (0–200) | 119.27 | 24.96 | 115.67 | 23.91 | 118.20 | 23.74 |
IX. Results
I perform my analysis at the margin of gaining entry into HE, and all results in this section should be interpreted in that context. Table 9 presents the different estimations proposed in equation (9). In column 1, I look at differences in student performance across the four different groups, including year and preference fixed effects only. The omitted comparison group is the specialists. In column 2, I additionally control for individual characteristics, and in column 3, I include program fixed effects as well; column 3 therefore reports the most complete specification. Coefficients in panels A and C should be interpreted as standard deviation changes.
Table 9. Student performance at the margin of admission, by group (0 ≤ τ − τ* ≤ 10)

| | (1) | (2) | (3) |
|---|---|---|---|
A. No. of ECTS Credits by the End of the First Year (Cohorts 2013–14 to 2016–17) | |||
Generalists | .056*** | −.016 | −.009 |
[.018] | [.018] | [.018] | |
All-rounders (with Portuguese) | .060*** | .060*** | −.001 |
[.015] | [.015] | [.017] | |
All-rounders (without Portuguese) | .137*** | .099*** | .036 |
[.021] | [.021] | [.022] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
R2 | .009 | .035 | .058 |
Observations | 26,594 | 26,594 | 26,594 |
B. Completion on Time (Cohorts 2013–14 to 2014–15) | |||
(Probit–Average Marginal Effects) | |||
Generalists | .094*** | .034*** | .006 |
[.012] | [.012] | [.012] | |
All-rounders (with Portuguese) | .020** | .014 | .005 |
[.010] | [.010] | [.011] | |
All-rounders (without Portuguese) | .064*** | .024 | .037** |
[.015] | [.015] | [.015] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
Pseudo R2 | .019 | .064 | .166 |
Observations | 12,976 | 12,976 | 12,621 |
C. Final GPA (Cohorts 2013–14 to 2014–15) | |||
Generalists | .171*** | .127*** | .126*** |
[.032] | [.032] | [.034] | |
All-rounders (with Portuguese) | .154*** | .152*** | .063* |
[.027] | [.027] | [.034] | |
All-rounders (without Portuguese) | .192*** | .157*** | .089** |
[.037] | [.037] | [.041] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
R2 | .027 | .043 | .093 |
Observations | 6,837 | 6,837 | 6,837 |
Starting with the final GPA, in panel C of table 9, I find that both generalists and all-rounders perform better than specialists (the omitted category). This result also holds within program (col. 3). An all-rounder admitted without Portuguese has, on average, a final GPA that is 0.089 standard deviations higher than that of a specialist of the same ability within the same program. Moreover, a generalist’s final GPA is, on average, 0.126 standard deviations higher than a specialist’s within the same program. Therefore, all groups outperform specialists in all specifications considered concerning the final GPA.
Panel A of table 9 similarly shows that generalists and all-rounders (with and without Portuguese) seem to perform better than their specialist peers. However, these differences in the number of ECTS credits accumulated by the end of the first year do not hold once I introduce the program fixed effects. Within program, generalists and all-rounders perform no worse (and no better) than specialists.
Finally, when I consider completion on time as an outcome (panel B), the effect associated with the all-rounders who were admitted without Portuguese is positive and statistically significant in columns 1 and 3. The effect for all-rounders who were admitted with Portuguese is also positive but is not statistically significant once controls and program fixed effects are included.
At the margin of the cutoff, I expect students to have similar ability, and I would not expect many differences across groups. My results show that differences in performance across groups do exist, depending on the outcome measure considered. First, I find no evidence that students admitted purely on the basis of their performance on the Portuguese exam perform worse at university than the others. I do not observe generalists performing any worse than the specialists, which is a surprising result. In fact, when considering the final GPA, generalists on average perform better than specialists, and the effect is very similar across the different estimations.
In turn, I observe consistent differences in performance for all-rounders relative to specialists. Overall, both types of all-rounders seem to obtain slightly better final GPAs than their specialist peers, and all-rounders (without Portuguese) have a higher probability of completing on time. These results suggest that general skills matter at the margin of gaining entry into HE.
The next section further discusses these results and potential associated mechanisms.
X. Discussion and Mechanisms
Universities that allow for Portuguese as an alternative requirement benefit at the margin of admission. Students who perform better on general skills exams perform no worse at university than their specialist peers. Moreover, there is some evidence suggesting that students with a sufficiently high score on the Portuguese exam perform better at university, even when they did not need the general skills exam to gain entry into HE. As a result, the inclusion of a general skills test as an admission criterion can have a positive effect on student performance at the margin of admission.
The evidence suggests that all groups performed better relative to specialists in terms of final GPA. However, that effect is less pronounced and not statistically significant when I look at completion on time and at the number of ECTS credits by the end of the first year. Differences across groups vary according to the type of measure I consider; namely, at the beginning of the program, differences are less pronounced than toward the end. I interpret that result from two perspectives.
Final GPA includes all course scores, while ECTS credits measure only the number of credits accumulated, independently of the scores obtained. Specialists might have a head start, but they do not capitalize on that advantage: students with better general skills do not substantially outperform specialists by the end of the first year, and on average both groups accumulate the same number of ECTS credits within the same program. Even so, generalists outperform specialists when I consider the final GPA as the outcome. To a certain extent, this signals that generalists are able to adapt: students who enter with a different portfolio of skills need time to adjust.
Additionally, I can infer that the specialist exam may not be well designed. This result is particularly relevant in a centralized admission system that emphasizes the importance of field-specific skills. Throughout the paper, I made the assumption that the field-specific exam provides a good measure of specialist skills. However, in light of my results, I can hypothesize that the specialist exam is not very accurate at measuring the field-specific skills required. In fact, by the end of the program, specialists obtain a worse final GPA on average. The field-specific exam seems not to match the skills needed to perform well in the degree.
Moreover, policy implications should be drawn carefully. The outcome measures considered (ECTS credits, completion on time, and final GPA) differ in nature: they all measure student performance, but at different levels. ECTS credits (the number of credits completed by the end of the first year) evaluate initial performance and convey limited quantitative information, because they report whether the student completed a course but not her grade in it; completion on time is a qualitative measure of student performance, while the final GPA is a quantitative one. In other words, the first two measures inform the policy maker about how fast the student completed her studies, while the final GPA quantifies her performance. In addition, the number of observations is not constant across the three outcome measures, given that they evaluate performance at different moments of the student’s degree. Thus, each outcome is estimated on a different sample subject to attrition that might vary, along both observable and unobservable dimensions, between the groups of interest (generalists, specialists, and all-rounders). To address this issue, I adopt two different approaches. First, I reestimate the models on common sets of students (either those who completed on time or those who finished the degree and have a final GPA). Second, I estimate a selection model.
First, in table A3, I consider the final GPA only for those students who completed the degree on time. There, although the effect for all-rounders is not statistically significant, the effect for generalists remains statistically significant and of the same magnitude.
I also run additional regressions of credits at the end of the first year on a constant sample (students who finished the degree and have a final GPA) to impose the same level of attrition. These results are presented in table A6. By construction, panel C is the same as panel C in table 9, given that I am considering only students with a final GPA. Completion on time (panel B of table 9) is not reestimated on this sample, given that students who have a final GPA almost invariably completed on time (94%). In panel A of table A6, which refers to the number of ECTS credits completed by the end of the first year, the results on all-rounders (with and without Portuguese) are robust when compared with those in panel A of table 9. The results on generalists are now stronger, which suggests that the set of generalists who finish their degree and have a final GPA is positively selected: it might be that only the better generalists survived and finished the degree.
Second, I undertake an analysis of selection in table A4, where I estimate a Heckman selection model. In the selection equation, I estimate the probability of students not dropping out by the end of the first year. In the outcome equation, I model students’ final GPA. Estimates in table A4 are very similar to those in table 9. Generalists and all-rounders obtain a better final GPA on average than specialists.
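For readers who want to see the mechanics, here is a minimal two-step sketch of a Heckman-type correction in the spirit of table A4: a probit for not dropping out by the end of the first year, followed by the final-GPA equation augmented with the inverse Mills ratio. It reuses the hypothetical sample data frame and variable names from the earlier sketch and is not the exact specification estimated in the paper; a full implementation would also want a variable that shifts selection but not the outcome.

```python
import statsmodels.formula.api as smf
from scipy.stats import norm

groups = "generalist + allrounder_pt + allrounder_no_pt"
controls = "female + hs_gpa + nonlocal + C(year) + C(preference)"

# Step 1: selection equation -- probability of not dropping out by the end
# of the first academic year (observed for all students in the sample).
sel = smf.probit(f"stayed ~ {groups} + {controls}", data=sample).fit(disp=False)
xb = sel.fittedvalues                      # linear index from the probit
sample["inv_mills"] = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome equation -- final GPA, observed only for students who
# stayed, augmented with the inverse Mills ratio from step 1.
stayers = sample[sample["stayed"] == 1]
out = smf.ols(f"final_gpa_std ~ {groups} + {controls} + inv_mills",
              data=stayers).fit()
print(out.params[["generalist", "allrounder_pt", "allrounder_no_pt", "inv_mills"]])
```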
Nevertheless, other selection issues may play a role in my results. My empirical analysis considers only programs that already allow for Portuguese as an alternative requirement. A skeptical reader might argue that such a policy has different effects according to the goal and type of institution. For instance, in less well-regarded programs, the goal is often to admit as many students as possible, so they have an incentive to allow for as many exam combinations as possible in order to fill their vacancies. Opening the application process to all types of students might be perceived in different ways: allowing for several alternative entry requirements can be read as a sign of a less well-regarded program, and students who perceive themselves as high ability might not apply to those programs.
I acknowledge that the type of institution can potentially drive the results.41 My results might not hold for elite institutions. Nevertheless, the reader should keep in mind that those institutions are able to select from the top of the student distribution even with a single admission criterion; regardless of the nature of the admission exams, there is always high demand for programs in elite institutions. In any case, my results are relevant for the average institution, which represents the majority of the HE system.
Additionally, the reader might ask whether my results depend on the field of study. Even controlling for differences across fields, my main results remain the same. Access to some fields might be more restricted than to others, which may affect students’ choices (see Hastings, Neilson, and Zimmerman 2013; Kirkeboen, Leuven, and Mogstad 2016), and my results might be driven by the type of field that more often allows for Portuguese as an alternative admission requirement. Therefore, the distribution of students across fields of study is relevant for understanding my results.42 In the subpopulation of analysis, I observe very few natural sciences programs, so these results might not be representative of that field of study.43
Finally, in this work, I focus only on the public HE system, for which I observe the entrance and admission exam grades. The private sector enrolls a small share of students, approximately 20% of the system (Biscaia, Sá, and Teixeira 2021), and over the past decades private institutions have faced declining student demand (Teixeira and Amaral 2007). Given that admissions to the private system are not centralized, the entrance and admission exam grades for the private sector are not observed in my datasets. The major difference between the private and public sectors relates to the fields of study they offer: the public sector offers all fields, while the private sector offers mainly courses in social sciences, business and law, and health and welfare (Teixeira et al. 2013). My analysis of the field of study for the public sector (table A5) shows that the performance of the different groups of students (generalists, specialists, and all-rounders) does not vary significantly across fields. Therefore, I believe the bias from not considering the private sector is most likely small.
XI. Conclusion
Intended and unintended consequences of university admission practices have been a focus of the academic, political, and judicial agendas.44 I contribute to that debate by analyzing the nature of the admission requirements. I test whether the inclusion of a general admission exam generates a better pool of admitted students.
In this paper, my first contribution is theoretical. I set up a simple model in which a university faces a particular trade-off when deciding to combine a general admission requirement with a field-specific exam. The university gains information about a student’s ability at the cost of reducing the weight allocated to the field-specific requirement. As a result, in equilibrium, the student would decrease the effort level in the productive test (the field-specific exam). Nevertheless, I find that the university benefits from including a general admission test in its requirements.
My second contribution is the provision of an empirical application of the model by using Portugal as a case study. I find evidence that including a general exam as an entry requirement increases the average student performance at university at the margin of admission to an HEI. In particular, I conclude that generalists outperform specialists in terms of final GPA. This is a surprising result that has not been a consistent finding in the literature and has two possible interpretations.
First, universities that allow for general admission requirements should not put too much weight on first-year assessment. Although performance in the first year is often considered a good proxy for identifying the best students, generalists might need time to adapt before obtaining a better performance. This is a common practice in the United Kingdom, for instance, where first-year grades do not count toward the final GPA.
Second, my results show that the field-specific exam may not be well designed in some fields. Students admitted purely on the basis of their field-specific skills perform worse at university, which suggests that the field-specific exam is not accurate at measuring the specialist skills needed to complete the degree.
Additionally, I find that all-rounders perform better at university than specialists. Although this result is expected, it reinforces the idea that having knowledge in multiple subjects can provide all-rounders with a more holistic understanding of the HE system and its interconnections, which can be an advantage in certain academic contexts. The broader range of skills and knowledge across different subjects that all-rounders have might also be beneficial in multidisciplinary university programs or in scenarios where the curriculum covers a wide range of topics. Nevertheless, a student’s performance depends on various factors, including the specific university program, the student’s strengths and interests, and the learning environment, and this should be taken into consideration when interpreting the results.
My findings have substantial implications for the design of university selection policies. Universities should rethink their admission practices, either by designing admission tests that evaluate field-specific skills more accurately or by introducing general admission requirements to distinguish their candidates more effectively. In short, diversity in admission criteria has a positive effect on student performance at university. However, the empirical results of the paper might not apply to all fields of study, particularly engineering and the physical sciences. In my analysis, I considered only HE programs where there was variation in the type of admission exams required by HEIs, so my empirical results do not necessarily extend to all HE programs. Nevertheless, the theoretical results are valid for all fields of study.
In fact, my results offer an optimistic view of the possibility of affecting the pool of admitted students at university. Even though I focus on the Portuguese system, this paper provides generalizable insights about the importance of the choice of admission criteria.
Finally, this paper poses some interesting questions that are worth exploring in future research. I have shown that the general admission exam has an important role to play in university admission practices. However, this paper does not answer the question of what the optimal weight allocated to that exam should be. Additionally, there might be unintended consequences of increasing the weight associated with the general exam: for instance, by introducing a general admission exam, a university may change the gender composition of the admitted pool of students and/or the way students rank their preferences when applying to HEIs. These possibilities open the scope for future research.
Appendix. Additional Tables
| | Total Number |
|---|---|
Institutions | 170 |
At university | 75 |
At polytechnic | 95 |
Degrees | 529 |
Bachelor’s | 485 |
Integrated master’s | 38 |
Prep bachelor’s/master’s | 6 |
Institutions × degrees | 1,180 |
Bachelor’s | 1,066 |
Integrated master’s | 112 |
Prep bachelor’s/master’s | 2 |
Vacancies | 50,852 |
Table A2. Programs that do not consider vs. programs that consider the Portuguese exam as an optional requirement

| | Programs That Do Not Consider Portuguese | | | Programs That Consider Portuguese | | |
|---|---|---|---|---|---|---|
| | Mean | SD | | Mean | SD | |
| | (1) | (2) | (3) | (4) | (5) | (6) |
No. of individuals | 71,647 | 78,233 | ||||
High school GPA | 142.42 | 17.50 | 150.96 | 18.62 | ||
Nonlocal student (share) | .28 | .30 | ||||
Applied for a maintenance grant (share) | .31 | .36 | ||||
Received a maintenance grant (share) | .22 | .31 | ||||
Female (share) | .50 | .65 | ||||
Mother has HE (share) | .35 | .23 | ||||
Father has HE (share) | .27 | .16 |
Table A4. Heckman selection model for final GPA (0 ≤ τ − τ* ≤ 10)

| | (1) | (2) | (3) |
|---|---|---|---|
Outcome equation (final GPA): | |||
Generalists | .170*** | .127*** | .121*** |
[.032] | [.036] | [.036] | |
All-rounders (with Portuguese) | .155*** | .094*** | .047 |
[.027] | [.031] | [.036] | |
All-rounders (without Portuguese) | .191*** | .163*** | .124*** |
[.037] | [.043] | [.044] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
Selection equation (probability of students not dropping out by the end of the first academic year): | |||
Generalists | .036*** | .009 | .003 |
[.010] | [.009] | [.009] | |
All-rounders (with Portuguese) | −.017** | −.018** | −.004 |
[.008] | [.008] | [.008] | |
All-rounders (without Portuguese) | .027** | .008 | .026** |
[.012] | [.011] | [.011] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
Observations | 12,707 | 12,707 | 12,707 |
Uncensored observations | 6,837 | 6,837 | 6,837 |
ρ | −.024 | −.030 | .834 |
σ | .915 | .908 | 1.022 |
λ | −.022 | −.027 | .853 |
Wald test for ρ = 0 (Prob. > χ2) | .536 | .315 | .000 |
Table A5. Analysis by field of study (0 ≤ τ − τ* ≤ 10)

| | No. of ECTS Credits, First Year | | | Final GPA | | |
|---|---|---|---|---|---|---|
| | (1) | (2) | (3) | (4) | (5) | (6) |
Generalists | −.021 | −.028 | .121*** | .079** | ||
[.018] | [.021] | [.032] | [.039] | |||
All-rounders (with Portuguese) | .059*** | .049*** | .147*** | .132*** | ||
[.015] | [.018] | [.027] | [.034] | |||
All-rounders (without Portuguese) | .097*** | .117*** | .151*** | .094** | ||
[.021] | [.026] | [.037] | [.048] | |||
Medical sciences | .014 | −.025 | −.011 | −.004 | −.008 | −.073 |
[.022] | [.022] | [.044] | [.040] | [.040] | [.074] | |
Physical sciences | −.088* | −.090** | −.008 | −.115 | −.127 | −.279** |
[.045] | [.046] | [.059] | [.138] | [.138] | [.142] | |
Arts and humanities | −.045*** | −.049*** | −.072*** | −.070*** | −.067*** | −.116*** |
[.015] | [.015] | [.025] | [.025] | [.025] | [.041] | |
Generalists × medical sciences | .018 | .175 | ||||
[.063] | [.119] | |||||
Generalists × physical sciences | −.230 | −.710*** | ||||
[.154] | [.147] | |||||
Generalists × arts and humanities | .035 | .092 | ||||
[.045] | [.070] | |||||
All-rounders (with Portuguese) × medical sciences | −.008 | .031 | ||||
[.057] | [.096] | |||||
All-rounders (with Portuguese) × physical sciences | −.204 | .171 | ||||
[.126] | [.389] | |||||
All-rounders (with Portuguese) × arts and humanities | .055 | .043 | ||||
[.035] | [.061] | |||||
All-rounders (without Portuguese) × medical sciences | −.094 | .131 | ||||
[.070] | [.122] | |||||
All-rounders (without Portuguese) × physical sciences | −.123 | .681* | ||||
[.123] | [.379] | |||||
All-rounders (without Portuguese) × arts and humanities | −.024 | .147* | ||||
[.051] | [.085] | |||||
Controls | ✓ | ✓ | ✓ | ✓ | ||
Year and preference FE | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
R2 | .008 | .035 | .036 | .021 | .044 | .045 |
Observations | 26,524 | 26,524 | 26,524 | 6,829 | 6,829 | 6,829 |
Table A6. Estimates on the constant sample of students with a final GPA (0 ≤ τ − τ* ≤ 10)

| | (1) | (2) | (3) |
|---|---|---|---|
A. No. of ECTS Credits by the End of the First Year (Cohorts 2013–14 to 2016–17) | |||
Generalists | .166*** | .164*** | .106*** |
[.030] | [.030] | [.032] | |
All-rounders (with Portuguese) | .076*** | .078*** | .021 |
[.022] | [.022] | [.025] | |
All-rounders (without Portuguese) | .071*** | .069*** | .043* |
[.021] | [.021] | [.022] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
R2 | .020 | .020 | .146 |
Observations | 6,837 | 6,837 | 6,837 |
C. Final GPA (Cohorts 2013–14 to 2014–15) | |||
Generalists | .171*** | .127*** | .126*** |
[.032] | [.032] | [.034] | |
All-rounders (with Portuguese) | .154*** | .152*** | .063* |
[.027] | [.027] | [.034] | |
All-rounders (without Portuguese) | .192*** | .157*** | .089** |
[.037] | [.037] | [.041] | |
Year and preference FE | ✓ | ✓ | ✓ |
Controls | ✓ | ✓ | |
Program FE | ✓ | ||
R2 | .027 | .043 | .093 |
Observations | 6,837 | 6,837 | 6,837 |
Notes
This work was funded by the National Portuguese Funds and by the European Social Fund Plus, Programa Operacional do Capital Humano Portugal 2020, through the Foundation for Science and Technology (Portugal) under project ref. SFRH/BD/120793/2016, http://doi.org/10.54499/PTDC/CED-EDG/5530/2020, and http://doi.org/10.54499/UIDP/00757/2020. This work was done during my PhD studies at the University of Nottingham. I declare that I have no relevant or material financial interests that relate to the research described in this paper. I thank the Direção-Geral do Ensino Superior (DGES) and the Direção-Geral de Estatísticas da Educação e Ciência (DGEEC) for the access to the two datasets used in this paper. The datasets were merged at the premises of DGEEC and cannot be disclosed to anyone who has not signed a confidentiality agreement with DGEEC. I am deeply indebted to Gianni De Fraja, Alex Possajennikov, and Ana Rute Cardoso for their guidance and valuable discussions during this study. I am also very grateful for all the help provided by Joana Duarte and Catarina Affaflo from Divisão de Estudos e Gestão do Acesso a Dados para Investigação (DEGADI), a department of DGEEC. I also would like to thank Ada Ferrer-i-Carbonell, Albert Marcet, Antonio Cabrales, Daniel Siedmann, Robert Gary-Bobo, Maia Güel, Marta Lopes, Miguel Portela, Miguel Urquiola, Paola Bordon, Pedro N. Teixeira, Pilar Beneito, Rakesh Vohra, Sarah Bowen, Silvia Sonderegger, and Steve DesJardins for their comments on earlier drafts of this paper. I also acknowledge the contribution of participants in the following events: Lisbon Economics and Statistics of Education conference 2019 (Lisbon), Leuven Economics of Education Research conference 2019 (Leuven), Economics Brown Bag Seminar (Nottingham), PhD Conference 2019 (Nottingham), 9th CESifo Labor Workshop (Dresden), Second Catalan Economic Society conference (Catalonia), European Society of Population Economics conference 2019 (Bath), International Workshop of Applied Economics of Education conference 2019 (Catanzaro), Spanish Meeting of Economics of Education conference 2019 (Canarias), Portuguese Economic Journal conference 2019 (Evora), Strengthen HE through Innovative Financial Tools 2019 (Nottingham), PhD Economics Virtual Seminar 2020 (virtual), Association of Southern European Economic Theorists conference 2020 (virtual), Royal Economic Society conference 2021 (virtual), Spring Meeting of Young Economists 2021 (virtual), European Economic Association conference 2021 (virtual), European Association of Labour Economics conference 2021 (virtual), and Fórum Estatístico (DGEEC). All remaining errors are my own.
1 For instance, in Portugal, students can be admitted to the artificial intelligence and data science course on the basis of the mathematics exam or the combination of mathematics and Portuguese exams.
2 Note that although the definition of a general skills test may be context dependent, I exclude from my analysis all programs for which the Portuguese exam is a mandatory admission criterion (such as linguistic degrees). Later in this paper, in sec. X, I discuss differences across fields of study.
3 For a review of the student portfolio problem see, e.g., Araujo, Gottlieb, and Moreira (2007), Chade, Lewis, and Smith (2014), and Che and Koh (2016).
4 To simplify the analysis, I ignore the student choice of subjects as well as the existence of competition effects between universities. For a survey of the literature on returns to curriculum and college-major choices, see, e.g., Bound and Turner (2011), Altonji, Blom, and Meghir (2012), and Patnaik, Wiswall, and Zafar (2020).
5 Although her expectation might not be correct, I assume for simplicity that it is constant. The assumption that students do not know their ability will allow me to present a “representative agent” setting from the student’s perspective. Nevertheless, students who perceive themselves to be specialists, for instance, might decide to invest solely in T1, while others might take the opposite approach. Such a specialization strategy might play a role, and it is studied in the countersignaling literature (e.g., Feltovich, Harbaugh, and To 2002). Alternatively, I could have assumed that different types of students have different expectations; that would require using the Bayesian updating rule to determine the expected student ability. To keep the framework as simple as possible, I consider that all students have the same expectation of their own ability.
6 Alternatively, more elements, such as high school GPA, CVs, and recommendation letters, could have been considered as screening devices. These could be seen as different measures of students’ abilities and skills. For simplification purposes, I consider only two types of admission tests in the model.
7 An alternative way of dichotomizing admission tests is to think about a student’s talent as multidimensional. The student has a portfolio of skills, cognitive and noncognitive. In that setting, we can consider T1 as a test that measures cognitive skills, and the student can prepare for it, and T2 as a noncognitive test, for instance, an IQ test. Moreover, in my model, I have only one type of innate ability α. I could have explicitly expressed ability as being multidimensional, i.e., one ability level that would measure cognitive skills and another that would measure noncognitive skills, so that the solution of the model would remain the same. As I will show in the equilibrium section (sec. IV), there is always a solution (under certain conditions) for which the second test becomes informative. Additionally, Che and Koh (2016) showed that in a decentralized system, when “student’s attributes are multidimensional, colleges avoid head-on competition by placing excessive weight on school-specific attributes such as essays,” but they still consider them as admission requirements.
8 Following the neoclassical framework proposed by Becker (2009) and Mincer (1974).
9 I consider the extreme situation where T2 is not directly informative of labor market performance. Nevertheless, although e2 is not considered in the equation, a high score on T2 may be indicative of ability α, which is reflected in the wage equation.
10 For a review regarding the effect of test scores on labor market outcomes and national income, see Chetty, Friedman, and Rockoff (2014) and Hanushek and Woessmann (2010).
11 Universities may care about outcomes other than academic performance in college or wages. That would be directly related to the stated mission of each university, and use of a single proxy to measure college quality may induce measurement error (see Black and Smith 2006). However, for simplification purposes, I will assume that universities care about academic performance and wages.
12 The parameter δ measures the interaction between the two efforts. If I assume that they are complements, δ would be negative. However, that would drive me to the solution where the student should exert effort toward only one test. A more interesting case is when δ is positive, which means that the marginal cost of one type of effort is not independent of the other type.
13 When determining the student expected wage, I relied on the fact that α and Π are independent, given the assumption of everyone having the same expectation for α.
14 In a Nash equilibrium, every student tries to manipulate her scores, and therefore all students exert the same level of effort, e*, and no student is able to manipulate her test scores (see De Fraja and Landeras 2006).
15 Similarly, for sufficiently high λ, the converse situation occurs: the student exerts effort only on the second test. For simplicity, I neglect that case. I am interested only in showing that λ = 0 is not necessarily optimal for the university.
16 From proposition 1, I infer that e*(λ) is a continuous function on its domain, . Additionally, it is differentiable on its domain except at . Also, e*(λ) is a smooth function on its domain except at .
17 According to Hoxby (2009) and MacLeod et al. (2017), for instance, universities care about reputation issues, which might translate into a student’s lifetime wage. However, for simplification purposes, I focus on the wage immediately after the job market. For research on labor market returns to school identity, see, e.g., Dale and Krueger (2002) on college selectivity, Autor (2014) on skills formation, and MacLeod and Urquiola (2019) on school choice.
18 The increase in the number of high-ability students admitted to university when λ changes is equal to the change in the number of low-ability students (in absolute value).
19 According to table O1, the limit of the number of admitted students when is equal to the limit when .
20 According to fig. 2, if the test noise is sufficiently small, the two test score distributions will not overlap; in that case, the university would use a very precise test and, with it, could admit only high-ability students. I preclude that case by imposing sufficiently large noise. The case of low noise is not very interesting because one test is enough to completely separate students by ability.
21 Notice that when λ increases to values higher than , the two rectangles move in the northwest direction, but the two areas do not change.
22 According to my model, it is also true that two field-specific exams provide a more efficient solution than only one. That is because, in the model, I assume that the errors are uncorrelated. If one assumes that the errors of field-specific exams are correlated, then it is no longer true that two field-specific exams are better than one. Take the extreme case where ε is the same in all the field-specific tests; then, having a second field-specific test is not necessarily optimal.
23 That occurs when . I believe that small values of δ are more realistic than high values.
24 This idea traces back to the debate in Italy about the survival of the liceo classico and the role of skills. It has been suggested that learning classical languages (e.g., Latin and Ancient Greek) is useful because those who studied at a liceo classico did better in life (e.g., obtain higher earnings on average).
25 For more details on the distinction between choosing a combination of a college and major instead of only a college, see Bordon and Fu (2015). The authors develop a sorting equilibrium model where they exploit variation in the college-major-specific admissions regime.
26 For a review, see, e.g., Roth and Sotomayor (1992) and Kara and Sönmez (1997).
Note that a DA mechanism is an algorithm that finds a stable matching between agents on both sides of the problem, taking into account their preferences. In this case, the government considers the capacity constraint for each program and matches the students with explicitly stated preferences to the institutions; the institutions, in turn, have stated their preferences through the choice of admission criteria. This is an efficient and fair mechanism, which ensures that students with the same preferences and admission scores have the same opportunity of being admitted to a program (equal treatment of equals property). When vacancies are binding, the assignment procedure described above generates quasi-experimental variation in institution assignment. For a review, see Abdulkadiroğlu et al. (2017).
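To illustrate the mechanism described in this note, the following toy sketch implements a student-proposing deferred acceptance algorithm with program capacities, where priorities within a program follow application scores. It is a stylized illustration with made-up names and scores, not the official assignment procedure.

```python
def deferred_acceptance(prefs, scores, capacity):
    """Student-proposing deferred acceptance with program capacities.

    prefs[s]     : ordered list of programs student s applies to
    scores[s][p] : application score of student s at program p
    capacity[p]  : number of vacancies at program p
    Returns a dict mapping each program to its admitted students.
    """
    next_choice = {s: 0 for s in prefs}        # next program each student proposes to
    held = {p: [] for p in capacity}           # tentatively admitted students
    proposing = [s for s in prefs if prefs[s]]

    while proposing:
        s = proposing.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                           # preference list exhausted: unmatched
        p = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[p].append(s)
        held[p].sort(key=lambda x: scores[x][p], reverse=True)
        for rejected in held[p][capacity[p]:]: # reject applicants beyond capacity
            proposing.append(rejected)
        held[p] = held[p][:capacity[p]]
    return held

# Toy example (hypothetical students, programs, and 0-200 scores):
prefs = {"ana": ["econ", "law"], "rui": ["econ"], "ines": ["econ", "law"]}
scores = {"ana": {"econ": 152, "law": 149},
          "rui": {"econ": 140},
          "ines": {"econ": 161, "law": 150}}
capacity = {"econ": 1, "law": 1}
print(deferred_acceptance(prefs, scores, capacity))   # {'econ': ['ines'], 'law': ['ana']}
```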
27 Since 2011, they can also swap one optional field-specific exam for a general exam in philosophy at the end of grade 11.
28 Except for the medicine degree, which requires three exams (the mathematics exam, the biology and geology exam, and the physics and chemistry exam). Nevertheless, this is excluded from my analysis.
29 The performance and graduation datasets are supplied already linked by DEGADI (Divisão de Estudos e Gestão do Acesso a Dados para Investigação) of DGEEC, and the link between the application and performance datasets was made by the author at the premises of the Ministry of Science, Technology and Higher Education in Portugal.
30 In this paper, I define an individual as a student enrolled in a specific program. If a student is enrolled in two distinct programs at the same time, according to this definition that will count as two individuals. The reader should bear in mind that a student can be admitted to only one program in each contest.
31 These students may have obtained more than one diploma over this period, for instance, a bachelor’s degree and a master’s degree.
32 In my analysis, I consider only individuals who were admitted, for the first time, into a bachelor’s or integrated master’s program in a public university through the GAR (in order to match those to the application dataset).
33 This variable is self-reported. However, I also have information on the county of residence for each individual.
34 In order to finish the degree, each student needs to complete a minimum of 180 ECTS credits.
Note that I am using final GPA as a proxy for income. Although on-the-job human capital accumulation, for instance, might play a role in wage determination (Stinebrickner, Stinebrickner, and Sullivan 2019), students with better academic achievement are expected to obtain better earnings (see, e.g., Jones and Jackson 1990).
35 For a review on the determinants of final GPA, see, e.g., Betts and Morell (1999).
36 Moreover, there are 136 program-years that select students on the basis of their best result among five different exam combinations. Among those 136, only two programs did not allow for the Portuguese exam, while 117 programs allowed for it in one out of the five exam combinations. Additionally, 15 programs allowed for five different exam combinations where all of them included Portuguese.
37 The computation is conditional on the result of the national exams the student takes in high school. For instance, imagine that Pedro took only the mathematics and the Portuguese exams, and he is applying to digital marketing at a particular university as one of his stated preferences. The digital marketing program specifies that students can apply with the mathematics exam or the combination of the IT and the Portuguese exams. As Pedro did not take the IT exam, his application score is determined by the mathematics exam only.
38 Remember that for each program and each student, I observe the actual application score and the actual placement threshold (τ*).
39 In table A2, I compare both types of programs for fields that consider the Portuguese exam as an optional requirement.
40 This is not necessarily true because a change in admission criteria rules would change the pool of applicants, their preferences, and also their performance in the high school exams, for instance.
41 An alternative approach would have been to perform a general equilibrium analysis where I contemplate the scenario of all institutions allowing for the Portuguese exam as an alternative entry requirement. However, in that scenario, students could have performed differently in the Portuguese exam. Students might have allocated more effort to the Portuguese exam if they knew that such an exam would have been a valid admission criterion for all programs.
42 Figure O8 shows how students are distributed across fields.
43 Nevertheless, my model is applicable to all fields, and the optimal choice of λ is not zero. How far λ is from zero depends on other factors, such as the precision of the field-specific exam. Thus, if one believes that for natural sciences, the field-specific test is relatively more precise than in fields such as sociology or economics for instance, then my model predicts that for some fields, λ is closer to zero in comparison with other fields.
44 For instance, in the United States there is an ongoing discussion centered on the affirmative action policies of some Ivy League universities (e.g., Card and Krueger 2005; Arcidiacono et al. 2011; Calsamiglia, Franke, and Rey-Biel 2013; Arcidiacono and Lovenheim 2016).
References
Abdulkadiroğlu, A., J. D. Angrist, Y. Narita, and P. A. Pathak. 2017. “Research Design Meets Market Design: Using Centralized Assignment for Impact Evaluation.” Econometrica 85 (5): 1373–432. Altonji, J. G., E. Blom, and C. Meghir. 2012. “Heterogeneity in Human Capital Investments: High School Curriculum, College Major, and Careers.” Ann. Rev. Econ. 4:185–223. Araujo, A., D. Gottlieb, and H. Moreira. 2007. “A Model of Mixed Signals with Applications to Countersignalling.” RAND J. Econ. 38 (4): 1020–43. Arcidiacono, P., E. M. Aucejo, H. Fang, and K. I. Spenner. 2011. “Does Affirmative Action Lead to Mismatch? A New Test and Evidence.” Quantitative Econ. 2 (3): 303–33. Arcidiacono, P., and M. Lovenheim. 2016. “Affirmative Action and the Quality-Fit Trade-Off.” J. Econ. Literature 54 (1): 3–51. Arrow, K. J. 1971. “A Utilitarian Approach to the Concept of Equality in Public Expenditures.” Q.J.E. 85 (3): 409–15. Autor, D. H. 2014. “Skills, Education, and the Rise of Earnings Inequality among the ‘Other 99 Percent.’” Science 344 (6186): 843–51. Avery, C. N., M. E. Glickman, C. M. Hoxby, and A. Metrick. 2012. “A Revealed Preference Ranking of US Colleges and Universities.” Q.J.E. 128 (1): 425–67. Avery, C., and J. Levin. 2010. “Early Admissions at Selective Colleges.” A.E.R. 100 (5): 2125–56. Becker, G. S. 2009. Human Capital: A Theoretical and Empirical Analysis, with Special Reference to Education. Chicago: Univ. Chicago Press. Belfield, C. R., and P. M. Crosta. 2012. “Predicting Success in College: The Importance of Placement Tests and High School Transcripts.” CCRC Working Paper no. 42, Community College Res. Center, Columbia Univ. Bettinger, E. P., B. J. Evans, and D. G. Pope. 2013. “Improving College Performance and Retention the Easy Way: Unpacking the Act Exam.” American Econ. J. Econ. Policy 5 (2): 26–52. Betts, J. R., and D. Morell. 1999. “The Determinants of Undergraduate Grade Point Average: The Relative Importance of Family Background, High School Resources, and Peer Group Effects.” J. Human Resources 34 (2): 268–93. Biscaia, R., C. Sá, and P. N. Teixeira. 2021. “The (In)effectiveness of Regulatory Policies in Higher Education: The Case of Access Policy in Portugal.” Econ. Analysis and Policy 72:176–85. Black, D. A., and J. A. Smith. 2006. “Estimating the Returns to College Quality with Multiple Proxies for Quality.” J. Labor Econ. 24 (3): 701–28. Bordon, P., and C. Fu. 2015. “College-Major Choice to College-Then-Major Choice.” Rev. Econ. Studies 82 (4): 1247–88. Bound, J., and S. Turner. 2011. “Dropouts and Diplomas: The Divergence in Collegiate Outcomes.” In Handbook of the Economics of Education, vol. 4, edited by Eric A. Hanushek, Stephen J. Machin, and Ludger Woessmann, 573–613. Amsterdam: North-Holland. Bridgeman, B., L. McCamley-Jenkins, and N. Ervin. 2000. “Predictions of Freshman Grade-Point Average from the Revised and Recentered SAT I: Reasoning Test.” College Board Research Report no. 2000-1 (ETS Research Report no. 00-1), College Entrance Examination Board, New York. Burton, N. W., and L. Ramist. 2001. “Predicting Success in College: SAT Studies of Classes Graduating since 1980.” Research Report no. 2001-2, College Entrance Examination Board, New York. Calsamiglia, C., J. Franke, and P. Rey-Biel. 2013. “The Incentive Effects of Affirmative Action in a Real-Effort Tournament.” J. Public Econ. 98:15–31. Campbell, S., L. Macmillan, R. Murphy, and G. Wyness. 2022. “Matching in the Dark? Inequalities in Student to Degree Match.” J. Labor Econ. 40 (4): 807–50. Card, D., A. R. 
Cardoso, J. Heining, and P. Kline. 2018. “Firms and Labor Market Inequality: Evidence and Some Theory.” J. Labor Econ. 36 (S1): S13–S70. Card, D., and A. B. Krueger. 2005. “Would the Elimination of Affirmative Action Affect Highly Qualified Minority Applicants? Evidence from California and Texas.” ILR Rev. 58 (3): 416–34. Cerdeira, J. M., L. C. Nunes, A. B. Reis, and C. Seabra. 2018. “Predictors of Student Success in Higher Education: Secondary School Internal Scores versus National Exams.” Higher Educ. Q. 72 (4): 304–13. Chade, H., G. Lewis, and L. Smith. 2014. “Student Portfolios and the College Admissions Problem.” Rev. Econ. Studies 81 (3): 971–1002. Che, Y.-K., and Y. Koh. 2016. “Decentralized College Admissions.” J.P.E. 124 (5): 1295–338. Chetty, R., J. N. Friedman, and J. E. Rockoff. 2014. “Measuring the Impacts of Teachers II: Teacher Value-Added and Student Outcomes in Adulthood.” A.E.R. 104 (9): 2633–79. Cyrenne, P., and A. Chan. 2012. “High School Grades and University Performance: A Case Study.” Econ. Educ. Rev. 31 (5): 524–42. Dale, S. B., and A. B. Krueger. 2002. “Estimating the Payoff to Attending a More Selective College: An Application of Selection on Observables and Unobservables.” Q.J.E. 117 (4): 1491–527. De Fraja, G., and P. Landeras. 2006. “Could Do Better: The Effectiveness of Incentives and Competition in Schools.” J. Public Econ. 90 (1–2): 189–213. Deming, D. J. 2017. “The Growing Importance of Social Skills in the Labor Market.” Q.J.E. 132 (4): 1593–640. DGE (Direção-Geral da Educação). 2019. “Cursos cientifico humanisticos (oferta formativa) [Humanistic scientific courses (training offer)].” http://www.dge.mec.pt/cursos-cientifico-humanisticos .DGEEC (Direção-Geral de Estatísticas da Educação e Ciência). 2019. “Registro de alunos inscritos e diplomados no ensino superior (RAIDES)” [Registry of student enrollment and graduation from higher education]. Lisbon: Ministry of Science, Technology and Higher Education, and Ministry of Education. DGES (Direção-Geral do Ensino Superior). 2019a. “Assistente de escolha do curso no ensino superior” [Course choice assistant in higher education]. http://www.dges.gov.pt/guias/assisthlp.asp .———. 2019b. “Concurso nacional de acesso” [National competition to access higher education]. Lisbon: Ministry of Science, Technology and Higher Education. Dooley, M. D., A. A. Payne, and A. L. Robb. 2012. “Persistence and Academic Success in University.” Canadian Public Policy 38 (3): 315–39. Edwards, D., H. Coates, and T. Friedman. 2012. “A Survey of International Practice in University Admissions Testing.” Higher Educ. Management and Policy 24 (1): 1–18. Epple, D., R. Romano, and H. Sieg. 2006. “Admission, Tuition, and Financial Aid Policies in the Market for Higher Education.” Econometrica 74 (4): 885–928. Feltovich, N., R. Harbaugh, and T. To. 2002. “Too Cool for School? Signalling and Countersignalling.” RAND J. Econ. 33 (4): 630–49. Freedle, R. 2003. “Correcting the SAT’s Ethnic and Social-Class Bias: A Method for Reestimating SAT Scores.” Harvard Educ. Rev. 73 (1): 1–43. Gale, D., and L. S. Shapley. 1962. “College Admissions and the Stability of Marriage.” American Math. Monthly 69 (1): 9–15. Gary-Bobo, R. J., and A. Trannoy. 2008. “Efficient Tuition Fees and Examinations.” J. European Econ. Assoc. 6 (6): 1211–43. Hanushek, E. A., and L. Woessmann. 2010. “Education and Economic Growth.” In Economics of Education, edited by Dominic J. Brewer and Patrick J. McEwan, 60–67. Amsterdam: Elsevier. Hastings, J. S., C. A. Neilson, and S. D. 
Zimmerman. 2013. “Are Some Degrees Worth More than Others? Evidence from College Admission Cutoffs in Chile.” Working Paper no. 19241 (July), NBER, Cambridge, MA. Holmström, B. 1979. “Moral Hazard and Observability.” Bell J. Econ. 10 (1): 74–91. ———. 1999. “Managerial Incentive Problems: A Dynamic Perspective.” Rev. Econ. Studies 66 (1): 169–82. Hoxby, C. M. 2009. “The Changing Selectivity of American Colleges.” J. Econ. Perspectives 23 (4): 95–118. Hoxby, C., and S. Turner. 2013. “Expanding College Opportunities for High-Achieving, Low Income Students.” SIEPR Discussion Paper no. 12-014, Stanford Inst. Econ. Policy Res., Stanford, CA. Jones, E. B., and J. D. Jackson. 1990. “College Grades and Labor Market Rewards.” J. Human Resources 25 (2): 253–66. Kara, T., and T. Sönmez. 1997. “Implementation of College Admission Rules.” Econ. Theory 9 (2): 197–218. Kautz, T., J. J. Heckman, R. Diris, B. ter Weel, and L. Borghans. 2014. “Fostering and Measuring Skills: Improving Cognitive and Non-cognitive Skills to Promote Lifetime Success.” Working Paper no. 20749 (December), NBER, Cambridge, MA. Kirkeboen, L. J., E. Leuven, and M. Mogstad. 2016. “Field of Study, Earnings, and Self-Selection.” Q.J.E. 131 (3): 1057–111. Kuncel, N. R., and S. A. Hezlett. 2007. “Standardized Tests Predict Graduate Students’ Success.” Science 315 (5815): 1080–81. Leonard, D. K., and J. Jiang. 1999. “Gender Bias and the College Predictions of the SATs: A Cry of Despair.” Res. Higher Educ. 40 (4): 375–407. MacLeod, W. B., E. Riehl, J. E. Saavedra, and M. Urquiola. 2017. “The Big Sort: College Reputation and Labor Market Outcomes.” American Econ. J. Appl. Econ. 9 (3): 223–61. MacLeod, W. B., and M. Urquiola. 2015. “Reputation and School Competition.” A.E.R. 105 (11): 3471–88. ———. 2019. “Is Education Consumption or Investment? Implications for School Competition.” Ann. Rev. Econ. 11:563–89. Mincer, J. 1974. Schooling, Experience, and Earnings. Human Behavior & Social Institutions, no. 2. Cambridge, MA: NBER. Niessen, A. S. M., and R. R. Meijer. 2017. “On the Use of Broadened Admission Criteria in Higher Education.” Perspectives Psychological Sci. 12 (3): 436–48. Papay, J. P., R. J. Murnane, and J. B. Willett. 2016. “The Impact of Test Score Labels on Human-Capital Investment Decisions.” J. Human Resources 51 (2): 357–88. Patnaik, A., M. J. Wiswall, and B. Zafar. 2020. “College Majors.” Working Paper no. 27645 (August), NBER, Cambridge, MA. Radunzel, J., and J. Noble. 2012. “Predicting Long-Term College Success through Degree Completion Using ACT Composite Score, ACT Benchmarks, and High School Grade Point Average.” ACT Research Report no. 2012 (5), ACT, Iowa City, IA. Roth, A. E., and M. Sotomayor. 1992. “Two-Sided Matching.” In Handbook of Game Theory with Economic Applications, vol. 1, edited by Robert Aumann and Sergiu Hart, 485–541. Amsterdam: North-Holland. Rothstein, J. M. 2004. “College Performance Predictions and the SAT.” J. Econometrics 121 (1–2): 297–317. Schmitt, N. 2012. “Development of Rationale and Measures of Noncognitive College Student Potential.” Educ. Psychologist 47 (1): 18–29. Silva, P. L., L. C. Nunes, C. Seabra, A. Balcao Reis, and M. Alves. 2020. “Student Selection and Performance in Higher Education: Admission Exams vs. High School Scores.” Educ. Econ. 28 (5): 437–54. Stemler, S. E. 2012. “What Should University Admissions Tests Predict?” Educ. Psychologist 47 (1): 5–17. Sternberg, R. J. 2010. College Admissions for the 21st Century. Cambridge, MA: Harvard Univ. Press. Sternberg, R. J., C. R. 
Bonney, L. Gabora, and M. Merrifield. 2012. “WICS: A Model for College and University Admissions.” Educ. Psychologist 47 (1): 30–41. Stiglitz, J. E. 2000. “The Contributions of the Economics of Information to Twentieth Century Economics.” Q.J.E. 115 (4): 1441–78. Stinebrickner, R., T. Stinebrickner, and P. Sullivan. 2019. “Job Tasks, Time Allocation, and Wages.” J. Labor Econ. 37 (2): 399–433. Teixeira, P. N., and A. Amaral. 2007. “Waiting for the Tide to Change? Strategies for Survival of Portuguese Private HEIs.” Higher Educ. Q. 61 (2): 208–22. Teixeira, P., B. Jongbloed, D. Dill, and A. Amaral, eds. 2006. Markets in Higher Education: Rhetoric or Reality? Higher Education Dynamics, vol. 6. Dordrecht: Springer. Teixeira, P., V. Rocha, R. Biscaia, and M. F. Cardoso. 2013. “Competition and Diversification in Public and Private Higher Education.” Appl. Econ. 45 (35): 4949–58. Westrick, P. A., H. Le, S. B. Robbins, J. M. Radunzel, and F. L. Schmidt. 2015. “College Performance and Retention: A Meta-analysis of the Predictive Validities of ACT Scores, High School Grades, and SES.” Educ. Assessment 20 (1): 23–45. Zwick, R. 2007. “College Admission Testing.” Arlington, VA: Nat. Assoc. College Admission Counseling. ———. 2017. Who Gets In? Cambridge, MA: Harvard Univ. Press. ———. 2019. “Assessment in American Higher Education: The Role of Admissions Tests.” Ann. American Acad. Polit. and Soc. Sci. 683 (1): 130–48. Zwick, R., and J. G. Green. 2007. “New Perspectives on the Correlation of SAT Scores, High School Grades, and Socioeconomic Factors.” J. Educ. Measurement 44 (1): 23–45.