## Hit Batsman Roundup, 2010 (December 26, 2010)

Posted by tomflesher in Baseball.

There’s very little more subtle and involved than the quiet elegance of a batter getting beaned. In fact, that particular strategy was invoked 1549 times in 2010, with 419 batters getting plunked at least once.

The absolute leader this season was not Kevin Youkilis or Brett Carroll but Rickie Weeks, who led with 25 HBP in 754 plate appearances. Put another way, Weeks got hit in 3.32% of his plate appearances. That’s almost once every 30 plate appearances, or nearly four times the MLB-wide rate of 0.83%. (Incidentally, that 0.83% is total HBP divided by total plate appearances; the unweighted mean of individual players’ rates, which is skewed by part-timers, is 0.58%.) What leads to such a high number of plunkings?
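Those rates check out in a couple of lines of R (the league-wide figure is the 0.83% quoted above):

```r
weeks_hbp  <- 25
weeks_pa   <- 754
weeks_rate <- weeks_hbp / weeks_pa    # ~0.0332, i.e. 3.32% of plate appearances
pa_per_hbp <- 1 / weeks_rate          # ~30 plate appearances per plunking
league_rate <- 0.0083                 # MLB-wide HBP per PA in 2010
weeks_rate / league_rate              # roughly 4 times the league rate
```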

I would assume that a few things would go into the decision to hit a batter intentionally:

• Pitchers are less likely to be hit by other pitchers.
• If a hitter is likely to get on base anyway, he’s more likely to be hit – you don’t lose anything by putting him on base, and you control the damage by limiting him to one base.
• If a batter is likely to hit for extra bases, he’s more likely to be hit.
• If a batter is likely to steal a base, he’s less likely to be hit, but there is an offsetting effect for caught stealing.
• American League batters are more likely to be hit because of the moral hazard effect of pitchers not having to bat.

With that in mind, I set up a regression in R using every player who had at least one plate appearance in 2010. I added binary variables for Pitcher (1 if the player’s primary position is pitcher, 0 otherwise) and Lg (1 if the player played the entire season in the American League, 0 otherwise), then regressed HBP/PA on Pitcher, Lg, BB, HR, OBP, SLG, SB, and CS. The results were somewhat surprising:

Call:
lm(formula = hbppa ~ Pitcher + Lg + BB + HR + OBP + SLG + SB +
CS)

Residuals:
Min         1Q     Median         3Q        Max
-0.0154027 -0.0059081 -0.0018096  0.0001845  0.1397065

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  6.847e-03  9.815e-04   6.975 5.77e-12 ***
Pitcher     -5.399e-03  9.136e-04  -5.909 4.81e-09 ***
Lg          -1.614e-03  7.054e-04  -2.289   0.0223 *
BB          -1.412e-05  3.257e-05  -0.434   0.6647
HR           1.122e-04  7.956e-05   1.411   0.1587
OBP          8.570e-03  3.477e-03   2.465   0.0139 *
SLG         -3.451e-03  2.468e-03  -1.398   0.1624
SB          -6.749e-05  8.693e-05  -0.776   0.4377
CS           1.770e-04  2.646e-04   0.669   0.5036
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.01042 on 935 degrees of freedom
Multiple R-squared: 0.08839,    Adjusted R-squared: 0.08059
F-statistic: 11.33 on 8 and 935 DF,  p-value: 2.07e-15

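For readers who want to replicate the setup, here is a minimal sketch of how a regression like this can be built in R. The player data below is simulated for illustration, since the original 2010 dataset isn’t reproduced in the post:

```r
# Sketch of the regression setup; the data frame and its values are simulated.
set.seed(2010)
n <- 200
players <- data.frame(
  Pitcher = rbinom(n, 1, .35),              # 1 if primary position is pitcher
  Lg      = rbinom(n, 1, .5),               # 1 if the full season was in the AL
  BB  = rpois(n, 30), HR = rpois(n, 8),
  OBP = runif(n, .20, .40), SLG = runif(n, .30, .50),
  SB  = rpois(n, 4),  CS = rpois(n, 2)
)
# Simulated HBP rate: pitchers get hit less, plus noise
players$hbppa <- pmax(0, .007 - .005 * players$Pitcher + rnorm(n, sd = .005))
fit <- lm(hbppa ~ Pitcher + Lg + BB + HR + OBP + SLG + SB + CS, data = players)
round(coef(summary(fit)), 4)
```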

That’s right – only Pitcher, Lg, and OBP are significant at the 5% level, with HR and SLG marginally significant (roughly the 80% level). BB, SB, and CS aren’t even close. Why not?

Well, for one, stolen-base and caught-stealing totals are relatively small no matter what, so there probably isn’t enough data there. For another, there probably isn’t as much intent behind hit batsmen as we’d like to pretend.

American Leaguers, meanwhile, are less likely to be hit – the opposite of the moral hazard prediction above. This baffles me a little bit.

Also, keep in mind that this model shouldn’t be expected to, and cannot, explain all or even most of the variation in hit batsmen. The R-squared is about .09, meaning that it explains about 9% of the variation. It ignores probably the most important factor, physics, entirely. (That is, the model doesn’t have any way to account for accidental plunkings.) As a side note, other regressions show there might be an effect for plate appearances, meaning you’re more likely to get hit by chance alone if you take enough pitches.

Finally, there are some guys who manage to do the opposite of Weeks’ feat. Houston outfielder Hunter Pence went 156 games and 658 plate appearances without getting plunked at all. Honorable mentions go to Raul Ibanez, Scott Podsednik, Victor Martinez, and Omar Infante, all of whom went over 500 plate appearances without a beaning. Now THAT’S plate discipline.
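That chance-alone point is easy to quantify. Assuming independence across plate appearances and the league-wide rate quoted earlier, the probability of getting through a Pence-sized season unplunked is tiny:

```r
# Chance of at least one HBP over a full season at the league-wide rate,
# assuming each plate appearance is an independent trial.
league_rate <- 0.0083                        # HBP per plate appearance, MLB 2010
pa <- 658                                    # Pence's 2010 plate appearances
p_at_least_one <- 1 - (1 - league_rate)^pa
p_at_least_one                               # ~0.996
```

So under this back-of-the-envelope model, a plunk-free season of Pence’s length happens only about 0.4% of the time at a league-average rate.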

## Diagnosing the AL (December 22, 2010)

Posted by tomflesher in Baseball, Economics.

In the previous post, I crunched some numbers on a previous forecast I’d made and figured out that it was a pretty crappy forecast. (That’s the fun of forecasting, of course – sometimes you’re right and sometimes you’re wrong.) The funny part of it, though, is that the predicted home runs per game for the American League was so far off – about 2.6 standard errors below the predicted value – that it’s highly unlikely that the regression model I used controls for all relevant variables. That’s not surprising, since it was only a time trend with a dummy variable for the designated hitter.

There are a couple of things to check for immediately. The first is the most common explanation thrown around when home runs drop – steroids. It seems to me that if the drop in home runs were due to better control of performance-enhancing drugs, then it should mostly be home runs that are affected. For example, intentional walks should probably be below expectation, since intentional walks are used to protect against a home run hitter. Unintentional walks should probably be about as expected, since walks are a function of plate discipline and pitcher control, not of strength. On-base percentage should probably drop at a lower magnitude than home runs, since some hits that would have been home runs will stay in the park as singles, doubles, or triples rather than all being fly-outs. There will be a drop but it won’t be as big. Finally, slugging average should drop because a loss in power without a corresponding increase in speed will lower total bases.

I’ll analyze these with pretty new R code behind the cut.

## What Happened to Home Runs This Year? (December 22, 2010)

Posted by tomflesher in Baseball, Economics.

I was talking to Jim, the writer behind Apparently, I’m An Angels Fan, who’s gamely trying to learn baseball because he wants to be just like me. Jim wondered aloud how much the vaunted “Year of the Pitcher” has affected home run production. Sure enough, on checking the AL Batting Encyclopedia at Baseball-Reference.com, production dropped by about .15 home runs per game (from 1.13 to .97). Is that normal statistical variation or does it show that this year was really different?

In two previous posts, I looked at the trend of home runs per game to examine Stuff Keith Hernandez Says and then examined Japanese baseball’s data for evidence of structural break. I used the Batting Encyclopedia to run a time-series regression for a quadratic trend and added a dummy variable for the Designated Hitter. I found that the time trend and DH control account for approximately 56% of the variation in home runs per year, and that the functional form is

$\hat{HR} = .957 - .0188 \times t + .0004 \times t^2 + .0911 \times DH$

with t=1 in 1955, t=2 in 1956, and so on. That means t=56 in 2010. Consequently, we’d expect home run production per game in 2010 in the American League to be approximately

$\hat{HR} = .957 - .0188 \times 56 + .0004 \times 3136 + .0911 \approx 1.25$

That means we expected production to increase this year, and instead it dropped precipitously, for a residual of -.28. The residual standard error on the original regression was .1092 on 106 degrees of freedom, so the critical t-value, using Texas A&M’s table and approximating with 100 df, is 1.984. That means we can be 95% confident that the actual number of home runs should fall within .1092*1.984, or about .217, of the expected value. The lower bound would be about 1.03, meaning we’re still significantly below what we’d expect. In fact, the observed number is about 2.6 standard errors below the expected number. In other words, we’d expect that to happen by chance only about half a percent of the time.
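The arithmetic in that paragraph is easy to replicate in R, with the coefficients taken from the regression quoted earlier (`qt()` gives the critical t-value directly instead of a table lookup):

```r
t     <- 56                                     # 2010, with t = 1 in 1955
pred  <- .957 - .0188*t + .0004*t^2 + .0911     # predicted AL HR/game, DH league
res   <- .97 - pred                             # observed minus predicted, ~ -.28
se    <- .1092                                  # residual standard error, 106 df
bound <- se * qt(.975, df = 106)                # 95% half-width of the interval
c(pred = pred, res = res, lower = pred - bound)
```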

Clearly, something else is in play.

## More on Home Runs Per Game (July 9, 2010)

Posted by tomflesher in Baseball, Economics.

In the previous post, I looked at the trend in home runs per game in the Major Leagues and suggested that the recent deviation from the increasing trend might have been due to the development of strong farm systems like the Tampa Bay Rays’. That means that if the same data analysis process is used on data in an otherwise identical league, we should see similar trends but no dropoff around 1995. As usual, for replication purposes I’m going to use Japan’s Pro Baseball leagues, the Pacific and Central Leagues. They’re ideal because, just like the American Major Leagues, one league uses the designated hitter and one does not. There are some differences – the talent pool is a bit smaller because of the lower population base that the leagues draw from, and there are only 6 teams in each league as opposed to MLB’s 14 and 16.

As a reminder, the MLB regression gave us a regression equation of

$\hat{HR} = .957 - .0188 \times t + .0004 \times t^2 + .0911 \times DH$

where $\hat{HR}$ is the predicted number of home runs per game, t is a time variable starting at t=1 in 1955, and DH is a binary variable that takes value 1 if the league uses the designated hitter in the season in question.

Just examining the data on home runs per game from the Japanese leagues, the trend looks significantly different.  Instead of the rough U-shape that the MLB data showed, the Japanese data looks almost M-shaped with a maximum around 1984. (Why, I’m not sure – I’m not knowledgeable enough about Japanese baseball to know what might have caused that spike.) It reaches a minimum again and then keeps rising.

After running the same regression with t=1 in 1950, I got these results:

        Estimate   Std. Error   t-value   p-value    Signif
B0       0.2462    0.0992        2.481    0.0148     0.9852
t        0.0478    0.0062        7.64     1.63E-11   1
tsq     -0.0006    0.00009      -7.463    3.82E-11   1
DH       0.0052    0.0359        0.144    0.8855     0.1145

This equation shows two things, one that surprises me and one that doesn’t. The unsurprising factor is the switching of signs for the t variables – we expected that based on the shape of the data. The surprising factor is that the designated hitter rule is insignificant. We can only be about 11% sure it’s significant. In addition, this model explains less of the variation than the MLB version – while that explained about 56% of the variation, the Japanese model has an $R^2$ value of .4045, meaning it explains about 40% of the variation in home runs per game.

There’s a slightly interesting pattern to the residual home runs per game ($Residual = HR - \hat{HR}$). Although it isn’t as pronounced, this data also shows a spike – but the spike is at t=55, so instead of showing up in 1995, the Japanese leagues spiked around the early 2000s. Clearly the same effect is not in play, but why might the Japanese leagues see the same effect later than the MLB teams? It can’t be an expansion effect, since the Japanese leagues have stayed constant at 6 teams since their inception.

Incidentally, the Japanese league data is heteroskedastic (Breusch-Pagan test p-value .0796), so it might be better modeled using a generalized least squares formula, but doing so would have skewed the results of the replication.
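The Breusch-Pagan statistic mentioned here (in its Koenker, studentized form, which is also what `lmtest::bptest()` reports by default) can be computed by hand in base R. The series below is simulated for illustration, since the raw Japanese data isn’t reproduced in the post:

```r
# Koenker's studentized Breusch-Pagan test, by hand in base R.
set.seed(7)
t  <- 1:60
hr <- .25 + .05 * t - .0006 * t^2 + rnorm(60, sd = .10 + .002 * t)
fit <- lm(hr ~ t + I(t^2))
u2  <- resid(fit)^2                              # squared residuals
aux <- lm(u2 ~ t + I(t^2))                       # auxiliary regression
lm_stat <- length(u2) * summary(aux)$r.squared   # n * R^2 ~ chi-squared(2)
pval <- pchisq(lm_stat, df = 2, lower.tail = FALSE)
pval                                             # small values indicate heteroskedasticity
```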

In order to show that the parameters really are different, the appropriate test is Chow’s test for structural change. To clean it up, I’m using only the data from 1960 on. (It’s quick and dirty, but it’ll do the job.) Chow’s test takes

$\frac{(S_C -(S_1+S_2))/(k)}{(S_1+S_2)/(N_1+N_2-2k)} \sim\ F_{k,N_1+N_2-2k}$

where $S_C = 6.3666$ is the combined sum of squared residuals, $S_1 = 1.2074$ and $S_2 = 2.2983$ are the individual (i.e. MLB and Japan) sum of squared residuals, $k=4$ is the number of parameters, and $N_1 = 100$ and $N_2 = 100$ are the number of observations in each group.

$\frac{(6.3666 -(1.2074 + 2.2983))/(4)}{(100+100)/(100+100-2\times 4)} \sim\ F_{4,100+100-2 \times 4}$

$\frac{(6.3666 -(3.5057))/(4)}{(200)/(192)} \sim\ F_{4,192}$

$\frac{2.8609/4}{1.0417} \sim\ F_{4,192}$

$\frac{.7152}{1.0417} \sim\ F_{4,192}$

$.6866 \sim\ F_{4,192}$

The critical value for 90% significance at 4 and 192 degrees of freedom would be 1.974 according to Texas A&M’s F calculator. That means we don’t have enough evidence that the parameters are different to treat them differently. This is probably an artifact of the small amount of data we have.
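The Chow statistic is simple enough to wrap in a generic helper; this is a sketch of the formula given above, not the post’s original code, and the example values are illustrative rather than taken from the data:

```r
# Generic Chow-test statistic for structural change between two groups.
# Sc: pooled SSR; S1, S2: per-group SSRs; k: parameters; n1, n2: group sizes.
chow_stat <- function(Sc, S1, S2, k, n1, n2) {
  ((Sc - (S1 + S2)) / k) / ((S1 + S2) / (n1 + n2 - 2 * k))
}
# Illustrative values: F = ((10 - 8)/2) / (8/96) = 12
f <- chow_stat(Sc = 10, S1 = 4, S2 = 4, k = 2, n1 = 50, n2 = 50)
pf(f, df1 = 2, df2 = 96, lower.tail = FALSE)  # p-value against F(k, n1+n2-2k)
```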

## Back when it was hard to hit 55… (July 8, 2010)

Posted by tomflesher in Baseball, Economics.

Last night was one of those classic Keith Hernandez moments where he started talking and then stopped abruptly, which I always like to assume is because the guys in the truck are telling him to shut the hell up. He was talking about Willie Mays for some reason, and said that Mays hit 55 home runs “back when it was hard to hit 55.” Keith coyly said that, while it was easy for a while, it was “getting hard again,” at which point he abruptly stopped talking.

Keith’s unusual candor about drug use and Mays’ career best of 52 home runs aside, this pinged my “Stuff Keith Hernandez Says” meter. After accounting for any time trend and other factors that might explain home run hitting, is there an upward trend? If so, is there a pattern to the remaining home runs?

The first step is to examine the data to see if there appears to be any trend. Just looking at it, there appears to be a messy U shape with a minimum around t=20, which indicates a quadratic trend. That means I want to include a term for time and a term for time squared.

Using the per-game averages for home runs from 1955 to 2009, I detrended the data using t=1 in 1955. I also had to correct for the effect of the designated hitter. That gives us an equation of the form

$\hat{HR} = \hat{\beta_{0}} + \hat{\beta_{1}}t + \hat{\beta_{2}} t^{2} + \hat{\beta_{3}} DH$

The results:

        Estimate   Std. Error   t-value   p-value   Signif
B0       0.957     0.0328       29.189    0.0001    0.9999
t       -0.0188    0.0028       -6.738    0.0001    0.9999
tsq      0.0004    0.00005       8.599    0.0001    0.9999
DH       0.0911    0.0246        3.706    0.0003    0.9997

We can see that there’s an upward quadratic trend in predicted home runs that together with the DH rule account for about 56% of the variation in the number of home runs per game in a season ($R^2 = .5618$). The Breusch-Pagan test has a p-value of .1610, so there’s no significant evidence of heteroskedasticity – nothing we should get concerned about.
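A model like this is one `lm` call in R. The series below is simulated from the fitted coefficients purely for illustration, since the raw per-game averages aren’t reproduced in the post:

```r
# One-call version of the quadratic-trend model with a DH dummy.
set.seed(42)
t     <- rep(1:55, 2)                       # 1955-2009 for each league
lg_al <- rep(c(0, 1), each = 55)
DH    <- as.numeric(lg_al == 1 & t >= 19)   # AL adopted the DH in 1973 (t = 19)
hr    <- .957 - .0188 * t + .0004 * t^2 + .0911 * DH + rnorm(110, sd = .11)
fit   <- lm(hr ~ t + I(t^2) + DH)           # I() protects the squared term
summary(fit)$r.squared                      # compare with the post's .5618
```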

Then, I needed to look at the difference between the actual number of home runs per game and the predicted number, which is just the residual:

$Residual = HR - \hat{HR}$

This represents the “abnormal” number of home runs per year. The question then becomes, “Is there a pattern to the number of abnormal home runs?”  There are two ways to answer this. The first way is to look at the abnormal home runs. Up until about t=40 (the mid-1990s), the abnormal home runs are pretty much scattershot above and below 0. However, at t=40, the residual jumps up for both leagues and then begins a downward trend. It’s not clear what the cause of this is, but the knee-jerk reaction is that there might be a drug use effect. On the other hand, there are a couple of other explanations.

The most obvious is a boring old expansion effect. In 1993, the National League added two teams (the Marlins and the Rockies), and in 1998 each league added a team (the AL’s Rays and the NL’s Diamondbacks). Talent pool dilution has shown up in our discussion of hit batsmen, and I believe that it can be a real effect. It would be mitigated over time, however, by the establishment and development of farm systems, in particular strong systems like the one that’s producing good, cheap talent for the Rays.

## Modeling Run Production (June 19, 2010)

Posted by tomflesher in Baseball, Economics.

A baseball team can be thought of as a factory which uses a single crew to operate two machines. The first machine produces runs while the team bats, and the second produces outs while the team is in the field. This is a somewhat abstract way to look at the process of winning games, because ordinarily machines have a well-defined input and output. In a box factory, the input comprises man-hours and corrugated board, and the output is a finished box. Here, the input isn’t as well-defined.

Runs are a function of total bases, certainly, but total bases are functions of things like hits, home runs, and walks. Basically, runs are a function of getting on base and of advancing people who are already on base. Obviously, the best measure of getting on base is On-Base Percentage, and Slugging Average (expected number of bases per at-bat) is a good measure of advancement.

OBP wraps up a lot of things – walks, hits, and hit-by-pitch appearances – and SLG corrects for the greater effects of doubles, triples, and home runs. That doesn’t account for a few other things, though, like stolen bases, sacrifice flies, and sacrifice hits. It also doesn’t reflect batter ability directly, but that’s okay – the stats we have should represent batter ability since the defensive side is trying to prevent run production. The model might look something like this, then:

$\hat{Runs} = \hat{\beta_0} + \hat{\beta_1} OBP + \hat{\beta_2} SLG + \hat{\beta_3} SB + \hat{\beta_4} SF + \hat{\beta_5} SH$

This is the simplest model we can start with – each factor contributes a discrete number of runs. If we need to (and we probably will), we can add terms to capture concavity of the marginal effect of different stats, or (more likely) an interaction term for SLG and, say, SB, so that a stolen base is worth more on a team where you’re more likely to be brought home by a batter because he’s more likely to give you extra bases. As it is, however, we can test this model with linear regression. The details of it are behind the cut. (more…)
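As a sketch, the linear specification above can be fit to team-season data in one line. The data frame below is filled with simulated placeholder values, not real team stats, and the coefficients used to generate it are assumptions:

```r
# Sketch of the linear runs model on simulated team-season data.
set.seed(3)
n <- 30
teams <- data.frame(
  OBP = runif(n, .30, .36), SLG = runif(n, .38, .46),
  SB  = rpois(n, 90), SF = rpois(n, 42), SH = rpois(n, 50)
)
teams$R <- with(teams, -800 + 3000 * OBP + 1500 * SLG + 0.2 * SB) +
  rnorm(n, sd = 25)
fit <- lm(R ~ OBP + SLG + SB + SF + SH, data = teams)
round(coef(fit), 2)                         # marginal runs per unit of each stat
```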

## Trends in DH use (June 11, 2010)

Posted by tomflesher in Baseball, Economics.

Last night, Keith Hernandez was talking about how the Mets are scheduled to play in American League parks starting, well, today. He pointed out that the Mets will be in a bit of a pickle because they aren’t built, as AL teams are, to carry one big hitter to be the full-time DH. Instead, an NL team will be forced to spread the wealth among lighter hitters who are carried for their defensive acumen as well as their offensive prowess. Keith then corrected himself and said that AL managers are using the DH differently – to rest individual players instead of having an everyday DH.

That pinged my “Stuff Keith Hernandez says” meter, and so I decided to crunch some numbers and see if that’s true. I interpreted Keith’s statement as implying that the number of designated hitters used should be increasing, since managers are moving away from an everyday DH and toward spreading the DH assignments around a bit more. The crunching also needs to account for interleague play, which should obviously increase the number of DHs. So, after controlling for interleague play, does DH use show an increasing trend with time?
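One way to operationalize the question is to count distinct DHs per team-season and then regress that count on time with an interleague control. This is a sketch with a hypothetical game-log layout (`dh_games` and its columns are assumptions, not a real dataset):

```r
# Sketch: count distinct DHs per team-season from an assumed game-log layout.
dh_games <- data.frame(
  year   = c(1990, 1990, 1990, 2005, 2005, 2005, 2005),
  team   = c("NYA", "NYA", "BOS", "NYA", "NYA", "NYA", "BOS"),
  player = c("A", "A", "B", "C", "D", "E", "F")
)
counts <- aggregate(player ~ year + team, data = dh_games,
                    FUN = function(x) length(unique(x)))
names(counts)[3] <- "n_dh"
counts   # distinct DHs used per team per year
```

With a column flagging interleague games added, something like `lm(n_dh ~ year + interleague, data = counts)` would then test for the trend Keith implies.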

## The DH Redux: Japan (June 7, 2010)

Posted by tomflesher in Baseball.

In an earlier post, I analyzed team-level data from Major League Baseball to determine the size of the effect that the Designated Hitter rule has on on-base percentage. The conclusion I came to was that, if the model is properly specified, the effect of the designated hitter rule is about .008 in on-base percentage. If the reasoning was correct, then when there are no other confounding variables, the effect should be similar in size for any other professional league.

Of course, the other major professional league is Nippon Professional Baseball, the major leagues of Japan. Since it produces players at a level similar to MLB, and the other factors are similar – the DH rule was adopted in 1975 by one, but not both, of the two major leagues – NPB is an ideal place to try to test the model I specified in this post.

## Does the DH Rule Cause Batters to be Hit? (June 2, 2010)

Posted by tomflesher in Baseball, Economics.

In an earlier post, I crunched some numbers on the Designated Hitter rule and came to the conclusion that the DH adds about .3 extra trips to first base per game after accounting for trend. I’m going to play around with another stat that a lot of people seem to think should be affected indirectly by the DH rule.

The Conventional Wisdom™ is that the DH should increase hit batsmen. The argument is that pitchers don’t bear the costs of hitting a batter with a pitch because they don’t bat, so they’ll be less careful to avoid hitting a batter or more likely to plunk a batter out of malice. Do the numbers bear that out?

## What is the effect of the Designated Hitter? (May 30, 2010)

Posted by tomflesher in Baseball.