Edwin Jackson, Fourth No-Hitter of 2010 June 25, 2010

Posted by tomflesher in Baseball, Economics.

Tonight, Edwin Jackson of the Arizona Diamondbacks pitched a no-hitter against the Tampa Bay Rays. That’s the fourth no-hitter of this year, following Ubaldo Jimenez’s no-hitter and the perfect games by Dallas Braden and Roy Halladay.

Two questions come to mind immediately:

  1. How likely is a season with 4 no-hitters?
  2. Does this mean we’re on pace for a lot more?

The second question is pretty easy to dispense with. Taking a look at the list of all no-hitters (which interestingly enough includes several losses), it’s hard to predict a pattern. No-hitters aren’t uniformly distributed over time, so saying that we’ve had 4 no-hitters in x games doesn’t tell us anything meaningful about a pace.

The first is a bit more interesting. I’m interested in the frequency of no-hitters, so I’m going to take a look at the list of frequencies here and take a page from Martin over at BayesBall in using the Poisson distribution to figure out whether this is something we can expect.

The Poisson distribution takes the form

f(n; \lambda)=\frac{\lambda^n e^{-\lambda}}{n!}

where \lambda is the expected number of occurrences and we want to know how likely it would be to have n occurrences based on that.

Using Martin’s numbers – 201506 opportunities for no-hitters and an average of 4112 games per season from 1961 to 2009 – I looked at the number of no-hitters since 1961 (120) and determined that an average season should return about 2.44876 no-hitters. That means

\lambda =  2.44876

and

f(n; \lambda = 2.44876)=\frac{2.44876^n  (.0864)}{n!}

Above is the distribution, where:

  • p is the probability of exactly n no-hitters being thrown in a single season of 4112 games;
  • cdf is the cumulative probability, or the probability of n or fewer no-hitters;
  • p49 is the predicted number of seasons out of 49 (1961-2009) that we would expect to have n no-hitters;
  • obs is the observed number of seasons with n no-hitters;
  • cp49 is the predicted number of seasons with n or fewer no-hitters;
  • cobs is the observed number of seasons with n or fewer no-hitters.

It’s clear that 4 or even 5 no-hitters is a perfectly reasonable number to expect.
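For anyone who wants to reproduce the probability columns, here’s a minimal sketch of the calculation using the λ above. (The p49 column just scales each probability by the 49 seasons; the observed columns come from the no-hitter list itself.)

```python
from math import exp, factorial

lam = 2.44876  # expected no-hitters per 4112-game season (120 * 4112 / 201506)

def poisson(n, lam):
    """Probability of exactly n no-hitters in one season."""
    return lam ** n * exp(-lam) / factorial(n)

cdf = 0.0
for n in range(9):
    p = poisson(n, lam)
    cdf += p
    # p49: expected number of the 49 seasons (1961-2009) with exactly n no-hitters
    print(f"n={n}  p={p:.4f}  cdf={cdf:.4f}  p49={49 * p:.2f}")
```

The probability of exactly 4 no-hitters comes out around 13%, and seasons with 4 or fewer cover roughly 90% of the distribution.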


Welcome to the Majors, Jay June 22, 2010

Posted by tomflesher in Baseball.

Jay Sborz had a rough debut tonight, coming on in relief of Justin Verlander after a rain delay in the top of the 3rd of the Tigers-Mets game. He faced seven batters in two-thirds of an inning, plunking the first two – Rod Barajas and Jeff Francoeur – and giving up hits to the last three. As Sborz, who was obviously struggling with nerves, tried to pitch his way out of the inning, Mets commentator Gary Cohen mocked him mercilessly. “That’s got to be some kind of record,” for one.

Though Gary said it, that pinged my “Stuff Keith Hernandez Says” meter, and I trotted off to Baseball-Reference.com to look it up. Since 1973, six other pitchers have hit two batsmen in a relief debut. Were any of them as bad as Sborz?

We don’t have to go back too far to find someone who was. In 2002, Justin Miller of the Blue Jays made his debut against the Devil Rays and hit Chris Gomez, then Jason Tyner. Miller deserves special recognition – after that beautiful start, he held on to pitch 2 2/3 innings and got the win!

Honorable mention goes to Mitch Stetter of the Brewers. In a 2007 game against the Pirates, Stetter debuted in the last inning of a 12-2 blowout. He was on the winning side, though it ended up 12-3. Stetter hit Jack Wilson. He threw a wild pitch in the process of walking Nyjer Morgan, then iced the cake by plunking Nate McLouth. That was followed up with a groundout that scored Wilson and a merciful game-ending double play.

At the other end… June 22, 2010

Posted by tomflesher in Baseball.

Although AJ Burnett had a bad first inning last night, the Oakland A’s had a bad tenth inning. After taking a 2-2 game into extra innings, the Cincinnati Reds knocked three out of the park against pitchers Michael Wuertz and Cedrick Bowers. The first was hit by Ramon Hernandez; Joey Votto and Scott Rolen then went deep back to back. Although extra-inning home runs aren’t very rare (there have been 35 so far this year), only three pitchers have surrendered more than one, and neither of the other two (Chad Durbin and Matt Belisle) gave them both up on the same night.

Last year, everyone’s favorite balk-off artist, Arizona’s Esmerling Vasquez, gave up two home runs in extra innings against the Texas Rangers on June 25th. Those were two of 83 free-baseball homers in 2009. Extra-inning home runs are more common in the top of the inning, because in a tied game a home run by the home team ends the game immediately, while a home run by the road team still leaves the bottom half to be played. Even so, I would have expected the proportions to be much more lopsided than they are. In 2009, for example, of those 83, only 44 were hit by the away team with 39 hit by the home team (and 33 of those were game-enders).

So far, no batter has more than one extra-inning home run this year, but last year there were several. Andre Ethier led the pack with 3, followed by a bunch of batters with 2.

AJ Burnett: Statistical Anomaly June 21, 2010

Posted by tomflesher in Baseball.

Tonight, A.J. Burnett had a weird first inning in a game that’s still going on as I write this. He got the first two outs fairly easily, and then surrendered home runs to Justin Upton, Adam LaRoche, and Mark Reynolds. Before he knew it, he was down 5-0 in the bottom of the first. That can’t happen very often.

I queried Baseball-Reference.com’s event finder for home runs, then narrowed it down to two-out, first-inning home runs this year. Prior to tonight, there had been 82. None of them came in a three-homer game – that answers that.

Just for fun, I checked 2009 as well. In total, there were 209 2-out, first-inning home runs in 2009. Only one of those home runs happened in a three-homer game, so it didn’t happen then, either.

Poor AJ.

Carlos Zambrano, Ace Pinch Hitter? June 21, 2010

Posted by tomflesher in Baseball.

Earlier this year, Chicago Cubs manager Lou Piniella experimented with moving starting pitcher and relatively big hitter Carlos Zambrano to the bullpen, briefly making him the Major Leagues’ best-paid setup man. Zambrano is back in the rotation as of the beginning of June. I’m curious what the effect of moving him to the bullpen was.

The thing is that not only is Zambrano an excellent pitcher (though he was slumping at the time), he’s also regarded as a very good hitter for a pitcher. He’s a career .237 hitter. Last year he slumped to “only” .217 in 72 plate appearances (17th most in the National League), which was still 6th among National League pitchers with at least 50 plate appearances. He didn’t walk enough (his OBP was 13th on the same list), but he was 9th of the 51 pitchers on the list in Base-Out Runs Added (RE24), at about 5.117 runs below an average batter. Ubaldo Jimenez was also up there with a respectable .220 BA and .292 OBP, but a -8.950 RE24.

It should be pointed out that pitcher RE24 is almost always negative for starters – the best on that list is Micah Owings at -2.069. Zambrano’s run contribution was negative, sure, but it was a lot less negative than most starters’. Moving to the bullpen also cost Zambrano his flexibility as an emergency pinch hitter (something Owings is going through right now after his own recent move to the bullpen) – a reliever is too valuable to burn on a pinch-hitting appearance. As a result, he loses at-bats, which not only keeps him from amassing hits but also allows him to get rusty.

It’s hard to precisely value the loss of Zambrano’s contribution, although he’s already on pace for -6.1 batting RE24. It’s likely, in my opinion, that his RE24 will rise as he continues hitting over the course of the year. His pitching value is also negative, however, which is unusual. He’s always been very respectable among Cubs starters. It’s possible that although he was pitching very well in relief, the fact that he has the ability to go long means that it’s inefficient to use him as a reliever. This is the opposite of, say, Joba Chamberlain, who is overpowering in relief but struggles as a starter.

As a starter, Zambrano has never been a net loss of runs. He needs to stay out of the bullpen, and Joba needs to stay there.

E-Reader Price Wars June 21, 2010

Posted by tomflesher in Economics.

Holy cow… two non-baseball updates in a row! I’ll have to fix that later on.

The news all over is that Amazon has cut the price of the Kindle from $259 to $189. By all accounts, this was prompted by the $60 price cut that Barnes & Noble gave the Nook ($259 to $199), which in turn was prompted by the low price of the Borders brand Kobo ($149). The availability of the iPad, an augmented substitute good for e-readers, will also potentially cause trouble, but the mere existence of the iPad doesn’t necessarily create downward price pressure in and of itself.

The Nook, Kindle, and Kobo are all extremely similar goods. I’d go so far as to say they’re perfect substitutes, if we consider this Kobo advertising table. Taking the American market, the price differential will disappear when the new price cuts take effect. The weight and thickness differences are negligible. The memory is similar. The only major difference is connectivity: the Kobo can use Bluetooth, while the Nook uses Wi-Fi and 3G, and the Kindle uses 3G. That difference probably won’t produce significant market segmentation – no one is likely to buy both a Nook and a Kindle just to take advantage of the Nook’s Wi-Fi – so it’s fair to consider these substitute goods with positive cross-elasticities of demand.

When prices for substitute goods with different producers move together, there are three options, two of which are sensible in a rational market:

  1. The firms could be colluding.
  2. The firms could be in a price war.
  3. The price change could be coincidental.

Coincidence isn’t very likely or very interesting, so we’ll only consider options 1 and 2. Collusion is fun to consider, but probably not relevant here. For one, when prices move due to collusion, they generally move up because firms are no longer attempting to price each other out of the market. Tacit collusion might be the reason that about $200 is the floor for 3G devices, but it’s unlikely to be the reason both firms cut prices.

The price war would explain the fact that the changes in price are negative and that they’re meeting at a similar level. Price war means increased competition. Assuming demand doesn’t change (it will), the firm with the lower price will sell its product. Assuming demand increases as price decreases (it will), each lowering of price should bring additional marginal consumers to the pool of people willing to buy these devices, so while prices fall, profits may or may not increase. If profits increase, however, it will likely be profitable to cut the price even further, because additional consumers can still be reached, and there will be downward pressure from other firms trying to keep up. As a result, price will approach the cost of production. Price won’t reach the marginal cost of production, however, since there are barriers to entry into the e-reader market (including specialized equipment, R&D for a new device since the current devices are protected by patents and trade secrets, and acquisition of rights to books).
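To make the undercutting logic concrete, here’s a toy sketch of a Bertrand-style price war. The $10 step and the $150 unit cost are made-up numbers for illustration, not estimates of anyone’s actual costs:

```python
# Toy Bertrand-style price war: two firms alternately undercut the lower
# price by a fixed step until undercutting would drop below unit cost.
# All numbers are hypothetical.
def price_war(p_a, p_b, cost, step=10):
    prices = {"A": p_a, "B": p_b}
    mover = "A" if p_a > p_b else "B"   # the higher-priced firm moves first
    while True:
        rival = min(prices.values())
        target = rival - step
        if target < cost:               # no profit in undercutting further
            break
        prices[mover] = target
        mover = "B" if mover == "A" else "A"
    return prices

# e.g. Kindle at $259 vs Nook at $199, assumed unit cost $150
print(price_war(259, 199, cost=150))
```

With those numbers the firms leapfrog each other down to $169 and $159 – settling just above the assumed cost floor, which is exactly the dynamic described above.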

A quick rule of thumb to see if we’re dealing with price war or collusion is to check the stock prices of the producer companies. All things being equal, if a price move increases stock price, then the move is the result of anti-competitive measures like collusion, because there will be higher profits. If a price move decreases stock price, then the move is likely to increase competition and lower profits will result. Here, to quote KTTC:

Barnes&Noble shares fell 55 cents, or 3.2 percent, to finish trading at $16.52. Amazon shares declined $3.28, or 2.6 percent, to $122.55.

(Apple’s stock, for the record, ticked down today without much else to explain the drop.) This is probably a pro-competition move. The likely winners fall into two groups:

  1. E-reader consumers, who will benefit from lower prices and more competition for amenities. The producers will likely be fighting for contracts with publishing houses, and a larger selection of books may be forthcoming.
  2. iPad users. E-readers are an imperfect substitute for the iPad, so in order for the iPad to remain a rational choice after the price cuts, it will have to become a better product to avoid losing out to people who will get a better value by buying a cheaper product. This should mean more of a focus on the differential aspects of the iPad like the App Store, iTunes, and (yes) iBooks.

This should be fun to watch.

What would the House of Commons look like under a Liberal-NDP merger? June 20, 2010

Posted by tomflesher in Academia, Canada.

It’s been a while since I did any Canadian politics ranting.

Coalition government and/or a left-wing merger in Canada is all the rage at the Globe and Mail, with a Globe and Mail editorial discussing the ramifications of a shift left, Jeffrey Simpson arguing that the whole thing is a stupid idea, and Neil Reynolds talking about the Whigs for no good reason. The arguments on all sides contemplate a merger or coalition of the Liberals and the New Democratic Party, which is the most logical assumption considering that the Greens are nonviable nationally (although I did enjoy discussing the “hypothetical Mango Coalition” that could result from the 2008 election if the red, orange, and green parties joined up).

I’m interested in the effect of a merger, so I’m going to make some assumptions, not all of which are warranted:

  • The Bloc Québecois is not party to any coalition. BQ voters will always vote for the BQ. (This is probably the weakest assumption, since the BQ actively campaigns for votes and almost certainly won some marginal ridings that way.)
  • The Green Party is not party to any coalition. Green voters will always vote for the Greens. (Again, this is a fairly weak assumption and I might examine the hypothetical Mango Coalition in a later post, but they’re not considered relevant by the editorialists so I’ll ignore them. However, they would have made quite a difference in the model below.)
  • Ridings won with a majority by any party remain with that party.
  • A riding won with a plurality by a Liberal or NDP candidate would remain with the merged party regardless of the current vote split.
  • A riding won with a plurality by a Conservative or BQ candidate needs to be reconsidered. I’ll do so by assuming that 66% of the NDP vote goes to the merged party and the other 34% evaporates (to model voters being displeased by a perceived shift to the middle and staying home). Based on those numbers, the party with a plurality takes the seat.
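Those assumptions are mechanical enough to sketch in code. This is only an illustration of the reallocation rule – the riding and its vote totals below are hypothetical, not taken from the 2008 results:

```python
def merged_winner(votes):
    """Apply the merger assumptions to one riding's vote totals.
    votes: dict keyed by party, e.g. 'LIB', 'NDP', 'CON', 'BQ', 'GRN'."""
    winner = max(votes, key=votes.get)
    if votes[winner] > sum(votes.values()) / 2:
        # majorities stay put (a Liberal or NDP seat passes to the merged party)
        return "LD" if winner in ("LIB", "NDP") else winner
    if winner in ("LIB", "NDP"):
        return "LD"                   # Liberal/NDP pluralities transfer outright
    # Conservative/BQ plurality: re-run with 66% of the NDP vote merged in
    # (the other 34% stays home)
    merged = dict(votes)
    merged["LD"] = merged.pop("LIB", 0) + 0.66 * merged.pop("NDP", 0)
    return max(merged, key=merged.get)

# Hypothetical riding: a Conservative plurality that flips under the merger
print(merged_winner({"CON": 18000, "LIB": 15000, "NDP": 9000, "GRN": 3000}))
```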

There were some surprising results. The Liberal-NDP merged party ended up poaching 24 seats in total, including 17 from the Conservatives and 7 from the Bloc Québecois. In total, that puts the parties at:

[Pie chart of the House of Commons under a hypothetical merger]

  • Liberal Democrats: 137 seats
  • Conservatives: 127 seats
  • Bloc Québecois: 41 seats
  • Independent: 3 seats

This puts a different spin on the current House. However, we must take into account the Bloc’s behavior. After the 2008 election, there was discussion of a Liberal-NDP-Bloc coalition government. However, it is not in the Bloc’s interest to form a coalition, since the Grits’ position on Québec sovereignty is not compatible with the Bloc’s. As a result, we must consider this a non-coalition government – a minority (137 of 308 seats) run by the Grits.

It’s difficult to imagine this situation as being much better for the Grits. Dion would have run a minority government, but as a weak leader he likely would have been forced into an election some time between October 2008 and now. Ignatieff would still have been waiting in the wings to take over the leadership of the party in the ensuing chaos.

A merger is not a panacea.

Modeling Run Production June 19, 2010

Posted by tomflesher in Baseball, Economics.

A baseball team can be thought of as a factory which uses a single crew to operate two machines. The first machine produces runs while the team bats, and the second machine produces outs while the team is in the field. This is a somewhat abstract way to look at the process of winning games, because ordinarily machines have a fixed input and a fixed output. In a box factory, the input comprises man-hours and corrugated board, and the output is a finished box. Here, the input isn’t as well-defined.

Runs are a function of total bases, certainly, but total bases are functions of things like hits, home runs, and walks. Basically, runs are a function of getting on base and of advancing people who are already on base. Obviously, the best measure of getting on base is On-Base Percentage, and Slugging Average (expected number of bases per at-bat) is a good measure of advancement.

OBP wraps up a lot of things – walks, hits, and hit-by-pitch appearances – and SLG corrects for the greater effects of doubles, triples, and home runs. That doesn’t account for a few other things, though, like stolen bases, sacrifice flies, and sacrifice hits. It also doesn’t reflect batter ability directly, but that’s okay – the stats we have should represent batter ability since the defensive side is trying to prevent run production. The model might look something like this, then:

\hat{Runs} = \hat{\beta_0} + \hat{\beta_1} OBP + \hat{\beta_2} SLG + \hat{\beta_3} SB + \hat{\beta_4} SF + \hat{\beta_5} SH

This is the simplest model we can start with – each factor contributes runs linearly. If we need to (and we probably will), we can add terms to capture concavity of the marginal effect of different stats, or (more likely) an interaction term for SLG and, say, SB, so that a stolen base is worth more on a team whose batters are more likely to bring you home with extra bases. As it is, however, we can test this model with linear regression. The details of it are behind the cut. (more…)
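As a sketch of what that regression looks like mechanically (the real estimates are behind the cut – the team-season numbers below are invented purely to show the setup):

```python
import numpy as np

# Ordinary least squares fit of the proposed run-production model.
# Rows would be team-seasons; these values are made up for illustration.
# Columns: OBP, SLG, SB, SF, SH
X = np.array([
    [0.340, 0.440,  90, 45, 60],
    [0.325, 0.410, 120, 40, 75],
    [0.355, 0.470,  70, 50, 50],
    [0.310, 0.390, 100, 35, 80],
    [0.345, 0.455,  85, 48, 55],
    [0.330, 0.425, 110, 42, 65],
])
runs = np.array([780, 705, 840, 650, 800, 740])

# Prepend a column of ones for the intercept beta_0
A = np.column_stack([np.ones(len(X)), X])
betas, *_ = np.linalg.lstsq(A, runs, rcond=None)
print(dict(zip(["b0", "OBP", "SLG", "SB", "SF", "SH"], betas.round(2))))
```

With so few fake rows the coefficients themselves are meaningless; the point is only the mechanics of estimating the five betas plus an intercept.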

Leadoff Home Runs June 19, 2010

Posted by tomflesher in Baseball.

Jose Reyes led off today’s Mets-Yankees game with a home run off Phil Hughes. That’s the eleventh leadoff home run of the year. That’s a little over half as many as there were last year on June 19, when Nate McLouth hit the 19th leadoff home run of 2009.

Last year, there were 51 leadoff home runs over roughly 6 months (early April through the first week of October), which puts uniformly distributed homers at 8.5 per month (so McLouth’s #19 on June 19 was about 2.25 behind pace). So far, with eleven over 2.5 months, we’re on pace for 26.4, or, to be generous, about 30 leadoff home runs.
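The pace arithmetic is simple enough to check in a few lines:

```python
# Back-of-the-envelope pace arithmetic from the post
season_months = 6                                 # early April to early October
hr_2009 = 51
per_month_2009 = hr_2009 / season_months          # 8.5 per month
behind_pace = per_month_2009 * 2.5 - 19           # McLouth's #19 on June 19
pace_2010 = 11 / 2.5 * season_months              # 11 so far over 2.5 months

print(per_month_2009, behind_pace, pace_2010)
```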

The change probably isn’t indicative of anything other than chance, but in 2008 #24 of 52 came on June 20, and in 2007 they were already up to 28 of 59 by June 19. Over the past few years there’s been a slowing of leadoff home runs which may be due to chance or may be due to some other factor. Who knows? It’s way too small a sample to say anything about.

Cell Phone Insurance June 18, 2010

Posted by tomflesher in Economics.

Yesterday, I bought a new phone. It’s a Samsung Gravity 2 and with a two-year contract it cost $79.99 – it came with some accessories that aren’t of interest for now. The salesman tried to sell me insurance at a whopping $4.99 per month over the course of the contract. I told him I’d do $4.99 total, because I’m an economist, but he didn’t bite. (Sigh.)

How bad a deal is that? Well, I wanted to find out. First, I made some assumptions:

  • The appropriate interest rate is 1.25% APY (about .104% per month), which is roughly what my bank account is paying. I could put some amount of money in the bank right now, earn interest at that rate, and have enough to cover every insurance payment; that amount is called the Net Present Value, and over 24 months at $4.99 per month it’s about $118.34.
  • The likelihood of something happening to my phone is entirely random, so I can’t take it into account when determining whether the insurance is a good buy.
  • My phone depreciates at a rate of e^{-.998058*t} , where t is the number of the month (so this month is month 1, next month is month 2, etc.). This puts my discount rate at exactly my APY. It makes for a quick depreciation, with the phone getting within a dollar of its resale value within about 4 months. It captures the quick initial drop and the slow leveling off quite nicely.
  • The definition of ‘good value’ is that at the time I turn in a damaged phone, its depreciated value is more than the cost of all the premiums I’ve paid – that is, the insurance replaces something worth more than what I’ve spent on it. I chose to use the depreciated value rather than the cost of a new phone because it reflects that I’ve gotten some use out of the phone.

The long and the short of it is that if I damage the phone before about the 7th month, it’s a good value. After that, it’s all gravy for T-Mobile.
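Here’s a sketch of that calculation. The NPV of the premium stream follows directly from the assumptions above; the $30 resale floor is my own guess, since the post only specifies the exponential decay rate:

```python
from math import exp

price, premium, months = 79.99, 4.99, 24
r = 0.0125 / 12                      # 1.25% APY -> about 0.104% per month

# NPV of the premium stream, paid at the start of each month
npv = sum(premium / (1 + r) ** t for t in range(months))
print(f"NPV of premiums: ${npv:.2f}")   # about $118.34

# Depreciated value with the stated decay and an ASSUMED $30 resale floor
floor = 30.0
def value(t):
    return floor + (price - floor) * exp(-0.998058 * t)

for m in range(1, months + 1):
    if premium * m > value(m):       # premiums paid now exceed the phone's worth
        print(f"Insurance stops paying off around month {m}")
        break
```

With the assumed $30 floor, the cumulative premiums overtake the phone’s depreciated value in month 7, matching the conclusion above.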

I ended up telling the salesman that I’m an economist and so paying that much for insurance is against my religion.

For those who are interested in the chart, it’s behind the cut. It lists monthly payment, month ordinal, the effective interest rate, present value of that payment, NPV as sum of the present values, the depreciated value of the phone, the depreciation factor, and the instantaneous depreciation.

(more…)