For my MIT class this semester we have to create a website with at least 10 HTML forms, linked to a SQL database those forms can interact with. Griffin Pierce and I have chosen to do a project involving college football. We have a database of ESPN's play-by-play data for all college football games since 2002. Check it out at SaturdayCoach.com.
Recently I finished reading the book Get in the Boat, by Pat Bodin.
The point of this book is that technical people are not in the boat with corporate leaders, because they speak a different language and have different priorities and risks.
Well, that’s nothing new.
However, the part of this book I found particularly enlightening was the distinction it draws between technologists and IT. That hit me like a 2×4 square to the jaw. I got into this field 20+ years ago because, coming out of college, I saw with wonder the way Cisco Systems was connecting and changing the world. It was clear to me, even back in 1992, that people in the wake of Cisco Systems were the movers and shakers of the world, and only good things came from being associated with this company of strength. At that point in time central IT did not exist, not like today anyway, and IT was being elevated from a cost center to a strategic focus for the business. Very relevant. Somewhere along the way IT became a burden: divorced from the leading-edge technology that changes business for the better and gives each business that properly digests technology a competitive advantage over its peers, and instead married to a me-too, table-stakes existence of basic uptime and SLA fulfillment.
For example, I know a VP at a leading higher education institution whose main job is approving emails that go out campus-wide. Think about that for a minute: when did IT go from running a project where a person in London could color-match a car manufactured in Berlin to being a bureaucratic email approver? And this is a vice president. It makes you wonder how relevant the lowly IT individual contributor is to the president of the university.
The book does a great job of explaining value-chaining: how your actions at the Red level impact Blue, and in turn Green. Don't understand those colors? Read the book! It reinforces the basic message we already know inside: you are not relevant because of what you do, but because of how you affect other people.
When you talk to Green about "technologists," they equate that word with Blue people: lines of business who practice "shadow IT." When you talk to Red about "technologists," they think IT. Big mistake. Even the way we in IT talk about Blue people delegitimizes and dirties them (again, "shadow IT"). We have to be those Blue people, not bash them!
For anyone who has read The Phoenix Project, this book is a great 2.0 follow-up. A lot of the principles and messaging connect.
This is a fantastic read for anyone who works in my field of technology, especially those who work or sell into information technology.
In my line of work (Higher Education) we may have 5,000 full-time equivalents in an organization. Of those, I'd say Red is about 200 (IT people), Blue is 4,795 (faculty/staff), and Green is about 5 people. Five people, that's all. A lot of titles like VP of Applications may seem to the untrained eye to be Green, but they are Blue. A teacher who is leading edge and consumes technology in her classroom in a way years ahead of her peers and gets better grades for her students: Blue, not Red. I think of Red as lone wolves in centralized IT. Period.
Get in the Boat is available on Amazon for $17.95.
Cisco GSX (our annual sales meeting) was, as always, an outstanding experience! There was one line spoken on the main stage that got the largest applause I have seen in years, and deservedly so. Gerri Elliott (Cisco's new sales and marketing boss, and also my boss's boss's boss's boss's boss's boss's boss :/ – wish I was closer to the top!) pointed out to Chuck Robbins on stage that this meeting marked his 3-year anniversary as Cisco CEO, and that in that time Cisco's stock price is up… 82%! That brought the house down, as it should.
82% seemed incredible to me when I heard it. I know the price has risen, but wow, a near doubling? That means under Chuck's leadership Cisco has added $90B of market cap. I know that's small compared to the top 5 stocks, but it is an amazing amount of money in its own right. John Chambers, when he became CEO in 1995, had a $5B company and left it 20 years later at $130B: $125B of market cap added in 20 years. Chuck has done $90B in just 3 years.
So, how is it possible that Cisco stock has risen so much in 3 years? From a sales point of view we have not materially grown sales at all in the last 5 years, not even at the rate of inflation, even with new acquisitions. Here are the numbers for the last 10 years of Cisco earnings:
| Quarter ending | Revenue ($M) | Net income ($M) | Dividends paid | Shares outstanding (B) | Stock price at quarter end | Market cap ($B) |
| --- | --- | --- | --- | --- | --- | --- |
What you should take away from the above table:
commentary: Even with a paltry 2% inflation over 3 years, that $12.3B should have grown to $13B. On top of that, Cisco has acquired 25 companies over the last 3 years; that alone should add another $500M of revenue. So at 0% real growth, Cisco revenue for FY18 should have been $13.5B, but it was not: only $12.3B. In real terms that's a contraction of 9%. Yuck!
commentary 2: In Q2 of FY18 Cisco actually reported a net loss of $8.8B; we took a one-time charge of $11.1B associated with repatriating overseas capital. I have not added that $11.1B into the table above, otherwise the P/Es would be negative.
So based on #1 alone, Cisco’s stock price should have gone down 10% in the last 3 years, not up 82%. Let’s move on…
2. Net income per year is up slightly in three years, maybe. Take the table above and compute yearly numbers. Here are the year-over-year numbers for trailing-4-quarter net income:
FY18 net income has grown to $11.2B from FY15's $9B (25%), but that's not entirely accurate. See the note above about the Q2 charge; with that included, Cisco earned roughly $0 for 2018.
So the company is not selling more, but it is making more money on the same volume of stuff sold. Good! Let’s say that should increase the stock price, 10-20%. So where do we get 82%? Read on…
3. Shares are contracting. The company continues to use profits to retire shares, going from about 6B shares a decade ago to under 5B shares now. That is big: the same income spread across fewer shares yields higher EPS. The reduction in shares from FY15's 5.1B to FY18's 4.7B is about 8%, so there is another 8% of that 82% we are looking for.
4. Multiple expansion. This is the real generator of wealth. In FY15 Cisco had a trailing P/E of 14.7 (12.3 on a forward basis). That was just simply way too low. Now we get a 19.8 trailing P/E and a forward estimate of about 18, which is in line with the market. I personally think, given the opportunity and space Cisco plays in, we should get a rich premium over market multiples: not back to 2001's 120x, but to 30x? Sure! The Internet of Things is a big opportunity and we are poised to capture it.
That expansion from 14.7 to 19.8 is 35% — so that’s 35% of the 82%. Or is it? Look more closely:
1.25 * 1.08 * 1.35 = 1.82
Did you see it? There is your 82%.
Increase earnings 25%
Decrease shares outstanding 8%
and expand the whole multiple 35%
It's a multiplicative effect: the net is 1.25 × 1.08 × 1.35 ≈ 1.82, not the 68% you would get by simply adding 25% + 8% + 35%. It is geometric; the new multiple applies not only to the new income, but to all income. That's the magic.
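The compounding is easy to verify in a couple of lines of Python (a quick check of the arithmetic, using the figures quoted above; the share reduction is taken as the quoted +8% EPS boost):

```python
# Total return from compounding earnings growth, the EPS boost from
# buybacks, and multiple expansion. Each input is a fractional change,
# e.g. 0.25 for +25%.

def total_return(earnings_growth, eps_boost_from_buybacks, multiple_expansion):
    return ((1 + earnings_growth)
            * (1 + eps_boost_from_buybacks)
            * (1 + multiple_expansion)) - 1

print(f"{total_return(0.25, 0.08, 0.35):.0%}")  # -> 82%
```

Adding the three percentages would give only 68%; the extra 14 points come from each factor acting on the others.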
So in a real sense the Trump tax cuts are what has Cisco up 82% in 3 years. As the animal spirits come out and the multiples expand there is a real wealth effect generated. Again, this is all in the backdrop of real revenue down 10% in 3 years.
How does the future set up? Very well. If revenues get to ~$14B per year, that's 20% more than now. If the shares keep dwindling at a similar rate, that's another 10%, and if the political backdrop remains the same and the multiple continues to expand, to 25 for Cisco, then 1.2 × 1.1 × 1.25 = 1.65, a 65% gain, which would be $82.5 per share and would tie the all-time high for the company set on 28 March 2000.
I am changing the computer rankings formula on playoffPredictor.com to reflect margin of victory starting with 2018. This is a big change to the core beliefs of the playoffPredictor.com model, which have always been based on simplicity. To this point the model only considered wins and losses, with no regard to margin of victory, home/away/neutral site, offensive or defensive stats, or the month the game was played. A model that is this simple, this mathematical, and has excellent correlation to the final AP rankings year after year should not be tinkered with lightly.
By making this change to include margin of victory I am hoping to accomplish 2 things:
First, this change should make early-season rankings more in line with human polls starting from about weeks 3-4. Currently, since margin of victory does not matter, the formula cannot really distinguish between a 3-0 Baylor team and a 3-0 Alabama team. It is only later in the season, when there is more connectedness between Baylor's and Alabama's opponents or opponents' opponents, that the model can see Alabama's wins as superior to Baylor's. Now, with margin of victory, the model will be able to reward a 60-0 Alabama win over an average Vanderbilt team earlier in the season.
The 2nd goal deals with Auburn and the final 2017 committee prediction. After 3 very successful years of nailing the playoff committee rankings before they came out, last year was a bust for the playoffPredictor methodology when it came to Ohio State / Alabama and the final rankings. The model put Ohio State at #4 in the final rankings, when the playoff committee had them at #5. So what happened? A lot of it has to do with Auburn. Even after Auburn lost to Georgia in the SEC championship game, the computer did not punish Auburn much: going into the game the computer had them at #11, and after the game the computer had them at #12, so they dropped only one spot in the eyes of the computer. But the humans dropped them from #2 pre-game to #7 post-game. Because the formula uses this week's computer rankings plus last week's average bias, Auburn's bias was so high (9 spots between the computer at #11 and the committee at #2) that when the computer only dropped them from #11 to #12, the model expected the committee would similarly drop them from #2 to about #3. In effect, the computer was right before the committee saw it.
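The bias-correction step just described can be sketched like this (names are illustrative; this is not the actual playoffPredictor code):

```python
# Predicted committee rank = this week's computer rank + last week's bias,
# where bias = last week's committee rank - last week's computer rank.

def predict_committee_rank(computer_now, computer_prev, committee_prev):
    bias = committee_prev - computer_prev  # Auburn week 13: 2 - 11 = -9
    return computer_now + bias

# Auburn 2017: the computer moved them only #11 -> #12 after the SEC
# championship loss, so the model projected the committee would move
# them #2 -> about #3.
print(predict_committee_rank(12, 11, 2))  # -> 3
```

With a bias of nine spots, a one-spot move in the computer translates to a one-spot move in the predicted committee rank, which is exactly why the model could not anticipate the committee's five-spot drop.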
Let's take a closer look. Here are the week 13 computer and human rankings for 2017. Week 13 is after the Auburn-Alabama game (which Auburn won) but before the SEC championship game. Note that under the old formula (which does not take margin of victory into account) Auburn is #11 in the computer and #2 in the humans.
Now here are the week 14 computer and human rankings. Week 14 is after the SEC championship game, where Georgia solidly beat Auburn by 21 points. Again, under the old formula Auburn has moved only from #11 to #12 in the computer, but from #2 to #7 in the humans.
Clearly Auburn did not deserve to move from #11 in the computer to, say, #20 just because they lost to Georgia. Yes, they had 3 losses, but the losses were to Clemson (the #1 team in the final estimation of the committee), Georgia (who played for the national championship), and LSU (an average team), balanced with wins against Georgia and Alabama, who both played for the national championship. That is a team resume that should have been right where the computer said (around 10) and not around 20. So there is no fault in the computer here; it is the fault of the committee for not seeing what the computer saw earlier.
Now let's look at how 2017 would have played out if margin of victory had been part of the computer formula all along. At week 13 Auburn is #4 in the computer. Of course they would still be #2 in the humans, so their bias would be a lot lower: only a 2-spot bias.
At week 14, with the new formula, Auburn moves to #11 in the computer. That, coupled with the more normal team bias, would have put them squarely out of the final top 4 in the model's calculus, accomplishing the stated goal.
Adding margin of victory also serves the first goal: a more accurate computer ranking earlier in the season. Back to 2017, here are the old model's computer rankings for week 4:
and here is what it would have been with the new margin of victory components included:
and finally here is what the AP poll was at that time:
Note details like Wisconsin at #7 in the new method versus outside the top 15 in the old, and Alabama at #3 instead of #5. Mathematically, looking at the top 10 in all 3 lists, the average delta of old-to-AP is 5.0 and the average delta of new-to-AP is 4.1, about an 18% improvement in computer-to-human agreement by week 4. The correlation of the top 15 improves from .65 to .67.
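For reference, the average-delta number is just the mean absolute rank difference between two lists. A toy example with a made-up three-team ranking (not the actual 2017 data):

```python
# Mean absolute rank difference between an AP list and a computer list,
# over the teams in the AP top N. The rankings below are made up.

def average_delta(ap_ranks, computer_ranks):
    return sum(abs(rank - computer_ranks[team])
               for team, rank in ap_ranks.items()) / len(ap_ranks)

ap = {"Alabama": 1, "Clemson": 2, "Oklahoma": 3}
computer = {"Alabama": 3, "Clemson": 1, "Oklahoma": 2}
print(average_delta(ap, computer))  # (2 + 1 + 1) / 3
```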
Now, the method by which I am incorporating margin of victory is: 1 win is credited for games where the final margin of victory is 16 points or less, 2 wins for a margin of 17-32 points, and 3 wins for a margin of 33 points or more. I don't like this, but it is a crude way to start the process and get the desired effect. I do feel there is a real differentiation between a team down 16 and a team down 17. At 16 points down, even late in the 4th quarter, that's just a two-score game. Anything is possible in one play, so even if the offense has the ball and a 16-point lead, a pick six followed by a two-point conversion makes a compelling game, and that is always one play away. However, at 17 points (3 scores) down, I feel the other team will tend to give up a little bit more; you have really beaten a team when you are winning by 17 points with just 5 minutes left to play and you control the ball.

The ideal formula will take all of this into consideration: if I have a 1st down, I am up by 9 points, the other team has no timeouts, and there are 3 minutes on the clock, that should all come into play. I may use ESPN's in-game win probabilities as the margin-of-victory component (when ESPN says team A has a 99.9% chance of winning, call the game then; whether that happens at 45:00 of game time or 59:40 of game time is how the team earns its margin of victory), but I may wait till next year to implement that. I'm all for suggestions! Drop me a line at firstname.lastname@example.org or on reddit under /r/cfbanalysis.
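The bucketing described above fits in a tiny function (a sketch of the rule, not the production code):

```python
# Wins credited to the victor under the new margin-of-victory rule:
# 1 win for a margin of 16 or less, 2 for 17-32, 3 for 33 or more.

def win_credit(margin):
    if margin <= 0:
        return 0  # not a win
    if margin <= 16:
        return 1  # still a two-score game at the end
    if margin <= 32:
        return 2  # three or four scores
    return 3      # blowout

print(win_credit(60))  # Alabama 60-0 over Vanderbilt -> 3
```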
I've always wondered… and I want a real economist to tell me the answer. I am about to head to Vegas and have a burger, fries, and shake at Shake Shack, for $18. On the same day, no doubt, the BLS will release some nutty data showing that inflation measured at the CPI level grew only 2% this year. Going from the McDonald's hamburger, fries, and shake that I bought in ~2010 (for $5) to the $18 Shake Shack equivalent is clearly not 2% each year; it's more like 20% each year.
OK, so I get that an economist would see the Shake Shack burger and the McDonald's burger as different items, so inflation would not apply. This got me thinking: how would an economist view this logic?
Baseline: a one-town global economy with 100 people and 1 restaurant (a McDonald's), which sells a quarter-pound burger for $1.00. All 100 residents eat one of these burgers every year. Year 1 CPI = 100, which also equals the GDP.
At the beginning of year 2 this hypothetical economy gets a new restaurant – a Shake Shack. It charges $2.00 for a quarter pound burger. However, they have no sales for the year. All 100 people still eat one burger at McDonalds every year. Year 2 CPI = 100, GDP=100. <- no inflation in this economy.
During year 3, ten people switch eating their annual hamburger from McD to Shake Shack. GDP = 110 (90 from McD, 20 from SS). However, CPI = 100, since the burger at SS is considered a “different product” or has “productivity gains” or some other such garbage. After all, if they were interchangeable products no rational consumer would pay $2 for something they could get for $1 down the street <- no inflation in this economy.
Year 4, all people stop eating at McD and eat at SS. GDP = 200. CPI remains at 100, since in theory, these 100 consumers could have eaten at McD. <-still no inflation in this economy
Year 5, the McDonald's closes down. GDP=200, CPI=100. Even though people are still eating a burger that is now twice as expensive, and there are no other options, there is still no inflation, since theoretically someone could open a McD? <–?
Year 6, McDonalds corporate buys out Shake Shack in a hostile takeover. They remodel the Shake Shack restaurant, bringing back all McDonalds decorations and “classic” recipes for the burger. However, they keep the price at $2 each. GDP=200, CPI =100.
Note that in year 6 you have the exact same conditions as year 1: the same product, twice as expensive. Yet there has been no inflation at all in this scenario.
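The whole scenario fits in a few lines of Python (a toy tabulation of the numbers above; CPI stays pinned at 100 because the $2 burger is treated as a "different product" and never enters the index):

```python
# Six-year toy economy: GDP is total burger spending, CPI is the price
# index of the "same product" basket.

years = [
    # (burgers sold at McD @ $1, burgers sold at SS @ $2)
    (100, 0),   # year 1: only McDonald's exists
    (100, 0),   # year 2: Shake Shack opens, zero sales
    (90, 10),   # year 3: ten people switch
    (0, 100),   # year 4: everyone eats at Shake Shack
    (0, 100),   # year 5: McDonald's closes
    (0, 100),   # year 6: McD buys SS, same old product at $2
]

gdps = [mcd * 1.00 + ss * 2.00 for mcd, ss in years]
for i, gdp in enumerate(gdps, start=1):
    print(f"year {i}: GDP={gdp:.0f}, CPI=100")
```

GDP doubles from 100 to 200 while measured inflation never moves, which is the paradox the post is pointing at.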
How would an economist react to this line of thought?
An idea hit me today while driving: there is a lot of timing bias in the behavior of an individual stock, because 1) humans are on a daily cycle and 2) opening prices gap away from yesterday's close and closing positioning. There is also the fact that after-hours news moves prices.
What I am looking for: model a stock's performance as a random variable that is *normally distributed*. We find that the daily return of $AAPL or $MSFT is not normally distributed (because of things like October 1987, an event so many standard deviations off the curve that it had only a 10^-79 probability, but it happened anyway). Hypothesis: we know daily price movements are NOT normally distributed, but perhaps the price movements from, say, 11am to 1pm ARE normally distributed.
Check the correlation of $XXX's daily performance with its 11am-1pm performance. Are they correlated for something like $AAPL? What is the 1-year return of $AAPL using only 11am-1pm performance versus full-day performance? I need to test this and report the findings here later.
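A sketch of that test, on synthetic data (a real run would substitute actual intraday quotes for the randomly generated `closes`, `prices_11am`, and `prices_1pm` series; `scipy.stats.normaltest` is the D'Agostino-Pearson normality test):

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for real intraday data: 250 trading days of a
# random-walk close series, with 11am and 1pm prices derived from it.
rng = np.random.default_rng(0)
n_days = 250
closes = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n_days)))
prices_11am = closes * (1 + rng.normal(0, 0.003, n_days))
prices_1pm = prices_11am * (1 + rng.normal(0, 0.004, n_days))

# Daily close-to-close returns vs. the 11am-1pm window returns.
daily_ret = np.diff(closes) / closes[:-1]
midday_ret = (prices_1pm - prices_11am) / prices_11am

# Small p-value -> reject normality for that return series.
_, p_daily = stats.normaltest(daily_ret)
_, p_midday = stats.normaltest(midday_ret)
corr = np.corrcoef(daily_ret, midday_ret[1:])[0, 1]
print(f"p(daily normal)={p_daily:.3f}  p(midday normal)={p_midday:.3f}  corr={corr:.3f}")
```

On real $AAPL data the interesting outcome would be a low p-value for the daily series (fat tails) alongside a high one for the midday window.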
My parents have been to 52 countries. Here is the list:
Argentina 1973, 1998
Australia 1989, 1997, 2014
Bolivia 1973 (landed at the world's highest airport, La Paz)
Brazil 1972, 1973, 1997 (lived here)
Canada 1962, 1966, 2017
Chile 1973, 1998, 2012
China 1978, 1996
Egypt 1959, 1965, 1999
India (lived here)
Iran (lived here)
Mexico (lived here)
United States 1959-1963, 1965-1972, 1973-1977, 1979 to present (lives here)
Venezuela 1972 at Caracas airport on way to Brazil
Yemen 1959, 1965
The Netherlands, a.k.a. Holland
They have also been to 3 other places that are not UN member states:
For further reading on the subject, pick up a copy of “Such a Wonderful Journey” by Hoshi Aga. It is available on Amazon.com
Today during the OU PMBA icebreakers someone stated they have been to 34 different countries. I confidently said, "Yeah, I've been to at least 34." I decided to count them up today, with a map colored by the last year I was in each country.
I was wrong, I have only been to 32. Here is my list:
Hong Kong (1978)
Saint Martin (2012)
Sint Maarten (2012)
Green = 2010s
Light green = 2000s
Yellow = 1990s
Orange = 1980s
Red = 1970s
So sorry, classmate who has been to 34 (or did you say 36?): you are the real globetrotter!
Today I placed an order to buy TQQQ 76.67C, Jan 2019, and another order to sell the same call option. I did this to see what the true market bid/ask spreads are.
In the morning, Schwab was publishing the bid at $1.40 and the ask at $3.80. There had been no volume on this contract for several days, and the market had been up over the last few days/weeks.
I started placing orders to buy at $1.40, going up in 20-cent increments, checking whether the bid dropped again when I removed my order. The market maker's bids rose and stayed elevated up to $2.50; above that, my bid became the best bid while my order was in, and the bid dropped back to $2.50 when I removed it. I got filled at $3.50. I bought 10 contracts.
Then I sold 1 contract. Started at $3.50, got filled at $3.30.
So the real spread was $3.30-$3.50 (about 6%), and not the $1.40-$3.80 (46%) that the platform said at the beginning of the day.
Compare to QQQ options for the same date (Jan 2019). The same percentage out of the money (7% for QQQ, which corresponds to 21% for the 3x-leveraged TQQQ) is the 190 strike. Before starting, the bid was $3.28 and the ask $3.35 (2%). Using the same methodology, buying 32 contracts and selling 2, I got filled buying at $3.29 and selling at $3.28 (0.3%).
Above is the view before starting QQQ trade
Above is the view after completing both QQQ trades (buying and selling). Notice I am all of the volume. Started at 405, I bought 32 and sold 2, ending volume is 439.
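The percentages above are the spread taken as a fraction of the bid/ask midpoint (for the quoted TQQQ figure, the half-spread); a small helper makes the convention explicit:

```python
# Spread as a fraction of the bid/ask midpoint. The numbers are the
# quotes and fills from the experiment described above.

def spread_pct(bid, ask, half=False):
    """Return the (half-)spread divided by the midpoint."""
    mid = (bid + ask) / 2
    spread = ask - bid
    return (spread / 2 if half else spread) / mid

print(f"TQQQ quoted:    {spread_pct(1.40, 3.80, half=True):.0%}")  # ~46%
print(f"TQQQ effective: {spread_pct(3.30, 3.50):.0%}")             # ~6%
print(f"QQQ quoted:     {spread_pct(3.28, 3.35):.1%}")             # ~2.1%
print(f"QQQ effective:  {spread_pct(3.28, 3.29):.1%}")             # ~0.3%
```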
So, the final analysis is as follows: