Looking back, 2019 went well for me. I’m grateful for the opportunities I was given and the fun that I had. The main highlights for me were TA’ing my first two classes (Machine Learning for Healthcare and Foundations of Internet Policy), interning at Aledade for the summer, working with Dracut Public Schools on a few extracurricular opportunities (including Dracut DI), and finding a little bit of time to get some research done. On a personal note, I also found a great partner in DC for the summer, and have been seeing her since June. I read lots of really great books, including Dignity by Donna Hicks, Doing Good Better by William MacAskill, The Book of Why by Judea Pearl and Dana Mackenzie, The Brethren by Bob Woodward and Scott Armstrong, and Switch by Dan Heath and Chip Heath.
I was able to serve the MIT community in various roles: President of the CSAIL Student Social Committee, Discussion Chair of the Science Policy Initiative, and Treasurer of the Association of Student Activities. And I learned some really valuable lessons about power and responsibility: power can arise in all kinds of ways (seniority within a group, gender dynamics, instructor-student interactions, or otherwise), and when communication breaks down, the responsibility falls on the person in power to make sure everyone is on the same page.
But 2019 wasn’t just a series of successes. Part of that success came from privilege and luck, of course, but other parts came from taking chances. For the rest of this reflection post, I want to tell you two stories from this year: a success and a failure.
Working at Aledade
At the beginning of the year, I wanted to find a summer internship where I could help make the world a better place. There were some opportunities in research — which seemed interesting — but I wanted to do something more directly impactful, especially since I already do research as a grad student.
I heard about the company Aledade, which was working with independent primary care doctors to show that a new kind of business model could work in healthcare: value-based care. I learned more about their specific business model (Accountable Care Organizations) from their podcast, The ACO Show. Essentially, the organization works to align incentives so that doctors win when patients win. Corporate business decisions can get in the way of patient care, like when a hospital orders unnecessary tests & profits from providing those services. In value-based care, the insurer pays for outcomes, not volume, which encourages preventive care (which is much cheaper than reactive treatment) to make sure patients stay healthy and don’t slip through the cracks.
Aledade’s mantra is to make decisions that are “good for doctors, good for patients, and good for society.” It’s a really interesting concept: creating a market where doing the profitable thing means doing the right thing! I was interested in trying to help & to learn from them.
I submitted a resume online. A little while went by, and I had other offers I needed to get back to as quickly as I could, but I still hadn’t heard from Aledade. This is where I tried something new for me. The CEO and co-founder of the company, Farzad Mostashari, is a very friendly guy and is also active on Twitter. So I decided to tweet at him and ask if I could DM him my resume. I figured the worst case scenario was that he’d think it was weird and wouldn’t respond (which would result in the same not-having-the-job for me as if I hadn’t reached out at all). Fortunately for me, he got back to me, and we set up an interview.
I ended up spending my summer at Aledade, and it was a really great experience. The project was meaningful, and the people were great! I even got to help out with the podcast (I asked if there was anything I could do to help & they wanted to give me something to do, so I got to send a couple of cold-call emails to potential guests). Oftentimes, it doesn’t hurt to ask!
Applying for Tech Congress
Throughout my service in MIT organizations, I’ve learned that governing is hard: even when you want to do the most good for the most people, it’s hard to know what the right answers are. No one is an expert in everything, because there are so many topics and fields (e.g. tech, healthcare, education, labor, criminal justice, immigration, agriculture, foreign affairs, climate change, etc.), which is why the government relies on experts to help it understand what options to consider for making the best policies. This has been hurt by ideological efforts to actively undermine government, like when Newt Gingrich was able to dissolve the Office of Technology Assessment in the 1990s. Is anyone now surprised when Congress lacks technical expertise on issues like regulating Facebook and protecting consumers online?
My friend Andy helped write a blog post about the importance of technical experts serving a Civic “Tour of Duty” in government, both to gain experience/perspective for oneself and also to help the government make well-informed decisions. Justice Stephen Breyer believes that citizen participation is crucial to making a workable society; without everyone doing their part, it won’t work for the people.
That is why I was very excited to hear about the Tech Congress program! It was created a few years ago to place technical experts on Capitol Hill as staffers in Congress for a year. Additionally, the program emphasizes the importance of diversity, and offers a competitive stipend to attract a broader pool of applicants (so that serving doesn’t require independent wealth to support oneself). I think this program is a great idea!
I applied to Tech Congress. As part of the application process, I wrote a brief essay about a technical topic, along with recommendations for how Congress should handle it. I wrote about algorithmic bias, which is currently unregulated and deeply problematic. My initial attempt at the essay began with:
In her popular book Weapons of Math Destruction, Cathy O’Neil identifies key elements that can make algorithms dangerous. Opaque algorithms do not have to explain their decisions, which makes it difficult to identify or correct bias. Unaccountable algorithms do not have to answer for mistakes they make or correct for future decisions. Scale is what transforms a nuisance into a catastrophe.
If someone is denied a loan for being left-handed, that seems wrong but probably not worth correcting with policy; perhaps other people will be denied loans for being right-handed, and it could “even out.” But if things do not even out — if the bias is always directed one way, perhaps by race — then society should intervene. Technology at scale makes it easier for biases to correlate, often invisibly until it’s too late.
This usually boils down to an imbalanced dataset (e.g. teaching a model “What does a criminal look like?” on a dataset collected from over-policing some neighborhoods). But it can get very subtle (e.g. a healthcare risk model could be biased by socioeconomic status if it learns to identify expensive brand name drugs as top predictors but doesn’t recognize their low-cost generic equivalents). These things can be very hard to see coming beforehand.
Because my goal was to make the essay understandable to a non-technical audience, I decided to ask for help! I posted the first draft of my essay on Facebook, and asked friends to give me feedback so that I could make it better. In total, that post got 22 comments and a lot of really useful feedback, including eliminating jargon-y phrases and questioning assumptions that I took for granted (such as when people ask “How can math be biased?”). The feedback helped me write something that I was very proud of! I thought it was clear and informative.
“How can math be biased?” That is like asking how cars can crash; math is a tool, and it can be misused if you’re not careful. In the last few years, there have been many high-profile examples of biased AI, including Google Photos labeling black people as “gorillas” and Amazon’s automatic resume filter down-weighting resumes from women’s colleges. Responsible scientists have created conferences to study this issue: the most well-known one to date is called FAT/ML (Fairness, Accountability, and Transparency in Machine Learning).
AI learns to make decisions by finding patterns in data; if the dataset has biases, then the model likely will too. Some biases are “obvious” to see coming (e.g. teaching a model “What does a criminal look like?” on a dataset collected by over-policing some neighborhoods). However, other biases can be more subtle (e.g. a healthcare risk model could be biased by socioeconomic status if it learns to identify expensive brand name drugs as top predictors but doesn’t recognize their low-cost generic equivalents).
The scale of technology makes it easier for biases to affect thousands or even millions of people, often invisibly until it’s too late.
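The over-policing example above can be reduced to a toy calculation (all numbers are hypothetical, and the “model” is just a naive frequency estimate): two neighborhoods have the same true offense rate, but one is patrolled five times as heavily, so five times as many of its offenses end up in the training data. Any model that trusts the raw counts will “learn” that the over-policed neighborhood is five times riskier.

```python
# Toy sketch of sampling bias becoming model bias (hypothetical numbers).
# Both neighborhoods have the SAME true rate, but A is over-policed 5x,
# so 5x as many offenses there get recorded in the dataset.

true_rate = 0.02                    # identical underlying rate in A and B
population = 10_000                 # residents per neighborhood
patrol_factor = {"A": 5, "B": 1}    # A is patrolled 5x as heavily

# Recorded "arrests" scale with patrol intensity, not with the true rate.
recorded = {hood: int(true_rate * population * factor)
            for hood, factor in patrol_factor.items()}

# A naive model estimates risk as recorded arrests / population.
learned_risk = {hood: recorded[hood] / population for hood in recorded}

print(learned_risk)  # {'A': 0.1, 'B': 0.02} — A looks 5x "riskier"
```

The dataset, not the math, carries the bias: the estimator is doing exactly what it was told, faithfully reproducing the skew in how the data was collected.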
Unfortunately, I was not accepted to Tech Congress’s program this year. They received hundreds of applications (which was bad news for me, but good news for America, I guess).
It was, of course, disappointing to not be selected, but after some thought, I was okay.
- For one, I am still a PhD student at my dream school, doing really exciting research, and surrounded by great people. I have a lot to be thankful for already.
- Additionally, my application wasn’t what they were looking for right now, but I still had fun writing that essay. Getting crowdsourced feedback from my non-EECS friends was a really great opportunity to help me examine my own assumptions & get experience communicating important topics.
- But there was a third reason why I realized I was going to be okay: rejections mean that I’m reaching & trying to grow. If you don’t ask, then the answer is automatically no. Kim Liao argues that you should aim to get 100 rejections a year to keep yourself from letting failure stop you before you even start. This perspective helped me accept that not everything will pan out.
Overall, 2019 was a really good year for me. Here’s hoping 2020 will be anywhere near as good. I’m obviously very fortunate to now be at a great school like MIT, which will (deservedly or not) open some doors for me that wouldn’t be available if I were still at UMass Lowell. But regardless of where I’m at, I can always shoot for the goal of “better.” And part of that is putting myself out there and asking for what I want.